NAHASDA authorized two HUD-administered programs—IHBG and Title VI—that aim to provide affordable housing assistance to Native Americans living on or near Indian tribal lands or areas, including assistance for housing-related infrastructure. NAHASDA was first funded in fiscal year 1998 and was most recently reauthorized in 2008. Prior to NAHASDA, Native Americans received assistance for affordable housing under various programs aimed at providing housing assistance to low-income families. For example, several of the programs were authorized by the 1937 Act, including housing development and modernization grants, operating subsidies, and Section 8 rental assistance. These programs contained no specific provisions relating to the unique circumstances of Native Americans living on or near tribal lands, such as the federal government’s obligations to Native Americans through treaties and legislation, the relationships between sovereign governments (federal and tribal) with different laws, and the challenges of development on trust lands.

When NAHASDA was enacted in 1996, it consolidated the major programs that served Native Americans into a single block grant program (the IHBG program). NAHASDA also created the Title VI program. The IHBG program is a formula grant program that provides funding for affordable housing activities to Native American tribes or tribally designated housing entities (TDHE). The purpose of the Title VI program is to assist IHBG recipients that are unable to obtain financing for eligible affordable housing activities without a federal guarantee. Through the IHBG and Title VI programs, NAHASDA aims to accomplish the following statutory objectives:

- assist and promote affordable housing activities to develop, maintain, and operate affordable housing in safe and healthy environments on Indian reservations and in other Indian areas for occupancy by low-income Indian families;
- ensure better access to private mortgage markets for Indian tribes and their members and promote self-sufficiency of Indian tribes and their members;
- coordinate activities to provide housing for Indian tribes and their members with federal, state, and local activities to further economic and community development for Indian tribes and their members;
- plan for and integrate infrastructure resources with housing development for Indian tribes; and
- promote the development of private capital markets in Indian country for the benefit of Indian communities.

NAHASDA’s stated purpose is “to provide federal assistance for Indian tribes in a manner that recognizes the right of tribal self-governance, and for other purposes.” Under NAHASDA, tribes practice self-governance or self-determination through (1) negotiated rulemaking, (2) receiving funding directly rather than through Indian Housing Authorities, and (3) determining the details of their housing programs. Negotiated rulemaking is the process whereby an agency considering drafting a rule brings together representatives of that agency and affected parties for negotiations, consistent with the Negotiated Rulemaking Act of 1990. ONAP consults with tribes on various matters. One important element of these discussions is negotiated rulemaking, which allows Native Americans to participate in developing regulations, including those pertaining to the IHBG allocation formula. Before NAHASDA, HUD provided most of its assistance to Native Americans through Indian Housing Authorities in the same manner as public housing.
With the enactment of NAHASDA, tribes may choose to receive housing funds directly or they may designate a TDHE to administer the housing program on their behalf. Tribes and TDHEs can use IHBG funds for any eligible NAHASDA activity. Finally, under NAHASDA, tribes are able to determine (1) whom they serve (for example, giving preference to members of the participating tribe); (2) what types of eligible activities they offer; and (3) how they deliver their programs and projects.

Entities eligible for NAHASDA programs are federally recognized Indian tribes or their TDHEs and a limited number of state-recognized tribes that were funded under the 1937 Act. Families that are eligible for NAHASDA-funded assistance are low-income Indian families—defined as Indian families whose income does not exceed 80 percent of the area median income—residing on a reservation or in an Indian area. Further, NAHASDA requires that dwelling units be occupied, owned, leased, purchased, or constructed by low-income families and that the dwelling units remain affordable for the remaining useful life of the property. According to HUD’s 2009 IHBG formula allocation data, 282,111 American Indian and Alaska Native (AI/AN) households residing in NAHASDA formula areas were low-income.

Under NAHASDA, there are seven eligible activities:

1. Indian housing assistance, i.e., modernization or operating assistance for 1937 Act units;
2. housing development, including the acquisition, new construction, and reconstruction or rehabilitation of affordable housing;
3. housing services, including housing counseling and assistance to owners, tenants, and contractors involved in eligible housing activities;
4. housing management services for affordable housing, including loan processing, inspections, and tenant selection;
5. crime prevention and safety;
6. model activities that provide creative approaches to solving affordable housing problems; and
7. reserve accounts for administrative and planning activities related to affordable housing.

Under NAHASDA, grantees can use a range of approaches to provide homeownership and rental assistance. These include providing

- homeownership units for purchase or lease-purchase through new construction, acquisition (for example, purchase of existing units), rehabilitation, or acquisition and rehabilitation;
- rental units through new construction, acquisition (for example, purchase of existing units), rehabilitation, or acquisition and rehabilitation;
- rental units through conversion of existing structures or demolition and replacement of existing structures;
- homeownership assistance through acquisition (for example, downpayment or closing cost assistance to the homebuyer) or acquisition and rehabilitation; and
- tenant-based rental assistance (residents pay up to 30 percent of their adjusted income).

Grantees also can leverage NAHASDA funds by combining them with funds from other federal, state, local, and private sources to support eligible program activities. According to HUD, leveraging was not common under the 1937 Act.

Since the enactment of NAHASDA, several legislative and regulatory changes have occurred (see fig. 1). Those changes include the creation of the Native Hawaiian Housing Block Grant program in 2000 and the use of grant funds for housing-related community development activities. Funding for the IHBG program has remained steady. NAHASDA’s first appropriation in fiscal year 1998 was $592 million, and average funding was approximately $633 million between 1998 and 2009.
The highest level of funding was $691 million in 2002, and the lowest was $577 million in 1999. For fiscal year 2009, the program’s appropriation was $621 million. When adjusted for inflation, however, funding has generally decreased in constant dollars since NAHASDA’s enactment: the highest level of funding in constant dollars was $779 million in 1998, and the lowest was $621 million in 2009. Amounts cited above and in figure 1 are for the IHBG program and exclude NAHASDA set-asides such as technical assistance and Title VI funding.

ONAP, which administers NAHASDA, is part of HUD’s Office of Public and Indian Housing; it also administers the Indian Community Development Block Grant and Section 184 Indian Home Loan Guarantee programs. ONAP’s headquarters in Washington, D.C., and its Denver office direct the administration of the IHBG program on the national level, while six regional offices administer grants on the local level. Each regional office contains two divisions: Grants Management, which provides funding, technical assistance, and project support to grantees; and Grants Evaluation, which reviews grantees’ performance and initiates enforcement procedures when necessary.

NAHASDA changed HUD’s role and involvement in Native American housing. Prior to NAHASDA, HUD was more closely involved in the development of housing projects while also managing multiple programs that served Native Americans. Several of the programs were competitive: HUD reviewed and scored project proposals for those programs and awarded grants to the highest-ranked projects, in addition to distributing funds through the other noncompetitive (formula-based) programs. Under the competitive programs, HUD had greater influence over how funds were spent. Under NAHASDA, HUD plays a more administrative role in delivering housing benefits to Native Americans, providing funding through a single, tribally negotiated grant allocation formula. HUD’s role is (1) to provide grants, loan guarantees, and technical assistance to Indian tribes and Alaska Native villages for the development and operation of low-income housing in Indian areas; (2) to conduct oversight by ensuring that reporting requirements are met and by monitoring grant recipients onsite; and (3) to enforce remedies for noncompliant grant recipients. Prior to NAHASDA, HUD distributed grants from multiple programs to 217 Indian Housing Authorities. Under NAHASDA, in fiscal year 2008, 535 tribes benefited from more than 350 IHBG grants.

The amount of funding is based on an allocation formula that has two components: (1) the costs of operating and modernizing pre-NAHASDA HUD-funded units and (2) the need for providing affordable housing activities. Need is calculated from seven factors that include the grantee’s AI/AN population and the number of households within that population that fall into certain low-income categories (see fig. 2). Allocation amounts are adjusted by local area costs for construction and rents. Because population affects all need factors in the grant allocation, larger grantees (larger tribes operating their own housing programs or the TDHEs representing those tribes) receive larger grants. Additionally, grantees that own and operate pre-NAHASDA units receive both portions of the grant, while those without pre-NAHASDA units receive only the need portion.
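The two-component structure described above can be sketched in a few lines of code. The sketch below is illustrative only: the per-household dollar weights, the cost index, and the example figures are hypothetical placeholders, not values from HUD’s actual regulations, and the minimum-funding floor is simplified relative to the rule described below.

```python
# Illustrative sketch of the IHBG allocation structure: one component for
# operating and modernizing pre-NAHASDA units, plus a need component built
# from seven factors and adjusted for local costs. All weights and dollar
# figures are hypothetical placeholders, not HUD's actual values.

NEED_WEIGHTS = {  # hypothetical dollars per person/household per factor
    "aian_persons": 100,
    "households_below_30pct_ami": 400,
    "households_30_to_50pct_ami": 300,
    "households_50_to_80pct_ami": 200,
    "overcrowded_or_lacking_kitchen": 500,
    "cost_burden_over_50pct_income": 450,
    "housing_shortage": 350,
}

def ihbg_allocation(pre_nahasda_units, per_unit_subsidy,
                    need_counts, local_cost_index, need_floor=49_715):
    """Illustrative grant for one grantee (not HUD's actual formula)."""
    # Component 1: subsidy tied to pre-NAHASDA housing stock; grantees
    # without such units receive only the need component.
    stock = pre_nahasda_units * per_unit_subsidy
    # Component 2: need, summed over the seven factors and adjusted by
    # local construction and rent costs. Because the factors are counts
    # of people and households, larger grantees receive larger grants.
    need = sum(NEED_WEIGHTS[k] * need_counts.get(k, 0)
               for k in NEED_WEIGHTS) * local_cost_index
    # Simplified minimum-funding floor (the actual rule, described below,
    # distinguishes first-year from subsequent-year minimums).
    return stock + max(need, need_floor)

# A small grantee with no pre-NAHASDA stock falls to the floor amount.
print(ihbg_allocation(0, 3_000, {"aian_persons": 250,
                                 "households_below_30pct_ami": 20}, 1.1))
```

In this sketch, a grantee with no pre-NAHASDA units and a small need base receives only the floor amount, which mirrors the minimum-funding behavior discussed next.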
Since their inception, NAHASDA’s regulations have included a provision for minimum funding. Tribes whose annual need allocation was less than $50,000 in their first year of participation or less than $25,000 in subsequent years have received minimum funding in those amounts (the allocation is to the tribe, although the grantee might be a separate entity operating the housing program). The minimum funding allocation was revised for fiscal year 2008. In fiscal year 2008, individual grants ranged from the minimum to more than $70 million.

According to the data HUD uses annually for the IHBG formula allocation, each of the need factors increased from 1999 to 2009. For example, during this period, the number of AI/AN households living in overcrowded units and units lacking kitchen facilities increased by almost 10 percent, and the number of AI/AN households with housing expenses greater than 50 percent of their income increased by 43 percent.

In order to receive their grant distribution, grantees must submit an Indian Housing Plan (IHP) for each program year. In the IHP, grantees identify their affordable housing needs and describe the housing activities they plan to pursue to address those needs. At the end of the program year, grantees also must submit an Annual Performance Report (APR) that outlines actual accomplishments and, if federal fiscal year expenditures are $500,000 or more, the results of an independent audit. HUD is modifying its reporting process and plans to implement a combined IHP and APR with several revisions in fiscal year 2011. In addition to reporting, grantees must follow requirements for environmental reviews, procurement and labor standards, family eligibility, and accounting for program income.

As part of its oversight, HUD also conducts periodic onsite monitoring visits with grantees, using a risk-based approach to select which grantees it will visit each year. Risk factors include grant size and the amount of time since a grantee’s last visit. In fiscal year 2009, ONAP completed 60 onsite monitoring visits with NAHASDA grantees nationwide. Additionally, HUD has enforcement procedures for grantees found to be noncompliant with program requirements. Enforcement proceeds in three stages: (1) a letter of warning; (2) a notice of intent to impose remedies if noncompliance continues; and (3) the imposition of remedies, which includes the option of a hearing before a hearing officer. Enforcement can be discontinued at any time if the grantee corrects the violation prior to the imposition of remedies.
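HUD’s risk-based selection of grantees for onsite monitoring can be pictured as a simple scoring pass over the grantee list. The sketch below is a hypothetical illustration: the report names only grant size and time since the last visit as risk factors, and the weights and ranking approach here are assumptions, not HUD’s actual procedure.

```python
# Hypothetical sketch of risk-based selection for onsite monitoring.
# Grant size and years since the last visit are the risk factors named in
# this report; the weights and selection method are assumptions.
from dataclasses import dataclass

@dataclass
class Grantee:
    name: str
    grant_amount: float      # current-year IHBG grant, in dollars
    years_since_visit: int   # years since the last onsite visit

def risk_score(g: Grantee) -> float:
    # Larger grants and longer gaps since the last visit raise the score.
    return g.grant_amount / 1_000_000 + 0.5 * g.years_since_visit

def select_for_monitoring(grantees, visits=60):
    # ONAP completed 60 onsite visits nationwide in fiscal year 2009.
    return sorted(grantees, key=risk_score, reverse=True)[:visits]

pool = [Grantee("A", 70_000_000, 1), Grantee("B", 49_715, 6),
        Grantee("C", 250_000, 3)]
for g in select_for_monitoring(pool, visits=2):
    print(g.name, round(risk_score(g), 2))   # A 70.5, then B 3.05
```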
Native American tribes receiving NAHASDA grants have used the funds to develop new housing and to provide other types of housing assistance. However, even though NAHASDA is the primary federal housing program for Native Americans, small grantees, which receive lesser grants, have developed new housing with NAHASDA funds less often than grantees receiving larger grants. Many NAHASDA grantees, including those receiving lesser grants, reported providing tenant-based rental assistance, housing counseling, and downpayment assistance. Smaller grantees—those receiving less than $250,000 annually—often focus on providing those services. The APR that HUD currently uses to track the use of grant funds does not collect data on activities that are not unit-based (directly involving housing units built, acquired, or rehabilitated); however, HUD is revising its reporting to track more activities. Both HUD and grantees agreed that the opportunity to leverage grant funds to secure funds from other sources allows grantees to better address their affordable housing needs, although a lack of administrative capacity and other challenges limit additional funding opportunities for some grantees.

In recent years, Native American tribes and TDHEs receiving IHBG funds under NAHASDA have used the funds to build, acquire, and rehabilitate affordable housing units and to provide other types of housing assistance, such as tenant-based rental assistance, housing counseling, and downpayment assistance to eligible tribal members. During fiscal years 2003 through 2008, NAHASDA grantees collectively used IHBG funds to build 8,130 homeownership and 5,011 rental units; acquire 3,811 homeownership and 800 rental units; and rehabilitate 27,422 homeownership and 5,289 rental units (see fig. 3). HUD tracks the number of units that grantees build, acquire, and rehabilitate using IHBG funds each fiscal year through the grantees’ APR. Grantees use the APR, which serves as a self-assessment document, to report on their use of grant funds at the end of each program year. The APR complements the IHP, which grantees submit to HUD each program year to describe their affordable housing needs and how they will use grant funds to address those needs.

Between 2003 and 2008, grantees developed more homeownership units than rental units with IHBG funds. For example, the number of homeownership units built was more than one and one-half times the number of rental units built; the number of homeownership units acquired was almost five times the number of rental units acquired; and the number of homeownership units rehabilitated was more than five times the number of rental units rehabilitated. National American Indian Housing Council (NAIHC) board members who also serve as executive directors for tribal housing entities nationwide told us that while large-scale rental housing is often needed, such properties are very expensive to maintain over time. They said that, as a result, the associated costs provide a disincentive for tribes to develop this type of housing. For example, the housing director of one small grantee we visited showed us the tribe’s new senior apartment community, which was funded in part by NAHASDA (see fig. 4). During our visit, the director explained that he spends much of his own time carrying out maintenance services at the facility.

Among NAHASDA grantees, during fiscal years 2003 through 2008, the most common development activity was rehabilitation of existing units, particularly homeownership units. In each fiscal year, the number of homeownership units rehabilitated was substantially greater than the number of homeownership units built or acquired. Grantees can use IHBG funds to rehabilitate units owned by the tribe or TDHE or units owned by private entities that will be occupied by eligible members, or they can provide the funds to eligible homeowners for rehabilitation. In addition to these unit-based activities, many grantees, including several of those we interviewed, have used IHBG funds to provide tenant-based rental assistance, housing or financial literacy counseling, and downpayment assistance to eligible individuals and families. Based on the results of our survey of all grantees for 2008, in fiscal years 2008 and 2009, approximately 50 percent of grantees used IHBG funds to provide tenant-based rental assistance; more than 50 percent used IHBG funds to provide housing or financial literacy counseling; and approximately 30 percent used IHBG funds to provide downpayment assistance (see table 1).
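Two income rules recur across these activities: program eligibility for families at or below 80 percent of area median income, and tenant-based rental assistance under which residents pay up to 30 percent of adjusted income. A minimal sketch of the arithmetic follows; the area median income and family figures are hypothetical.

```python
# Minimal sketch of the two income rules cited in this report: eligibility
# at or below 80% of area median income (AMI), and a tenant payment of up
# to 30% of adjusted income under tenant-based rental assistance.

def is_low_income(family_income: float, area_median_income: float) -> bool:
    """Low-income Indian family: income not exceeding 80 percent of AMI."""
    return family_income <= 0.80 * area_median_income

def max_tenant_payment(adjusted_annual_income: float) -> float:
    """Maximum monthly tenant payment: 30 percent of adjusted income."""
    return 0.30 * adjusted_annual_income / 12

ami = 60_000      # hypothetical area median income
income = 40_000   # hypothetical family income (treated as adjusted income)
print(is_low_income(income, ami))   # True: 40,000 <= 48,000
print(max_tenant_payment(income))   # 1000.0 dollars per month
```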
Grantees that have housing stock developed with 1937 Act program funds, or pre-NAHASDA housing stock, also can use IHBG funds to provide modernization and operating assistance for those housing units. In fiscal year 2008, HUD allocated IHBG funds to support modernization or operation of 57,523 pre-NAHASDA units that grantees collectively maintained in their housing inventories. HUD also tracks modernization or operation of pre-NAHASDA units in the APR.

HUD and tribes we interviewed and surveyed reported that small grantees, which receive lesser grants, face particular challenges in building new housing units with IHBG funds. The minimum grant amount in fiscal year 2008 was $48,660 ($49,715 for fiscal year 2009). For the purposes of our review, we generally considered annual grants of less than $250,000 to be lesser grants and the grantees receiving those grants to be small grantees. In fiscal year 2008, 102 out of 359 grantees received grants of less than $250,000 to maintain existing housing, develop new housing, and pursue other eligible activities under NAHASDA (see fig. 5).

Out of 227 grantees responding to a survey question on whether they had built new housing units using any IHBG funds since participating in NAHASDA, 159 (70 percent) indicated that they had built at least one unit (see table 2). Among the 22 small grantees in this group (those receiving less than $250,000 in fiscal year 2008), the average number of units built was quite small: the three grantees receiving less than $50,000 built an average of four units over the life of their participation in the program, compared with an average of 12 units for the 19 grantees that received between $50,000 and $250,000 in 2008. The larger grantees have built the majority of units. The 22 grantees that responded to this survey question and received $1 million or more in fiscal year 2008 built, on average, almost 450 housing units each.

Among the 12 grantees we interviewed, 4 received less than $250,000 in 2008, and only 1 of the 4 received more than that amount in any of its prior years in the program. Of those 4 grantees, only 1 had developed new housing with IHBG funds. During our visit, the housing director showed us a 10-home development that was completed in 2006 with IHBG and other funding, including funding from another HUD program. Although the grantee completed its first IHBG-funded development in 2006, it has participated in NAHASDA since the program’s inception in fiscal year 1998. One other small grantee we interviewed also had developed new housing, but not with IHBG funds. During our visit with the second grantee, the tribal administrator explained that the grantee’s newest units were funded with 1937 Act funds it received from HUD just before NAHASDA’s implementation. That grantee also has participated in NAHASDA since its inception.

Development of new housing can be difficult for smaller grantees receiving lesser grants. ONAP officials and several grantees we interviewed stated that new housing development with lesser grants or minimum funding can take place only if the funds are accumulated over several years or if development is done in phases or on a smaller scale (see fig. 6). In many cases, new development is possible for those grantees only if IHBG funds are leveraged (combined with funds from other sources), a process that can involve additional challenges, as we discuss later in this report.
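The arithmetic behind accumulating funds is straightforward. The sketch below pairs the fiscal year 2009 minimum grant with a hypothetical per-unit development cost (actual costs vary widely by area) to show why minimum-grant recipients build in phases or over several years.

```python
# Years a minimum-grant recipient would need to accumulate IHBG funds to
# cover one new unit, ignoring all other uses of the grant. The per-unit
# cost is a hypothetical placeholder; the grant is the FY 2009 minimum.
minimum_grant = 49_715     # fiscal year 2009 minimum IHBG allocation
unit_cost = 150_000        # hypothetical total development cost per unit

years_needed = -(-unit_cost // minimum_grant)   # ceiling division
print(years_needed)   # 4 years of accumulation for a single unit
```

Leveraging, phased construction, or smaller-scale designs shorten that horizon, which is consistent with the strategies survey respondents suggested, as discussed next.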
In our survey, we asked all respondents to suggest best practices or effective strategies for grantees receiving less than $250,000 in annual IHBG funds. Consistent with what ONAP officials and the grantees we interviewed told us, many survey respondents suggested that grantees receiving lesser grants pursue phased housing development, leveraging, and small-scale development. Additionally, several respondents suggested that such grantees take the following actions:

- pool their resources (grant funds, staff, and expertise) with other small grantees, such as under an umbrella TDHE or informally, and rotate new development among grantees;
- minimize administrative expenses (for example, by limiting staff to those with experience in housing programs or grant writing) or work with consultants;
- focus on small projects critical to the community, such as housing rehabilitation or home maintenance, or on providing only rental assistance and downpayment assistance; and
- network with and seek technical assistance from other tribes, agency officials, or NAIHC.

Further details on survey respondents’ suggestions for grantees receiving less than $250,000 annually are discussed in Appendix II.

The APR that HUD uses to collect data on grantees’ use of IHBG funds currently tracks only unit-based activities and therefore does not capture several significant activities. Grantees report on the number of units they build, acquire, and rehabilitate as well as on the number of pre-NAHASDA units they operate and modernize using IHBG funds. However, they are not required to report on the number of individuals or households that receive tenant-based rental assistance, housing counseling, or downpayment assistance. Grantees can include this type of information as narrative in the APR, but HUD does not track it. As a result, it is not included in HUD’s annual report to Congress on program accomplishments for NAHASDA. Because HUD does not track and report IHBG-funded activities that are not unit-based, smaller grantees that receive lesser grants and that either have not developed new housing or have done so only over several years have not been able to adequately demonstrate their use of IHBG funds. Those grantees often focus on providing members with services such as rental and downpayment assistance.

In addition to limitations in the information it captures, the current APR is a multiyear report that requires grantees to report on multiple fiscal years when prior-year funds remain unspent. At present, a grant remains open across fiscal years until the funds from that grant are fully expended. Both ONAP officials and officials for two grantees we interviewed stated that multiyear reporting can be confusing and can reduce the accuracy of the data being reported. For example, ONAP headquarters officials explained that some grants used to fund construction have remained open for several years concurrently. According to the officials, this complicates grantees’ reporting on use of funds as well as HUD’s administrative process. ONAP has begun revising the APR and plans to implement the revised format in fiscal year 2011. The revised APR is expected to be a single-year report, which should eliminate multiyear reporting inconsistencies.
Additionally, HUD’s planned revisions should allow grantees to report on activities beyond housing units built, acquired, and rehabilitated and to demonstrate broader impact relative to those units—for example, the number of students or elderly households assisted, or the number of individuals moved into housing from homelessness or substandard housing conditions. The new format also will expand general reporting categories, because HUD plans to track tenant-based rental assistance, downpayment and closing-cost assistance, and homebuyer lending subsidies. Finally, the IHP and APR will be a combined document, which HUD believes will further simplify reporting. Measures that address a full range of activities should help tribes receiving lesser grants to better demonstrate how and to what extent NAHASDA funds are helping them meet their affordable housing needs. Additionally, a more complete set of program measures should help HUD and Congress better assess the extent of NAHASDA’s impact on low-income Native Americans and whether the program is a significant improvement over the programs it replaced.

HUD and tribes participating in NAHASDA agreed that the opportunity to leverage IHBG funds with funds from other sources, a key component of NAHASDA, allows grantees to better address their affordable housing needs. HUD officials told us that the opportunity to leverage IHBG funds to support affordable housing activities is a significant benefit for tribes participating in NAHASDA and a positive change for Native American housing, since leveraging was not common under 1937 Act programs. Moreover, an official in one regional ONAP office described leveraging as a core concept of NAHASDA. While leveraging HUD funds was allowed before NAHASDA, officials in two regions explained that leveraging was still a relatively new concept for HUD and tribes because the public housing structure under which tribes previously received assistance did not encourage it. NAHASDA requires that grantees explain in their annual IHP how grant funds will allow them to leverage additional resources. As part of self-determination, grantees prioritize how they use grant funds to address a variety of housing needs that qualify as eligible NAHASDA activities.

With regard to leveraging IHBG funds, ONAP officials in one region stated that none of the grantees in that region was so successful or received so large a grant that it did not need additional support to address its housing needs. In several regional offices, officials told us that they provide resources to assist grantees with leveraging. Some offices had a staff member dedicated to helping grantees identify leveraging opportunities and providing them with technical assistance. However, one regional official noted that regional staff members do not assist grantees with completing applications for funding. Half of the grantees we interviewed and 48 percent (100/209) of survey respondents answering a question on the role leveraging plays in their ability to fund affordable housing development and activities said that it plays a great or very great role. Many of the grantees participating in leveraging activities explained that IHBG funding alone is insufficient to adequately address their communities’ affordable housing needs. The housing director of one large grantee we visited told us that, instead of using IHBG funds independently to support housing activities, he focused on leveraging the funds to obtain additional support.
He said he viewed NAHASDA as opening up the opportunity for tribes to use as many resources as possible to fund their housing needs. The housing director of a second large grantee we visited explained that nearly all of the grantee’s 70-plus IHBG-funded units had been built with a combination of funds from the IHBG program and other sources. A third housing director provided us with records showing that, since NAHASDA’s inception in 1998, the grantee had leveraged its IHBG funds to secure additional funding for housing development at an almost one-to-one ratio. Survey respondents leveraging their IHBG funds provided similar comments. They stated that leveraging is necessary to fully fund a development project, to pursue both development and rehabilitation, to build multiple housing units, or to generally address their communities’ affordable housing needs. Some respondents offered examples of how combined funding from the IHBG program and other sources allowed them to address specific housing needs, including funding a new housing rehabilitation program for members, purchasing units to address overcrowding and homelessness, and providing homebuyer assistance.

Based on our interviews with NAHASDA grantees and on survey responses grantees provided, we found that grantees generally use the Indian Community Development Block Grant (ICDBG) program and the Section 184 Indian Home Loan Guarantee program in combination with the IHBG program to fund affordable housing activities. Both the ICDBG and Section 184 programs are HUD programs. Some grantees also use programs provided by the U.S. Department of Agriculture (USDA) Rural Development, and some larger grantees use the Low-Income Housing Tax Credit program (see table 3).

Of the 12 grantees we interviewed, 8 told us that either the TDHE or the tribe itself had received competitively awarded grants through the ICDBG program and used those grants to fund a variety of projects. In addition to housing development, the projects included water and sewer systems; community buildings, such as a foster care facility and a public safety building; flood protection for homes; and small business incubators (see fig. 7). Among survey respondents, participation in the ICDBG program in individual ONAP regions was between 60 and 95 percent, with the highest participation rate in the Northern Plains region, followed by the Northwest region. Five of the 12 grantees we interviewed said they had used the Section 184 loan program, and two others said they would consider the program in future leveraging efforts. Among survey respondents, participation in the Section 184 loan program in individual ONAP regions was more varied, from 39 to 85 percent, with the highest participation rate in the Northern Plains region, followed closely by the Northwest region.

Although USDA’s Rural Development provides low-income housing assistance through several programs for which Native American tribes or their members are eligible, few NAHASDA grantees use the three USDA programs we asked about in our survey. Several grantees we interviewed said they participated in at least one USDA program in combination with the IHBG program; however, overall numbers from our survey show that only 10 percent of grantees reported using USDA’s Section 515 program (Rural Rental Housing) and 18 percent reported using the Section 502 program (Single-Family Housing).
USDA Rural Development officials told us they were surprised the figures were so low, especially given that Native American areas (along with the Mississippi Delta, Appalachia, and the Colonias on the Mexican border) are among the primary areas they target in order to serve some of the poorest and worst-housed groups in the nation. They also said that they have set-asides under the Section 515 program for new construction on tribal lands. Several of the grantees we interviewed told us that they had little or no interaction with USDA local field office officials, and when they did, it was usually at their tribe’s initiative. For example, one small grantee’s housing director said that he was aware that some USDA programs might benefit his tribe, but that he had not had any contact with officials at the local USDA office, even though the office is about one hour away. USDA officials also told us that they are developing a more targeted outreach strategy that identifies tribal housing authorities as critical intermediaries and partners in raising the visibility of USDA Rural Development’s programs in Indian country.

Some larger grantees also use Low-Income Housing Tax Credit (LIHTC) programs in combination with the IHBG program to fund affordable housing activities (see figs. 8 and 9). For each ONAP region, we interviewed two grantees whose populations and grant sizes varied widely; accordingly, we refer to the “six smaller” and “six larger” grantees among the 12 we interviewed. Five of the six larger grantees we interviewed said that they currently use or had used LIHTC programs. Similarly, 96 percent of survey respondents who said they participate in LIHTC programs received grants of at least $250,000 in fiscal year 2008.

According to HUD officials and the grantees we interviewed, some grantees are limited in their ability to seek additional funds, including those that (1) have limited administrative resources, which prevents them from participating in a variety of programs; (2) are too small to qualify for LIHTC programs, which may require the development of a minimum number of housing units to serve a significant proportion of the low-income population; and (3) undergo frequent administrative turnover. Additionally, though most of the grantees agreed that leveraging their IHBG funds by combining them with funds from other sources is beneficial, most grantees participating in multiple programs were larger grantees. And, among the 48 percent of survey respondents indicating that leveraging plays a great or very great role in their ability to fund affordable housing and related activities, 81 percent received at least $250,000 in IHBG funds in 2008. Of the respondents indicating that leveraging plays some, little, or no role in funding affordable housing, 66 percent reported having 5 or fewer persons on their housing staff (staff that manage, administer, and prepare grants or reports for the grantee’s housing program).

All six of the smaller grantees we interviewed said they lacked some aspect of administrative capacity (such as housing staff resources, expertise, and time), which limits or prevents their participation in other programs or their ability to compete for non-NAHASDA funds. Three of the six smaller grantees had not applied for funding from other federal agencies, and three had not applied for or had experienced challenges applying to the ICDBG program, though ICDBG participation is high among grantees overall.
Two of the six grantees also had not applied for the competitive portion of NAHASDA stimulus funds, because of time constraints or because they lacked a grant writer to prepare a competitive proposal. Grantees we interviewed and those responding to our survey also reported that burdensome administrative requirements affect their ability to participate in NAHASDA and other housing programs (see fig. 10). The grants planner for one small grantee we visited said his tribe declined the IHBG grant one year because it determined the grant amount would not justify the effort and cost of participating in the program. The housing director of another small grantee explained that while leveraging offers the ability to stretch dollars, without enough funding to pay for the necessary staff resources, it is very difficult to take on the extra burden of making different funding sources work together.

As noted, five of the six larger grantees we interviewed indicated they had participated in LIHTC programs. However, none of the smaller grantees we interviewed indicated that they had participated in LIHTC programs, and two explained that such programs would require more resources than were available to them. For example, the housing director of one of the smaller grantees said that they had considered participating in a LIHTC program but found they could not undertake the required scale of development. HUD data support this assessment: according to HUD, of 16,754 LIHTC projects placed into service between 1995 and 2006, only about 17 percent had 20 or fewer units.

Limited resources mainly affect smaller grantees that receive lesser grants, but grantees of any size may experience frequent turnover in housing and management staff that affects the continuity of housing plans and activities. One housing director explained that frequent turnover in housing management and staff can contribute to a lack of knowledge about implementing housing programs and a lack of consistency in the grantee’s housing plan.

Incompatibility among different funding programs was cited by 68 percent of survey respondents as a challenge to leveraging. Some other funding programs may be incompatible with the IHBG program because of conflicting requirements, such as requirements for eligible beneficiaries. In addition, 68 percent of survey respondents identified lack of coordination between agencies providing funding as a leveraging challenge. Several grantees we interviewed reported that a lack of coordination between HUD and other funding agencies limits their efforts to combine IHBG funds with funds from those other agencies. For example, the grantees explained that, like HUD, various agencies require grantees to complete environmental reviews when they receive funds to develop housing and related infrastructure. However, they said that HUD generally does not accept environmental reviews that meet other agencies’ requirements, making it necessary for them to have multiple reviews carried out.

Officials from NAIHC and three grantees we interviewed also reported that limited interest from financial institutions is an ongoing challenge for tribal entities in obtaining financing for housing development. They said that many banks are reluctant to do business with tribes because of cumbersome procedures or lack of experience. For example, they explained that the Bureau of Indian Affairs’ (BIA) process for issuing land title or trust status reports when a mortgage is made on trust lands is lengthy and inefficient.
Several grantees explained that BIA’s process for issuing this paperwork can take months or years, making such transactions impractical for lenders and difficult for members pursuing homeownership or receiving homeownership assistance. In 1998 we reported that from 1992 through 1996, lenders made only 91 conventional home purchase loans to Native Americans on trust lands (80 of which went to members of only two tribes), largely because lenders had a limited understanding of land ownership, jurisdiction, and legal issues pertaining to Native American trust lands. A more recent source notes that while federal programs and other efforts subsequently encouraged greater lending to Native Americans on trust lands, challenges remain.

Both the Section 184 and NAHASDA’s Title VI loan guarantee programs aim to provide an incentive for private lenders to make housing loans to tribes and their members. In comparison with the Section 184 program, participation in Title VI was low among grantees we interviewed and those responding to our survey. Two of the 12 grantees we interviewed and 17 percent (33/199) of survey respondents said they had participated in Title VI. Data we received from ONAP on both loan programs support what we found on Section 184 and Title VI participation. In fiscal year 2008, HUD provided guarantees for 1,577 Section 184 loans totaling $274.8 million, compared with only 8 Title VI loans totaling $14.2 million. And, in fiscal year 2009, HUD provided guarantees for 2,401 Section 184 loans totaling $395.4 million, compared with only 6 Title VI loans totaling $12.8 million. However, Title VI is a newer loan program, and it offers lenders a 95 percent federal guarantee, compared with the Section 184 program’s 100 percent guarantee. Some individual grantees also have made efforts to facilitate lending in their communities. For example, one grantee we met with had an agreement with BIA to do title permitting onsite in order to expedite the title process for Section 184 program loans, and two other grantees we interviewed had established their own banks.

Grantees responding to our survey and those we interviewed generally viewed NAHASDA as an effective affordable housing program and as an improvement over the programs it replaced. A primary reason was that NAHASDA emphasizes tribal self-determination, which is the right to use grant funds with minimal restrictions to meet tribes’ self-identified housing needs. Survey respondents reported that they view NAHASDA as most effective at providing homeownership opportunities and improving housing conditions for low-income Native Americans. However, some grantees we spoke with and some responding to our survey had specific concerns about NAHASDA, such as problems with meeting what they considered to be onerous regulatory requirements and perceived inequities in the grant allocation formula. Negotiated rulemaking between HUD and tribes participating in NAHASDA provides the tribes with an opportunity to address their concerns with NAHASDA’s regulations, including concerns pertaining to the grant allocation formula.

Based on our survey of and interviews with NAHASDA grantees, most grantees view NAHASDA as an effective low-income housing program, and a primary reason is NAHASDA’s recognition of tribal self-determination. Of the 223 survey respondents that provided views on NAHASDA’s effectiveness, almost 90 percent (200 grantees) reported that the program has had a positive effect in helping them to meet their affordable housing needs (see table 4).
Of those 200 grantees, more than half (110 grantees) reported that NAHASDA has had a very positive effect. Similarly, 8 of the 12 grantees we interviewed told us that NAHASDA has simplified the process of providing housing benefits for their tribes. However, 5 of those 8 grantees also mentioned some cumbersome aspects of the program, such as the reporting requirements.

Among survey respondents, there were some minor differences in the results across grantees receiving grants of various sizes. Of the 47 survey respondents that provided views on NAHASDA’s effectiveness and that received a grant of less than $250,000 in fiscal year 2008, 46 had a generally positive or very positive view of NAHASDA’s effectiveness (see table 5). In contrast with this consistently positive view among respondents that received lesser grants, eight grantees that received more than $250,000 reported negative views of NAHASDA’s effectiveness. Our analysis of survey respondents’ written explanations shows that some grantees preferred the 1937 Act housing programs because they were able to compete successfully for funds. Officials we spoke with at NAIHC said that larger tribes with sophisticated housing departments were more likely to view NAHASDA as less effective than the 1937 Act programs it replaced because they may receive less funding under the block grant formula.

We asked survey respondents to provide explanations to support their overall views of NAHASDA, and most that viewed NAHASDA positively wrote that the program helps them meet their overall affordable housing needs. Of those that provided more specific reasons for NAHASDA’s positive impact, most mentioned that the program has been effective because it allows the grantee to

- target specific housing needs of their tribe, such as increasing energy efficiency in affordable units (56 responses);
- exercise self-determination and program flexibility (19 responses); and
- leverage their NAHASDA grant with funding from other sources (10 responses).

We also surveyed grantees on the extent to which they thought NAHASDA was effective at meeting certain programmatic goals. We found that survey respondents viewed NAHASDA as very effective at improving housing conditions and increasing access to affordable rental housing and homeownership, but less effective at developing housing finance mechanisms and increasing economic development on Indian lands (see fig. 11). One survey respondent operating primarily in an urban area wrote that under NAHASDA, they have been able to develop mixed-use housing in their region and to supplement the housing they provide with social support services. Similarly, another survey respondent wrote that under NAHASDA, they have been able to maintain existing housing units and provide financial literacy training to the community as well as counseling to aid prospective homeowners in the tribe.

We asked survey respondents to compare their experiences under NAHASDA with their experiences under the 1937 Act housing programs it replaced in 1998. Of the 138 respondents that checked that they had participated in the 1937 Act programs, 102 grantees—or about 74 percent—reported that NAHASDA is an improvement over the programs it replaced. Of those that viewed NAHASDA as an improvement, about half—53 out of 102—checked that NAHASDA was much better.
Only 17 reported that NAHASDA was worse or much worse than the 1937 Act programs it replaced (see table 6). The grantees we interviewed also viewed NAHASDA as an improvement over 1937 Act housing programs, and all of them identified self-determination as the main reason. For example, one grantee we interviewed said that because of the flexibility afforded by NAHASDA, their tribe was able to buy housing units in urban areas rather than on their reservation and rent the units to low-income members. This grantee explained that they intended to provide housing in locations that had more opportunities for employment so that program beneficiaries could become increasingly self-sufficient.

Self-determination was also the most common reason that survey respondents favored NAHASDA over the previous housing programs. The 102 survey respondents reporting that NAHASDA was better or much better provided 85 written reasons for their answers. We analyzed their responses and found that the largest group—65 responses—said that NAHASDA was an improvement because it provided for tribal self-determination. For example, one survey respondent wrote that each tribe has unique housing needs influenced by its specific culture, economic conditions, and physical environment and that NAHASDA has been a drastic improvement because it allows tribes the flexibility to meet those needs. Another respondent wrote that although funding levels have effectively dropped under NAHASDA, the program has allowed tribes to be more flexible with how they spend the grant, allowing for a more effective use of the limited funding.

For those survey respondents that provided explanations of how NAHASDA was worse or much worse than the programs it replaced, the main reasons provided were that

- NAHASDA provides less funding than previous programs (six responses);
- NAHASDA is a block grant, which does not reward those tribes that have the capacity to apply for and win competitive housing grants (five responses); and
- NAHASDA has too many administrative and regulatory requirements (five responses).

Respondents to our survey provided a total of 133 distinct recommendations on how to improve NAHASDA, and most of the respondents wrote that certain administrative rules and obligations were too onerous (see table 7). Most commonly, those respondents cited mandatory environmental reviews as overly cumbersome. Others noted certain administrative restrictions on their funds; for example, some said that the cap on the portion of the grant they can use for administrative expenses was arbitrary and limited their administrative capacity. Specifically, one respondent wrote that determining the amount spent on administrative costs should be up to each tribe so that tribes can manage their own programs as they see fit.

Grantees we interviewed identified limitations in the grant allocation formula as a particular challenge with the IHBG program. They told us that they believe the allocation formula is either based on inaccurate data (for example, enrollment numbers or area construction costs) or does not consider certain key factors, such as a lack of land on which to develop housing. In calculating a grantee’s annual allocation, the formula considers such factors as fair market rent and total development cost for a grantee’s local area. However, the formula does not take into account whether a tribe has buildable land to use for housing development, either in the calculation of total development cost or as a separate factor.
The housing director of one small grantee we visited that did not own trust land reported that they had to allocate grant funds first to purchase land for any new development. Similarly, of the 201 survey respondents that provided an opinion specifically on the grant allocation formula, 159 grantees—or nearly 80 percent—said the formula could be improved. And, of those survey respondents that identified specific problems with using NAHASDA, 46 percent said that the formula is based on inaccurate data, and 64 percent said that it does not consider certain factors, such as properly accounting for construction costs or the cost of purchasing land for development. Survey respondents provided a total of 159 suggestions on how the IHBG allocation formula could be improved, and most recommended that the demographic data used in determining the need portion of the grant be updated (see table 8). For example, multiple survey respondents said that U.S. Census figures do not accurately reflect the population for which they provide housing services.

Two of the grantees we interviewed and some survey respondents also said that the IHBG operation and maintenance subsidy that currently supports 1937 Act units should extend to NAHASDA-funded units. During our visit with one of these grantees, the housing director explained that because their NAHASDA units serve low-income members, the tribe would likely need assistance with upkeep to ensure that the units maintain their value. In addition, several of the 12 grantees we interviewed stated that the minimum IHBG grant of around $50,000 per year is insufficient for those who receive it to pursue any significant housing activities, especially new housing development. Some survey respondents provided similar comments about the minimum grant amount. Moreover, ONAP officials in all six regions stated that grantees receiving lesser grants, including the minimum, are limited in their ability to address their affordable housing needs. However, ONAP headquarters officials explained that tribes participate in developing regulations for the grant allocation formula, including establishing a minimum grant amount, through negotiated rulemaking with HUD. They informed us that the negotiated rulemaking committee will be convened in March 2010 to develop regulations that implement October 2008 statutory changes to NAHASDA. They also confirmed that the May 2012 committee agenda will include reviewing the allocation formula.

Of the 232 NAHASDA grantees responding to our survey, 70 percent viewed investment in housing-related infrastructure—such as connecting a home to a local water supply—as a great housing need, but slightly less than half indicated that they use IHBG funds to develop infrastructure (see fig. 12). Additionally, we found that HUD does not collect grantees’ infrastructure plans or measure their investments in infrastructure for affordable homes funded by the IHBG program. According to data from the Department of Health and Human Services’ Indian Health Service (IHS), there is an acute need for sanitation-related infrastructure for Indian housing in general, and our survey indicates a significant need for adequate sanitation infrastructure for homes funded by HUD programs. Some IHS officials also told us that they have found instances where HUD homes were built with insufficient planning, taxing existing water supplies and wastewater systems.
Although HUD does not collect information on the sanitation infrastructure needs of HUD homes, IHS does collect such information and, according to IHS officials, can make it available to HUD under a 2007 memorandum of understanding between the agencies.

Of the 232 NAHASDA grantees that responded to our survey, 85 percent (198 grantees) reported that developing infrastructure, such as providing homes with access to drinking water, was a continuing need for their tribe. And, 70 percent (164 grantees) said that developing infrastructure was a great or very great need. Additionally, grantees that responded to our survey ranked adding or updating housing-related infrastructure fourth out of 13 greatest continuing housing needs, after constructing new units, rehabilitating existing units, and operating and maintaining units (see fig. 13). Despite this demonstrated need for infrastructure development, slightly less than half of the survey respondents—98 of the 222 who responded to this question—reported that they actually use their IHBG grant for infrastructure development (see fig. 14). Some of the grantees that responded to our survey explained that they have a pronounced need for infrastructure development and that they often do not receive enough funding to address infrastructure with the IHBG program. Of those grantees that we spoke with, smaller grantees were less likely to use the IHBG program for infrastructure, either because they do not receive enough funding to address their needs or because they provide assistance to persons living in units that are on a city- or county-funded infrastructure system. Indeed, the results of our survey show that, of the grantees receiving a large grant, twice as many used IHBG funds for infrastructure development as those receiving a small grant.

Officials in four of the six regional ONAP offices, as well as half of the IHS field directors that we spoke with, said that because of the need for affordable housing for most tribes, tribal housing departments may be providing housing units without adequate infrastructure to support those units. Although the grantees we interviewed did not say that they built homes with inadequate infrastructure, six said that they use or intend to use other programs to help meet their infrastructure needs. NAHASDA emphasizes tribal self-determination by providing a noncompetitive block grant to tribes, but survey respondents that provided views on problems with the program said that the greatest problem—out of a list of six common problems—is a lack of funding specifically for housing-related infrastructure (see fig. 15).

Housing-related infrastructure development is an eligible affordable housing activity under NAHASDA. ONAP officials, especially those in the regional offices, said that prior to NAHASDA, they worked with IHS to identify all infrastructure needs for housing developments funded by HUD. Under NAHASDA, however, tribes have the flexibility to determine the uses of their funding within the scope of eligible activities, including the extent to which they want to use the IHBG program for infrastructure development. Instead of using the IHBG program, survey respondents reported that they were more likely to fund their infrastructure development using other funding sources (see table 9).
Although some tribes rely on other programs to help fund their infrastructure development, our interviews with grantees and findings from a 2003 NAIHC study indicate that non-IHBG programs for infrastructure development—such as programs administered by the Environmental Protection Agency, HUD, and USDA—have characteristics that present challenges to some tribes. For example, while the study found that the ICDBG program was a sought-after source of infrastructure funding for IHBG grantees, it was available only to tribes with the administrative capacity to meet the application requirements. Furthermore, our analysis found that the ICDBG program is consistently funded at about one-tenth the level of the IHBG program. In addition, because IHS is statutorily prohibited from funding sanitation facility construction projects for IHBG-funded units, some survey respondents and grantees we spoke with said that they were disappointed that IHS would not provide sanitation infrastructure support without reimbursement from the tribe.

HUD’s primary tools for monitoring grantees’ uses of IHBG funds are the IHP and the APR. In our review of the IHP, which describes grantees’ plans for the coming year, we found that it does not provide a means for HUD to systematically collect information from grantees on their housing-related infrastructure needs and their plans to address those needs, including infrastructure for new housing construction. The IHP collects information on some of grantees’ estimated housing needs, such as the number of families who need housing because they are living in overcrowded conditions. The IHP also collects information on grantees’ plans to address those stated needs, such as by constructing new housing to alleviate the overcrowded conditions. However, although it covers many important housing-related activities, the IHP does not require grantees to describe how they intend to address any existing infrastructure deficiencies, such as a home with inadequate access to potable water. In addition, the IHP does not require grantees to describe what infrastructure development a new construction project will require and how that infrastructure will be funded.

The APR, which describes grantees’ accomplishments during the past year, provides grantees the opportunity to report how they are carrying out the plans and addressing the housing needs outlined in the IHP. In our review of the APR, we found that because it is based on activities described in the IHP, it also lacks an assessment of how a tribe is meeting the infrastructure needs of its low-income members. HUD officials we spoke with confirmed that the APR does not track grantees’ infrastructure investments. Although the IHP and the APR allow grantees to describe any needs and plans—including those for infrastructure development—in a narrative format, we learned that those narratives are not included in ONAP’s reporting system, which means that they are not used in HUD’s overall reports to Congress. Further, one grantee that we spoke with said that they do not believe HUD officials actually review or track the narratives, so they do not take the time to list activities that are not measured, such as infrastructure-related needs and plans. As previously noted, HUD is planning to combine the IHP and the APR by fiscal year 2011.
We reviewed a draft of this document and found that while it does a better job of tracking grantees' uses of NAHASDA funds, from identifying affordable housing needs to assessing the impact of completed housing development, it does not systematically assess grantees' needs, plans, or investments related to infrastructure development. Because grantees are not required to report on or to quantify their need for and investments in infrastructure, HUD may lack the information necessary to assess the extent to which NAHASDA is meeting its statutory objectives of improving the health and safety of low-income Native Americans and integrating infrastructure resources to support housing development. Of the 98 survey respondents that reported using IHBG funds for infrastructure, the majority used the IHBG program to provide access to clean water and to provide for wastewater removal (see fig. 16). Similarly, grantees that we spoke with, and some responding to our survey, explained that sanitation infrastructure, such as providing access to clean drinking water and providing for the safe, reliable removal of wastewater, was an important type of infrastructure for low-income housing. According to IHS, access to adequate sanitation facilities is a vital public health issue for Native Americans. Adequate access to safe drinking water helps to stem the spread of disease, and proper wastewater removal systems help reduce the incidence of bacteria, viruses, and parasites that cause communicable diseases like typhoid and hepatitis A. Government data sources show that there is still an acute need for adequate sanitation infrastructure on Indian lands. The U.S. Census Bureau estimated that in 2008 Native American households were five times as likely as other households to have incomplete plumbing. And, according to a March 2008 draft report, written pursuant to the United Nations Millennium Development Goals and issued by an interagency infrastructure task force, approximately 43,800 housing units occupied by Native Americans—or about 13 percent of Native American homes—had inadequate access to safe drinking water and wastewater disposal systems in 2007. The report noted a slight improvement since the benchmark year, 2003, when there were 44,234 homes with inadequate infrastructure (a decrease of 434 homes over 4 years). However, it concluded that this rate of decrease was not sufficient to meet the U.S. government's goal of reducing the number of Native American homes with inadequate sanitation facilities to half of the 2003 figure by 2015. HUD officials we spoke with told us that they joined this task force at its inception in 2003 and signed a memorandum of understanding with the other members of the task force, including IHS, to facilitate interagency coordination to meet the United Nations goal. The data used in the report were collected by IHS's Sanitation Tracking and Reporting System, a database that tracks reported sanitation deficiencies for most Native American communities. Although IHS is statutorily precluded from funding sanitation construction services for HUD homes, IHS is authorized to collect data on the infrastructure needs of those homes and actively does so, as long as the data are reported by its tribal counterparts. IHS officials we spoke with in headquarters and five of the ten officials we contacted in field offices said that they have found instances in which tribes in their regions have built homes using NAHASDA funding with inadequate planning for sanitation infrastructure.
For example, one official told us that his field office has been contacted by individual tribal members about NAHASDA homes with inadequate sewer lines or inadequate drains. He added that, in general, tribal housing departments may feel pressure from their community to maximize the number of housing units produced and that this pressure may lead to more units being built at the expense of adequate infrastructure for those units. The other five field directors said that they have not seen tribes build NAHASDA homes with inadequate infrastructure, but three of these five acknowledged that NAHASDA homes could be stretching existing infrastructure facilities in certain communities. For example, one director told us that because NAHASDA homes are often built within existing housing developments, the communities' underlying sanitation infrastructure may become increasingly burdened as tribes add homes to those communities. HUD officials we spoke with said that they also do not have the data necessary to measure the extent to which HUD-funded homes need updated infrastructure investment. However, according to IHS officials, HUD can access IHS's sanitation deficiency database pursuant to a 2007 memorandum of understanding that specifically authorizes data sharing among IHS, HUD, and other agencies. Native American tribes generally have a positive view of NAHASDA, and most see it as an improvement over the housing programs previously available to them, in large part because of NAHASDA's emphasis on self-determination for the tribes. However, small grantees, which receive lesser grants, reported facing challenges in building new housing and in leveraging their grant funds to secure additional funding for affordable housing activities. Reporting on NAHASDA accomplishments is currently limited primarily to building, acquiring, and rehabilitating housing units, even though many tribes use NAHASDA funds for other eligible purposes. Because of this limitation in reporting, HUD has not collected a full set of data on NAHASDA. As a result, Congress has not had a complete picture of the program's accomplishments. However, the revisions HUD plans to make to the Indian Housing Plan (IHP) and Annual Performance Report (APR) should address some reporting limitations, which should help efforts to assess the impact of NAHASDA on low-income Native Americans. NAHASDA also has helped some tribes with infrastructure development, but infrastructure continues to be a pressing need for many tribes, particularly in the area of sanitation. HUD does not currently collect grantees' assessments of their housing-related infrastructure needs or data on how they use grant funds to address those needs, and planned revisions to the IHP and APR do not address reporting on infrastructure. As a result, additional opportunities exist for HUD to collect such information, which would allow it to track grantees' efforts to address a key need in their communities and would broaden the scope of accomplishment data that HUD can report to Congress. Furthermore, comprehensive data on tribes' infrastructure needs as they pertain to sanitation facilities are already collected by IHS and are available to HUD under an interagency memorandum of understanding. If HUD were to obtain these data and share them with grantees, the data could help tribes identify any unmet sanitation needs that they might include in their reporting and address with their NAHASDA grants.
To better assess the extent to which NAHASDA is meeting its objectives of providing safe and healthy homes and coordinating infrastructure with housing development for low-income Native Americans, we recommend that HUD's Office of Native American Programs ensure that its revised Indian Housing Plan and Annual Performance Report: capture data on tribes' infrastructure-related needs; capture tribes' plans for addressing their identified infrastructure needs; measure the extent to which NAHASDA grantees are using IHBG funds and Title VI loan guarantees for housing-related infrastructure development; and assess the effectiveness of infrastructure development in meeting the needs of low-income Native Americans, such as by measuring the number of low-income Native Americans who have better access to drinking water or a safe heat source. To help grantees identify their existing sanitation infrastructure needs, we recommend that HUD provide them with sanitation deficiency data obtained from IHS on homes in the grantees' service areas—particularly for those homes that are statutorily precluded from receiving IHS-funded sanitation construction services. We provided a draft of this report to HUD for review and comment. HUD's Deputy Assistant Secretary for Native American Programs provided written comments that are discussed below and presented in Appendix IV. HUD stated that our report is generally positive and would be a very useful document. HUD also requested that we change the report title to reflect only the generally positive view of the Indian Housing Block Grant (IHBG) program under NAHASDA. However, we retained the title because we thought it necessary to include language on the issue for which the report makes recommendations to HUD. HUD also agreed with our conclusions and recommendations, noting that while there have been improvements, our conclusion that there is still a significant need for adequate infrastructure to support Indian housing is accurate. Additionally, HUD stated, and we agree, that HUD and the Indian Health Service (IHS) should continue to work together to address these problems. As indicated in our conclusions and recommendations, we believe that HUD's inclusion of infrastructure needs and investments in program reporting and its use of IHS data on sanitation deficiencies in Indian country should benefit Native American tribes participating in the IHBG program and capture additional program results for HUD and Congress. HUD noted that allowing tribes to use their IHBG funds to leverage IHS resources would improve their ability to address infrastructure deficiencies. In our report, we highlight that leveraging IHBG funds with funds from other sources has benefited tribes, and that many tribes view leveraging as a practical approach to adequately funding their communities' affordable housing needs. We are sending copies of this report to the Secretary of Housing and Urban Development and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in Appendix V.
Our objectives were to evaluate (1) how Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) program funds have allowed Native American tribes to address their affordable housing needs; (2) how, if at all, NAHASDA has improved the process of providing Native American tribes with access to federal funds to meet their affordable housing needs; and (3) the extent to which NAHASDA funding has contributed to infrastructure improvements in Native American communities. To address all three objectives, we reviewed NAHASDA's legislative history and the Department of Housing and Urban Development's (HUD) policies and procedures for administering the program. We interviewed officials in HUD's Office of Native American Programs (ONAP) headquarters and in all six regional ONAP offices. We reviewed previous congressional reports and testimonies, our previous reports, a 2007 Office of Management and Budget Program Assessment Rating Tool report on the Indian Housing Block Grant (IHBG) program, and a 2009 independent study of the IHBG program contracted by HUD. To obtain information on grantees' experiences with the program, we developed a sample of 12 grantees—2 in each ONAP region—of various sizes. We reviewed HUD's annual IHBG allocation reports for fiscal years 1998 through 2008, which include data on each tribe's American Indian and Alaska Native population, reported enrollment, criteria that HUD uses to determine the IHBG allocation, and the grant amount. Though grantees may be individual tribes or their tribally designated housing entities (TDHE), HUD makes an allocation to each tribe. To help us evaluate the program's effectiveness in meeting the affordable housing needs of tribes of various sizes, we further analyzed the fiscal year 2008 allocation report by ONAP region to determine which grantees in each region could be identified as large or small based on population and enrollment—or as receiving a large or small grant—when compared with other grantees in the same region. In our analysis, we found that what constituted a small or large grantee, based on population and grant size, varied among the six ONAP regions. Because tribal populations vary across regions and grant size is largely based on population factors, small and large designations were relative to the other grantees in each region. For the purposes of our review, we generally considered annual grants of less than $250,000 to be lesser grants and the grantees receiving those grants to be small grantees. We chose to interview a range of grantees and, for each region, we interviewed two grantees whose population and grant size varied widely (see table 10). In making a final sample selection, we solicited input from the relevant oversight ONAP office on grantee participation, performance, and accessibility. We interviewed grantees either onsite or by telephone. To obtain a wider range of grantee perspectives, we also administered a Web-based survey to all grantees that received funding in fiscal year 2008, obtaining a 66 percent response rate. Additionally, we met with officials from groups representing Native American housing interests: the National American Indian Housing Council (NAIHC), a nonprofit organization that represents the interests of American Indians, Alaska Natives, and Native Hawaiians in providing affordable housing; and Cherokee Freedmen representatives, who advocate for housing and other benefits for the Cherokee Freedmen.
Appendix III contains a brief history of the Cherokee Freedmen and information pertaining to the Cherokee Nation's provision of housing assistance to Cherokee Freedmen members. Finally, we interviewed officials at the Indian Health Service (IHS), Bureau of Indian Affairs (BIA), and U.S. Department of Agriculture (USDA) Rural Development because those agencies also provide assistance to Native American communities. We initially contacted 351 tribes or TDHEs that received 2008 NAHASDA grants. We sent initial notifications and the survey by e-mail to most grantees. A small number of grantees did not have e-mail accounts, and we contacted them by telephone and sent them the survey by fax. To encourage responses, we followed up with four e-mails that included a link to the survey. Additionally, to try to increase the response rate, we telephoned those grantees that had not responded to the e-mailed survey. We also contacted some respondents by telephone to clarify unclear responses. We received responses from 232 grantees, a 66 percent response rate. Grantees responding to the survey represented 413 of the 535 tribes (77 percent) nationwide that benefited from NAHASDA grants in 2008. To pretest the questionnaire, we conducted cognitive interviews and held debriefing sessions with five NAHASDA grantees; two pretests were conducted in person and three were conducted by telephone. Pretest participants were selected to reflect a variety of grantee sizes (as measured by the dollar amount of the grants); grantee types (tribe, single-tribe TDHE, or umbrella TDHE); and geographic locations. We conducted these pretests to determine if the questions were burdensome or difficult to understand and if they measured what we intended. In addition, we met individually with officials from ONAP and NAIHC to obtain their comments on our questionnaire. On the basis of the feedback from the pretests and these other knowledgeable entities, we modified the questions as appropriate. Content coding of responses. We provided respondents with an opportunity to answer several open-ended questions. The responses to those questions were classified and coded for content by a GAO analyst, and a second analyst verified that the first analyst had coded the responses appropriately. Some comments were coded into more than one category since some respondents commented on more than one topic. As a result, the number of coded items is not equal to the number of respondents who provided comments. These comments cannot be generalized to our population of NAHASDA grantees. Nonsampling errors. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps at both the data collection and data analysis stages to minimize such nonsampling errors. These steps included (1) having survey specialists help develop the questionnaire, (2) pretesting the questionnaire with NAHASDA grantees, (3) using multiple reminders to encourage survey response, and (4) contacting respondents to follow up on obvious inconsistencies, errors, and incomplete answers. Nonresponse analysis.
Because only 66 percent of the study population responded, the results may be subject to nonresponse bias. If nonrespondents would have answered some survey questions differently than respondents did, then results based solely on respondents would be biased because they exclude parts of the population with different characteristics or views. To limit this kind of error, we made multiple attempts to obtain the participation of as many NAHASDA grantees as possible. We performed an additional analysis to determine whether our survey respondents had characteristics that were significantly different from those of all grantees in the study population. To do this, we identified two grantee characteristics that were available for the entire study population—grantee location (region) and grant size. For these comparisons, we found little difference between the distribution of responses from the respondents and the actual population values. However, when we compared respondents with nonrespondents by grant size, we found that survey respondents typically received much larger grants: the median grant was $717,686 for respondents and $451,222 for nonrespondents. This suggests that grantees receiving large grant amounts were more likely to participate in our survey than others. For example, more than two-thirds of grantees with grants larger than $250,000 participated in the survey, while only about half of the smaller grantees participated. Respondents to the survey received about 80 percent of the grant amounts distributed in 2008. We performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, an independent analyst verified that the computer programs used to analyze the data were written correctly. To evaluate how NAHASDA program funds have allowed Native American tribes to address their affordable housing needs, we reviewed NAHASDA legislation to identify housing activities that are eligible under the program. We reviewed HUD performance data on the NAHASDA program, including its fiscal year 2008 NAHASDA Report to Congress and cumulative data on affordable housing units grantees built, acquired, and rehabilitated using IHBG funds during fiscal years 2003 through 2008. We assessed the reliability of these data by (1) performing electronic testing of the data elements, (2) reviewing existing information about the data and the systems that produced them, and (3) interviewing agency officials knowledgeable about these data. We determined that these data were sufficiently reliable for our reporting purposes. In addition to analyzing the 2008 IHBG allocation report, we reviewed data on pre-NAHASDA units that HUD factored into making 2008 grant allocations. We also reviewed the Indian Housing Plan (IHP) and Annual Performance Report (APR) forms that grantees use to report on planned and actual housing activities each year. Further, we requested and analyzed a sample of completed IHPs for 2005 through 2008 that grantees submitted in each ONAP region to determine the types of housing activities grantees were pursuing in recent years. We used the IHP information in part to develop our sample for grantee site visits and telephone interviews.
In conducting those site visits and telephone interviews and in our Web-based survey, we asked grantees questions related to how they use IHBG funds to address their affordable housing needs, other sources of funding they use in combination with the IHBG program (leveraging), some of the challenges they experience with the IHBG program or with leveraging, and program reporting. Similarly, in our interviews with regional ONAP officials, we asked questions about grantees' housing activities, leveraging, and program reporting. With respect to leveraging, we asked whether ONAP provides resources to assist grantees with identifying and accessing other potential funding opportunities to supplement their IHBG funds. Finally, we interviewed officials at IHS, BIA, and USDA about collaborating with HUD to provide housing and related services to Native American communities. To evaluate how, if at all, NAHASDA has improved the process of providing Native American tribes with access to federal funds to meet their affordable housing needs, we obtained perspectives from ONAP, grantees, and NAIHC on NAHASDA as it compares to the housing programs for which Native Americans were eligible under the U.S. Housing Act of 1937 (1937 Act). In our interviews with ONAP officials, grantees, and NAIHC representatives, we discussed various aspects of NAHASDA in comparison with the 1937 Act programs, including program structure and funding. In our Web-based survey, we also asked grantees to report on specific challenges they have experienced with the IHBG program, and we asked those with pre-NAHASDA experience to rate NAHASDA in comparison with pre-NAHASDA programs. To evaluate the extent to which NAHASDA funding has contributed to infrastructure improvements in Native American communities, we reviewed the NAHASDA legislation to identify the statutory goals of the program. We then analyzed HUD's IHP and APR forms to assess the extent to which HUD tracked grantees' infrastructure needs and measured their investments in housing-related infrastructure development. In conducting our grantee site visits and telephone interviews and in our Web-based survey, we discussed grantees' use of NAHASDA funds, including IHBG funds specifically, to meet their infrastructure needs. In our survey, in particular, we asked grantees to describe their housing-related infrastructure needs, the extent to which they use IHBG funds to meet those needs, and any challenges they face. In our meetings with each of the six regional ONAP offices and with officials at ONAP headquarters, we asked similar questions. Finally, we asked IHS officials for their assessment of the sanitation infrastructure needs on Indian lands, and we reviewed IHS's sanitation deficiency data collection process and methodology. We conducted this performance audit in Alaska; Arizona; California; Colorado; Illinois; Michigan; Montana; Oklahoma; Utah; Washington; Washington, D.C.; and Wisconsin from January 2009 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
To identify potential steps that small grantees can take to maximize the impact of their Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) grant, we asked all survey respondents to suggest best practices or effective strategies for grantees that receive lesser grants, which we defined as less than $250,000 per year (see table 11). Of the 168 distinct recommendations provided, the most frequently mentioned by far was using the NAHASDA grant in combination with other funding sources—also known as leveraging—to maximize the grant's impact. Other recommendations focused on enhancing the administrative capacity of tribal housing departments. For example, some survey respondents wrote that tribes should hire staff with experience in grant writing, and others recommended that small grantees pool their resources, for example by joining an umbrella tribally designated housing entity, to minimize administrative costs. As part of our outreach to tribal entities served by the Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) program, we obtained information pertaining to the provision of federal housing assistance to Cherokee Freedmen. In a suit filed in federal district court, the Cherokee Freedmen claim to be "direct descendants of former slaves of the Cherokees, or free blacks who intermarried with Cherokees, who were made citizens of the Cherokee Nation in the Nineteenth Century." The Cherokee Nation, a federally recognized Indian tribe headquartered in Oklahoma, has attempted to expel this particular group of its citizens over the past several years through both tribal legislation and constitutional amendment. The Cherokee Freedmen's claim for tribal membership is based on an 1866 treaty between the Cherokee Nation and the federal government, under which two groups of people—former slaves of the Cherokee Nation and "all free colored persons"—who either resided in Cherokee territory when the Civil War began or returned within six months, as well as the descendants of such persons, were guaranteed "all rights of native Cherokees." In response to the efforts to deny them citizenship rights, the Cherokee Freedmen have turned to litigation in both federal and tribal court, claiming that the action of the Cherokee Nation conflicts with the treaty and violates equal protection. The Cherokee Nation's arguments are grounded in the premise that the authority of Indian tribes to define membership is inherent. Any attempt by the Cherokee Nation to act on its measures to disenroll the Cherokee Freedmen has been enjoined in tribal court while federal litigation proceeds. Figure 17 provides an overview of key events in the Cherokee Freedmen's history since 1866. We relied on statements from the Cherokee Nation's housing department, representatives of the Cherokee Freedmen, and Department of Housing and Urban Development (HUD) officials for the following information: According to the Cherokee Nation's housing department, the department is "color blind" and makes no distinction between Cherokee Freedmen and other enrolled members in providing housing benefits. Cherokee Freedmen representatives told us that many Cherokee Freedmen members' enrollment applications have not been processed and many enrolled members have been unable to obtain housing and other benefits.
HUD officials explained that because they had not received any complaints from Cherokee Freedmen regarding housing benefits as of February 1, 2010, HUD was not actively monitoring the Cherokee Nation's compliance with the injunction in its provision of housing benefits to members. The officials said that they would pursue any such complaints through HUD's program monitoring and enforcement procedures. In addition to the individual named above, Andy Finkel, Assistant Director; Bernice Benta; Juliann Gorse; John McGrail; Marc Molino; Luann Moy; Paul Revesz; and Jennifer Schwartz made key contributions to this report.
The Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) changed how the Department of Housing and Urban Development (HUD) provides housing assistance to Native Americans. Congress created NAHASDA to recognize self-determination for tribes in addressing their low-income housing needs. In NAHASDA's 2008 reauthorization, Congress asked GAO to assess the program's effectiveness. This report discusses (1) how tribes have used NAHASDA funds, (2) how NAHASDA has improved the process of providing tribes with funds for housing, and (3) the extent to which NAHASDA has contributed to infrastructure improvements in tribal communities. GAO analyzed agency documentation, surveyed all tribes receiving grants in fiscal year 2008, conducted site visits with select tribes, and interviewed officials at HUD and other agencies. Native American tribes have used NAHASDA block grant funds to develop new housing and to provide other types of housing assistance to eligible members, but fewer small grantees have developed new housing. Of the 359 grantees in fiscal year 2008, 102 received less than $250,000, with 22 of those reporting that they had developed new housing over the life of their participation in the program. Smaller grantees often provide tenant-based rental assistance and other such services to members, but HUD neither tracks activities that are not unit-based (units built, acquired, or rehabilitated) nor reports those activities to Congress. However, HUD is revising its reporting to track more activities, which should help efforts to assess the impact of NAHASDA. Most grantees that we surveyed and interviewed view NAHASDA as effective, largely because it emphasizes tribal self-determination. Grantees believe the program has helped to improve housing conditions and increase access to affordable housing, but they reported that developing housing finance mechanisms and increasing economic development remain challenges. Housing-related infrastructure development is an affordable housing activity under NAHASDA, but HUD does not collect grantees' infrastructure plans or measure their infrastructure investments. Indian Health Service (IHS) data show an acute need for sanitation-related infrastructure in Indian housing, and 85 percent of grantees responding to our survey reported that developing infrastructure, such as providing homes with access to drinking water, was a continuing need. According to IHS officials, HUD can access IHS data on sanitation deficiencies under a 2007 memorandum of understanding between the agencies. HUD could use these data to track grantees' efforts to address a key need in their communities and broaden the scope of accomplishment data it reports to Congress. These data could also help grantees identify any unmet sanitation needs they might address with their NAHASDA grants.
In conducting the 2010 Census, the Bureau encountered two sets of challenges: internal management challenges that affected the Bureau's overall readiness and led us to designate the 2010 Census as a high-risk area, and external sociodemographic challenges, such as more non-English speakers and more people residing in makeshift and other nontraditional living arrangements. As shown in figure 1, the cost of enumerating each housing unit has escalated from around $16 in 1970 to around $98 in 2010, in constant 2010 dollars (an increase of over 500 percent). At the same time, the mail response rate—a key indicator of a cost-effective census—has declined from 78 percent in 1970 to 63 percent in 2010. In many ways, the Bureau has been investing substantially more resources each decade just to try to match the results of prior enumerations. Beginning in 1990, we reported that rising costs, difficulties in securing public participation, and other long-standing challenges required a revised census methodology—a view that was shared by other stakeholders. Achieving acceptable results using these conventional methods has required an increasingly large investment of fiscal resources, which in the coming years will likely become scarcer. Indeed, the 2010 Census required an unprecedented commitment of resources: the Bureau recruited more than 3.8 million total applicants—roughly equivalent to the entire population of Oklahoma—for its temporary workforce, and the cost rose from an initial estimate of $11.3 billion in 2001 to around $13 billion. According to the Bureau, several factors were largely behind the escalating costs of the 2010 Census, including (1) a flawed acquisition strategy, (2) the need to hire a large number of field staff to enumerate people who did not mail back their census forms, and (3) substantial investments in updating the Bureau's address list just prior to the start of the enumeration. The results of prior enumerations underscore the fact that simply refining current methods—some of which have been in place for decades—will not bring about the reforms needed to control costs while maintaining accuracy given ongoing and newly emerging societal trends. Since 1970, the Bureau has used a similar approach to count the vast majority of the population. For example, the Bureau develops an address list of the nation's housing units and mails census forms to each one for occupants to complete and send back. Over time, because of demographic and attitudinal trends, securing an acceptable response rate has become an increasing challenge. Our concerns about the rising cost and diminishing returns of the census are not new. In the mid-1990s, for example, we and others concluded that the established approach for taking the census had exhausted its potential for counting the population cost-effectively and that fundamental design changes were needed to reduce census costs and improve the quality of data collected. A fundamental reexamination of the nation's approach to the census will require the Bureau to rethink its approach to planning, testing, implementing, monitoring, and evaluating the census, and to address such questions as: Why was a certain program initiated? What was the intended goal? Have significant changes occurred that affect its purpose? Does it use prevailing leading practices? Our December 2010 report noted potential focus areas for such a reexamination.
These include better leveraging innovations in technology and social media to more fully engage census stakeholders and the general public on census issues, and reaching agreement on a set of criteria that could be used to weigh the trade-offs between the need for high levels of accuracy on the one hand and the increasing cost of achieving that accuracy on the other. One of the innovations the Bureau would like to pursue for the 2020 Census is an Internet response option. The Bureau provided the opportunity for respondents to complete the 2000 Census short forms on the Internet—protected by a 22-digit identification number. According to Bureau officials, for the 2000 Census, about 60,000 short forms were completed via the Internet. The Bureau originally planned to include the Internet in the 2010 Census, but then decided not to, because the benefits gained through processing less paper, as well as improvements to the quality of data, were outweighed by the cost of developing the Internet response option and the risks associated with the security of census data. To examine its use for the 2020 decennial census, the Bureau will need to review many of those same issues and address the following questions: To what extent could an Internet response option lower data collection costs for the Bureau? To what extent could an Internet response option increase the quality of data collected? To what extent does the use of an Internet response option pose a risk to the confidentiality of census data? Moreover, given that the research, development, and testing efforts for 2020 will play out over the decade-long census life-cycle, what is the optimal way to ensure continuity and accountability for an enterprise that takes years to complete and extends beyond the tenure of many elected political leaders? The Director of the Census Bureau can, in concept, provide a measure of continuity, but the 11 census directors who have served since July 1969 (not including the current director) had an average tenure of around 3 years, and only 1 served more than 5 years. Further, in the decade leading up to the 2010 Census, the Bureau was led by 4 different directors and several acting directors. The turnover in the Bureau's chief executive officer position makes it difficult to develop and sustain efforts that foster change, produce results, mitigate risks, and control costs over the long term. The heads of a number of executive agencies serve fixed terms, based on Presidential nomination and Senate confirmation, including the Director of the Office of Personnel Management (4 years), the Commissioner of Labor Statistics (4 years), and the Commissioner of Internal Revenue (5 years). We believe that the continuity resulting from a fixed-term appointment could provide the following benefits to the Bureau: Strategic vision. The director needs to build a long-term vision for the Bureau that extends beyond the current decennial census. Strategic planning, human-capital succession planning, and life-cycle cost estimates for the Bureau all span the decade. Sustaining stakeholder relationships. The director needs to continually expand and develop working relationships and partnerships with governmental, political, and other professional officials in both the public and private sectors to obtain their input, support, and participation in the Bureau's activities. Accountability.
The life-cycle cost for a decennial census spans a decade, and decisions made early in the decade about the next decennial census guide the research, investments, and tests carried out throughout the entire 10-year period. Institutionalizing accountability over an extended period may help long-term decennial initiatives provide meaningful and sustainable results. As noted earlier, a key indicator of a cost-effective census is the mail response rate, which is the percentage of all housing units in the mail-back universe—including those that are later found to be nonexistent or unoccupied—that return their census forms. High response rates are essential because they save taxpayer dollars and contribute to a more accurate enumeration. According to the Bureau, for every percentage point increase in mail response in 2010, the Bureau saved $85 million that would otherwise have been spent on in-person follow-up efforts. Also, according to the Bureau, it costs 42 cents to mail back each census form in a postage-paid envelope, compared with an average estimate of $57 for field activities necessary to enumerate each home in person. Moreover, mail returns tend to have better quality data, in part because as time goes on after Census Day (April 1), people move or may have difficulty recalling who was residing with them. For the 2010 Census, the Bureau expected a response rate of 59 percent to 65 percent. The actual mail response rate on April 19, when the Bureau initially determined the universe of housing units to visit for nonresponse follow-up (NRFU), was just over 63 percent, well within the Bureau's range of estimates. Achieving this response rate was an important accomplishment given the nation's increasing diversity. As illustrated in figure 2, the Bureau met its expected response rate in all but 11 states. The highest response rate (71.7 percent) was in Minnesota, while the lowest response rate (51 percent) was in Alaska. At the same time, response rates in every state except Hawaii and South Carolina, and in the District of Columbia, declined anywhere from 0.8 to 8.2 percentage points when compared to 2000, thus underscoring the difficulty the Bureau will face in the future in trying to sustain response rates. Key factors aimed at improving the mail response rate included the mailing of an advance letter and a reminder postcard, and an aggressive marketing and outreach program. In addition, this was the first census in which the Bureau sent a second, or "replacement," questionnaire to households. Replacement questionnaires were sent to around 25 million households in census tracts that had the lowest response rates in the 2000 Census, and 10 million replacement questionnaires were sent to nonresponding households in other census tracts that had low-to-moderate response rates in 2000. To determine if these and other census-taking activities were effective, the Bureau plans to complete over 70 studies covering such topics as marketing and publicity, field operations, privacy and confidentiality, and language barriers. Moreover, in July 2010, the Bureau developed a database for cataloging all recommendations from these 2010 studies, as well as recommendations from our office, the Department of Commerce Inspector General's Office, and the National Academy of Sciences, among others. According to a Bureau official, this database will allow the Bureau to link 2010 recommendations to 2020 research and testing, in an attempt to ensure that all recommendations coming out of 2010 are incorporated into 2020 research.
These studies of the 2010 Census are extremely important for informing decisions on the design of the 2020 Census. However, some will not be completed by fiscal year 2012, when the Bureau plans to start research and testing for the 2020 Census. Bureau officials said they will give priority to studies that align with the 2020 Census strategic plan. In moving forward, it will be important for the Bureau to complete the 2010 Census studies and stay on track to ensure that study results, where appropriate, are incorporated into 2020 research. Until all studies from the 2010 Census are finished, the Bureau will not have a complete picture of what worked well or know what improvements are needed for 2020. Moreover, in several of the programs we reviewed, assessments were not always focused on the value added by a particular operation, such as the extent to which it reduced costs, enhanced data quality, or both. This information would be useful for improving operations, identifying possible duplicative efforts, and identifying potential cost savings for 2020. As one illustration, a complete and accurate address list and precise maps are the fundamental building blocks of a successful census. If the Bureau's address list, known as the Master Address File (MAF), and its maps are inaccurate, people can be missed, counted more than once, or included in the wrong location. To build an accurate address list and maps, the Bureau conducted a number of operations throughout the decade, some of which were extremely labor-intensive. For example, the Bureau partnered with the U.S. Postal Service and other federal agencies; state, local, and tribal governments; local planning organizations; the private sector; and nongovernmental entities. Moreover, the Bureau employed thousands of temporary census workers to walk every street in the country to locate and verify places where people could live, in an operation called address canvassing. Three additional activities were aimed at properly identifying and locating dormitories, nursing homes, prisons, and other group living arrangements known as "group quarters." In a 2009 testimony, we noted that with the cost of counting each housing unit growing at a staggering rate, it is important for the Bureau to determine which of its multiple MAF-building operations provide the best return on investment in terms of contributing to accuracy and coverage. A number of operations might be needed to help locate people residing in different types of living arrangements, as well as to ensure housing units missed in one operation are included in a subsequent operation. However, the extent to which each individual operation contributes to the overall accuracy of the MAF is uncertain. This in turn makes it difficult for the Bureau to fully assess the extent to which potential reforms, such as reducing or consolidating the number of address-building operations, might affect the quality of the address list. As one example, while the Bureau plans to study options for targeted address canvassing as an alternative to canvassing every street in the country, the Bureau's evaluation plan does not specify whether the Bureau will look across MAF-building activities and compare how each individual operation contributes to the overall accuracy and completeness of the address list and at what cost.
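To illustrate the kind of cross-operation comparison described above, the sketch below computes a simple cost-per-added-address measure for a set of MAF-building operations. This is a minimal illustration only: the operation names and figures are hypothetical placeholders, not Bureau data, and an actual analysis would also need to account for overlap among operations and for addresses later found to be invalid.

    # Hypothetical sketch: comparing MAF-building operations by the cost of
    # each valid address they add. All figures below are invented for
    # illustration and are not Census Bureau data.
    operations = [
        # (operation, total cost in dollars, valid addresses added)
        ("Address canvassing",       440_000_000, 6_000_000),
        ("Local government review",   30_000_000,   900_000),
        ("Postal Service updates",    10_000_000,   700_000),
    ]

    for name, cost, added in operations:
        print(f"{name}: ${cost / added:,.2f} per valid address added")

Ranking operations in this way would give the Bureau a first-order view of which address-building activities contribute the most coverage per dollar, the kind of return-on-investment evidence that would be needed to weigh reforms such as targeted address canvassing.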
Leveraging such data as local response rates and census sociodemographic information, as well as other data sources and empirical evidence, might also help control costs and improve accuracy by providing information on ways the Bureau could more efficiently allocate its resources. For example, some neighborhoods might require a greater level of effort to achieve acceptable results, while in other areas, those same results might be accomplished with fewer resources. To the extent the Bureau targeted various activities during the 2010 Census, initial indications suggest that those efforts went well. For example, the Bureau developed job aids to address location-specific training challenges. In one instance, partly in response to our recommendations and to help ensure an accurate address list in areas affected by Hurricanes Katrina, Rita, and Ike, the Bureau developed supplemental training materials for natural disaster areas to help census workers identify less conventional places where people might be living, such as homes marked for demolition, converted buses and recreational vehicles, and nonresidential space such as storage areas above restaurants. As another example, the Bureau budgeted around $297 million for paid media to raise awareness and encourage public participation in the census, about $57 million (24 percent) more than in 2000 in constant 2010 dollars. To determine where paid media efforts might have the greatest impact, the Bureau developed predictive models based on 2000 Census data and the evaluations of other efforts used for 2000. By better targeting paid media buys by area and message, the Bureau expected to more effectively reach those who have historically been the hardest to count. However, according to the Bureau, this modeling could have been more robust had the data from 2000 done a better job of isolating the impact of paid media from other components of the Bureau's outreach efforts, among other factors. Simply put, the Bureau made important progress in using data to determine where to spend its resources. It will be important for the Bureau to expand on those efforts in 2020, as well as to develop information on the return on investment of key census operations. A key priority for the Bureau will be to fully address those areas that led us to designate the 2010 Census a high-risk program. The problems the Bureau encountered in managing its IT systems, developing reliable life-cycle cost estimates, and testing key operations under census-like conditions were cross-cutting in that they affected a number of different activities, and thus threatened the Bureau's readiness for the census. The Bureau has taken steps to address these vulnerabilities. In the years ahead, it will be important for the Bureau to continue the progress it has made to date and ensure that any changes are fully integrated into its basic business practices. IT is critical to a successful census because it helps support the Bureau's data collection, analysis, and dissemination activities. However, the Bureau has had long-standing difficulties with the development and acquisition of automated systems. For example, during the 2000 Census, the Bureau had to grapple with untimely and inaccurate management information, a lack of mature and effective software and systems development processes, inadequate testing of key systems, inadequate security controls, and an insufficient number of experienced staff to manage expensive and complex system projects.
Both we and the Department of Commerce Inspector General made a series of recommendations to address these issues, and the Bureau took steps to implement them. Still, problems reemerged during the run-up to the 2010 Census. For example, while the Bureau planned to use automation and technology to improve the coverage, accuracy, and efficiency of the 2010 Census, in June 2005 we noted that the Bureau had not fully implemented key practices important to managing IT, including investment management, system development and management, and enterprise architecture management. As a result, we concluded that the Bureau's IT investments were at increased risk of mismanagement and were more likely to experience cost and schedule overruns and performance shortfalls. As development of the IT systems progressed, these risks were realized. For example, the Field Data Collection Automation program, which included the development of handheld computers to collect information for address canvassing and NRFU, experienced substantial schedule delays and cost increases. As a result, the Bureau later decided to abandon the planned use of handheld data-collection devices for NRFU and reverted to paper questionnaires. According to the Bureau, this change added between $2.2 billion and $3 billion to the total cost of the census. The Bureau developed a new automated system to manage the paper-based approach, but the system experienced outages, slow performance, and problems generating and maintaining timely progress reports. Workarounds ultimately enabled the Bureau to successfully implement NRFU. However, the Bureau was still limited in its ability to effectively monitor productivity or implement quality-assurance procedures as documented in its operational plans. Therefore, as the Bureau prepares for 2020, among other actions, it will be important for the Bureau to continue improving its ability to manage its IT investments. Leading up to the 2010 Census, we made numerous recommendations to the Bureau to improve its IT management procedures by implementing best practices in risk management, requirements development, and testing. The Bureau implemented many of our recommendations, but not our broader recommendation to institutionalize these practices at the organizational level. The challenges experienced by the Bureau in acquiring and developing IT systems during the 2010 Census further demonstrate the importance of establishing and enforcing a rigorous IT acquisition management policy Bureau-wide. In addition, it will be important for the Bureau to improve its ability to consistently perform key IT management practices, such as IT investment management, system development and management, and enterprise architecture management. The effective use of these practices can better ensure that future IT investments will be pursued in a way that optimizes mission performance. Accurate cost estimates are essential for a successful census because they help ensure that the Bureau has adequate funds and that Congress, the Administration, and the Bureau itself can have reliable information on which to base decisions. However, we noted in our 2008 report that the Bureau's cost estimate for the 2010 Census lacked detailed documentation on data sources and significant assumptions, and was not comprehensive because it did not include all costs.
We noted that the Bureau had insufficient policies and procedures and inadequately trained staff for conducting high-quality cost estimation for the decennial census, and we therefore recommended that the Bureau take a variety of steps to improve the credibility and accuracy of its cost estimates. Moreover, following best practices from our Cost Estimating and Assessment Guide, such as defining necessary resources and tasks, could have helped the Bureau generate more reliable cost estimates. Partly as a result of these issues, some operations had substantial variances between their initial cost estimates and their actual costs. For example, the Bureau initially estimated that NRFU would cost around $2.25 billion. However, by the end of the operation, the Bureau reported using approximately $1.59 billion, which was 29 percent lower than budgeted. At the same time, another operation—address canvassing—cost around $88 million (25 percent) more than its initial budget of $356 million, according to a preliminary Bureau estimate. Moving forward, it will be important for the Bureau to ensure the reliability of the 2020 cost estimate, and the Bureau has already taken several actions in that regard. For example, Bureau officials have stated that, in response to recommendations from our June 2008 report, some of their budget staff have been trained and certified in cost estimation. The Bureau also has started using the Decennial Budget Integration Tool (DBiT). According to the Bureau, once it has completed entering all needed budget data, DBiT will consolidate budget information and enable the Bureau to better document its cost estimates. Further, as a part of its planning for 2020, Bureau officials said that they have developed and provided to the Office of Management and Budget (OMB) for its review a rough order-of-magnitude estimate for the 2020 Census, based on information available at this early stage of 2020 planning. In addition, the Bureau plans to develop a range of full life-cycle cost estimates in fiscal year 2013. As noted in our cost estimating guide, a life-cycle cost estimate can be thought of as a "cradle to grave" approach to managing a program throughout its useful life. Life-cycle costing enhances decision making, especially in early planning and concept formulation. Therefore, as the Bureau develops its estimates for 2020, it will be important for the Bureau to identify all cost elements that pertain to the program from initial concept all the way through operations and support. Providing reliable cost estimates that are developed early in a project's life-cycle and accompanied by sound justification will be important in order for Congress to make informed decisions about the levels at which to fund future decennial censuses. More specifically, greater fiscal transparency, before committing to a final design and a particular level of spending, could help inform deliberations on the extent to which (1) the cost of the census is reasonable, (2) trade-offs will need to be made with competing national priorities, and (3) additional dollars spent on the census yield better results. The census can be seen as a large, complex, yet inherently fragile machine composed of thousands of moving parts, all of which must function in concert with one another in order to secure a cost-effective count. In short, while the census is under way, the tolerance for any breakdowns is quite small.
Given this difficult operating environment, rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. As the Bureau geared up for 2010, we expressed our concern about the testing of key IT systems and other census-taking activities. For example, partly because of the performance problems with the handheld computers noted earlier, the Bureau decided not to include two census operations (NRFU and Vacant/Delete Check) in the full dress rehearsal for the census that was scheduled for 2008. In lieu of a full dress rehearsal, the Bureau tested individual components of the census in isolation. However, without a full dress rehearsal, the Bureau was unable to demonstrate that various enumeration activities could function under near-census-like conditions. Although the Bureau had performed many of these activities in previous censuses, some operations—such as mailing a second questionnaire to households that did not complete their census forms by a certain date, removing late mail returns, and fingerprinting hundreds of thousands of temporary census workers—were new for 2010 and introduced new operational risks. While the actual enumeration generally proceeded according to expectations, some operations, most notably the automated system that the Bureau developed to manage the paper-based NRFU operation noted earlier, were unable to function under operational loads, in part because of a compressed testing schedule. Moving forward, as the Bureau refines and implements its testing plans, our past work on census testing has shown that it will be important for its strategy to include, but not be limited to, these key components of a sound study: clearly stated objectives with accompanying performance measures; research questions linked to test objectives and, as appropriate, a clear rationale for why sites were selected for field tests; a thoroughly documented data collection strategy; input from stakeholders and lessons learned considered in developing test objectives; and a data analysis plan including, as appropriate, methods for determining the extent to which specific activities contribute to controlling costs and enhancing quality. While the Bureau does not plan to conduct its first major census test until April 2014, as part of its research and testing for 2020, the Bureau plans to conduct 26 tests in support of six different design alternatives between fiscal years 2012 and 2014. These design alternatives include, for example, improving the existing 2010 design, using administrative records for nonresponse follow-up, or increasing the number of available response options such as the Internet or cell phones. Key elements of the Bureau's research and testing strategy include: performing many small, focused field tests in lieu of a few large field tests, as was the case for the 2010 Census; setting up a virtual Local Census Office at Census Bureau headquarters to test new census methods; and using the American Community Survey—an ongoing Bureau survey of population and housing characteristics that is administered throughout the decade—as a vehicle to test specific census methods. These tests will be important for determining the feasibility of different design alternatives.
We believe that given the number of tests and design alternatives that the Bureau plans to evaluate, it will be important to have a management structure in place for essential functions such as coordinating the tests; determining priorities; tracking the results; assessing their implications; weighing cost, accuracy, and other trade-offs; and ensuring that findings and recommendations are funneled to appropriate senior Bureau leadership for action.

On the basis of our earlier work on high-performing organizations, fundamental reforms will mean ensuring that the Bureau’s organizational culture and structure, as well as its approach to strategic planning, human capital management, internal collaboration, knowledge sharing, capital decision making, risk and change management, and other internal functions, are aligned toward delivering more cost-effective outcomes. Indeed, some of the operational problems that occurred during the 2010 and prior censuses are symptomatic of deeper organizational issues. For example, the lack of staff skilled in cost estimation during the 2010 Census points to inadequate human capital planning, while, as noted above, IT problems stemmed from not fully and consistently performing certain functions, including IT investment management. Moreover, the Bureau’s own assessment of its organization found that it has a number of strengths, including a culture that is committed to accuracy, precision, objectivity, and the overall mission of the census, as well as a workforce that understands decennial operations, procedures, and critical subject matter. At the same time, the Bureau’s assessment noted several areas for improvement. For example:

- the Bureau is an insular organization and does not always embrace open communications, transparency, innovation, and change;
- there were difficulties in drawing on assets and methods from across the agency;
- the organizational structure makes it difficult to oversee a large program and hampers accountability, succession planning, and personal development, among other factors; and
- staff with core skills and experience were lacking in such areas as management of large programs and projects; cost estimating; and sophisticated technology, systems, and development.

While reforms will be needed along a number of fronts, our recent work on governmentwide strategic human capital management highlights some key steps, some of which the Bureau is already taking, to help ensure the Bureau identifies and closes current and emerging skill gaps so that it has the workforce needed to effectively and efficiently design and execute a successful census. These steps include:

- developing workforce plans that fully support the Bureau’s need for highly skilled talent, including defining the root causes of skills gaps, identifying effective solutions to any shortages, and taking action to implement those solutions;
- ensuring recruitment, hiring, and development strategies are responsive to changing applicant and workforce needs; and
- evaluating the performance of initiatives to address critical skill gaps and making appropriate adjustments.

The Bureau, recognizing that it cannot afford to continue operating the way it does unless it fundamentally changes its method of doing business, has already taken some important first steps in addressing these questions, as well as other areas.
For example, the Bureau is looking to reform certain aspects of its IT systems planning, in part to ensure that the technical infrastructure needed for 2020 will be tested many times before operations begin. The Bureau also is rebuilding its research directorate to lead early planning efforts, and has plans to assess and monitor the skills and competencies needed for the 2020 headcount. Further, the Bureau already has developed a strategic plan for 2020 and other related documents that, among other things, lay out the structure of planning efforts; outline the mission and goals for 2020; and describe the research and testing phase of the next enumeration. For example, to address major cost drivers such as field infrastructure, labor, and IT systems, as well as the quality of data collected, the Bureau has identified the following four research tracks:

- Expanded, Automated, and Tailored Response, which attempts to reduce paper, make it easier for the population to be counted, and tailor response options, such as the Internet.
- Reengineered Field Structure, including a Bureau-wide integrated IT infrastructure that, for example, will allow for a real-time, Web-based system to manage data collection in the field.
- Continual Address Frame Updating and Targeting, which, for example, expands the sources of data in the Master Address File to include commercial databases and administrative records, so that a full address canvassing may not be required at the end of the decade.
- Using Administrative Records for Nonresponse, which includes a major study to determine to what extent administrative records can be used for nonrespondents.

The Bureau’s early planning efforts are noteworthy given the Bureau’s long-standing challenges in this area. For example, in 1988, just prior to the 1990 Census, we noted that the Bureau’s past planning efforts generally started late, experienced delays, were incomplete, and failed to fully explore innovative approaches. Planning for the 2000 Census also had its shortcomings, including, as we noted in our 2004 report, a persistent lack of priority-setting, coupled with minimal research, testing, and evaluation documentation to promote informed and timely decision making. And, while the planning process for the 2010 Census was initially more rigorous than for past decennials, in 2004 we reported that the Bureau’s efforts lacked a substantial amount of supporting analysis, budgetary transparency, and other information, making it difficult for us, Congress, and other stakeholders to properly assess the feasibility of the Bureau’s design and the extent to which it could lead to greater cost-effectiveness compared to alternative approaches. As a result, in 2004, we recommended that the Bureau develop an operational plan for 2010 that consolidated budget, methodological, and other relevant information into a single, comprehensive document. The Bureau later developed specific performance targets and an integrated project schedule for 2010. However, the other elements we recommended were only issued piecemeal, if available at all, and were never provided in a single, comprehensive document.
Because this information was critical for facilitating a thorough, independent review of the Bureau’s plans, as well as for demonstrating to Congress and other stakeholders that the Bureau could effectively design and manage operations and control costs, we believe that had it been available, it could have helped stave off, or at least reduce, the IT and other risks that confronted the Bureau as Census Day drew closer.

The Bureau’s strategic plan for 2020, first issued in 2009, is a “living” document that will be updated as planning efforts progress. As the approach for 2020 takes shape, it will be important for the Bureau to avoid some of the problems it had in documenting the planning process for the 2010 Census, and pull all the planning elements together into a tactical plan or road map. This will help ensure the Bureau’s reform initiatives stay on track, do not lose momentum, and coalesce into a viable path toward a more cost-effective 2020 Census. On the basis of our work on planning for the 2010 Census, a road map for 2020 could include, but not be limited to, the following elements, which could be updated on a regular basis:

- specific, measurable performance goals, how the Bureau’s efforts, procedures, and projects would contribute to those goals, and what performance measures would be used;
- descriptions of how the Bureau’s approaches to human capital management, organizational structure, IT acquisitions, and other internal functions are aligned with the performance goals;
- an assessment of the risks associated with each significant decennial operation, including the interrelationships between the operations and a description of relevant mitigation plans;
- detailed milestone estimates for each significant decennial operation, including estimated testing dates, and justification for any changes to milestone estimates;
- detailed life-cycle cost estimates of the decennial census that are credible, comprehensive, accurate, and well documented, as stipulated by OMB and GAO guidance; and
- a detailed description of all significant contracts the Bureau plans to enter into and a risk management plan for those contracts.

A comprehensive road map could generate several important benefits. For example, it could help ensure a measure of transparency and facilitate a more collaborative approach to planning the next census. Specifically, an operational plan could function as a template for 2020, giving stakeholders a common framework to assess and comment on the design of the census and its supporting infrastructure, the resources needed to execute the design, and the extent to which it could lead to greater cost-effectiveness compared to alternative approaches. Further, it could be used to monitor the Bureau’s progress in implementing its approach and hold the agency accountable for results. Importantly, to the extent that the plan, or aspects of it, are made available using social media tools, it could prompt greater and perhaps more constructive civic engagement on the census by fostering an ongoing dialogue involving individuals and communities of stakeholders throughout the decade.

The Bureau goes to great lengths each decade to improve specific census-taking activities, but these incremental modifications have not kept pace with societal changes that make the population increasingly difficult to locate and count cost-effectively.
The Bureau is fully aware of this problem and has wasted no time in turning the corner on the 2010 Census and launching the planning efforts needed for a more cost-effective enumeration come 2020. Many components are already in place, and a number of assessment and planning activities are already occurring. At the same time, the Bureau has also been responsive to the recommendations we have made in our past work. As these actions gather momentum in the years ahead, it will be important that they put the Bureau on a trajectory that boosts its capacity to conduct an accurate count, control costs, manage risks, and be more nimble in adapting to social, demographic, technological, and other changes that can be expected in the future. It will also be important for Congress to continue its strong oversight of the census to help ensure the progress the Bureau has made thus far continues going forward. We look forward to supporting the Subcommittee in its decision making and oversight of the decennial census.

Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you might have at this time. If you have any questions on matters discussed in this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at [email protected]. Other key contributors to this testimony include Benjamin Crawford, Vijay D’Souza, Dewi Djunaidy, Ronald Fecso, Robert Gebhart, Richard Hung, Signora May, Lisa Pearson, Jonathan Ticehurst, and Timothy Wexler.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
2010 Census: Data Collection Operations Were Generally Completed as Planned, but Long-standing Challenges Suggest Need for Fundamental Reforms. GAO-11-193. Washington, D.C.: December 14, 2010.
2010 Census: Key Efforts to Include Hard-to-Count Populations Went Generally as Planned; Improvements Could Make the Efforts More Effective for Next Census. GAO-11-45. Washington, D.C.: December 14, 2010.
2010 Census: Follow-up Should Reduce Coverage Errors, but Effects on Demographic Groups Need to Be Determined. GAO-11-154. Washington, D.C.: December 14, 2010.
2010 Census: Plans for Census Coverage Measurement Are on Track, but Additional Steps Will Improve Its Usefulness. GAO-10-324. Washington, D.C.: April 23, 2010.
2010 Census: Data Collection Is Under Way, but Reliability of Key Information Technology Systems Remains a Risk. GAO-10-567T. Washington, D.C.: March 25, 2010.
2010 Census: Key Enumeration Activities Are Moving Forward, but Information Technology Systems Remain a Concern. GAO-10-430T. Washington, D.C.: February 23, 2010.
2010 Census: Census Bureau Continues to Make Progress in Mitigating Risks to a Successful Enumeration, but Still Faces Various Challenges. GAO-10-132T. Washington, D.C.: October 7, 2009.
2010 Census: Census Bureau Should Take Action to Improve the Credibility and Accuracy of Its Cost Estimate for the Decennial Census. GAO-08-554. Washington, D.C.: June 16, 2008.
2010 Census: Census at Critical Juncture for Implementing Risk Reduction Strategies. GAO-08-659T. Washington, D.C.: April 9, 2008.
Information Technology: Significant Problems of Critical Automation Program Contribute to Risks Facing 2010 Census. GAO-08-550T. Washington, D.C.: March 5, 2008.
Information Technology: Census Bureau Needs to Improve Its Risk Management of Decennial Systems. GAO-08-259T. Washington, D.C.: December 11, 2007.
2010 Census: Census Bureau Has Improved the Local Update of Census Addresses Program, but Challenges Remain. GAO-07-736. Washington, D.C.: June 14, 2007.
Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005.
Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. GAO-04-394G. Washington, D.C.: March 1, 2004.
Comptroller General’s Forum, High-Performing Organizations: Metrics, Means, and Mechanisms for Achieving High Performance in the 21st Century Public Management Environment. GAO-04-343SP. Washington, D.C.: February 13, 2004.
2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO added the 2010 Census to its list of high-risk programs in 2008 in part because of (1) long-standing weaknesses in the Census Bureau's (Bureau) information technology (IT) acquisition and contract management function, (2) difficulties in developing reliable life-cycle cost estimates, and (3) key operations that were not tested under operational conditions. These issues jeopardized the Bureau's readiness for the count. Moreover, societal trends, such as concerns over privacy, have made a cost-effective census an increasingly difficult challenge. At about $13 billion, 2010 was the costliest U.S. Census in history. As requested, this testimony focuses on lessons learned from the 2010 Census and initiatives that show promise for producing a more cost-effective population count in 2020. This testimony is based on completed and ongoing work, including an analysis of Bureau documents, interviews with Bureau officials, and field observations of census operations in urban and rural locations across the country.

In February 2011, GAO removed the 2010 Census from its High-Risk List because the Bureau generally completed its peak enumeration activities and released congressional apportionment and redistricting data consistent with its operational plans. The Bureau improved its readiness for the census by strengthening its risk management activities, enhancing systems testing, and meeting regularly with executives from its parent agency, the Department of Commerce. Strong congressional oversight was also critical. Still, the 2010 Census required an unprecedented commitment of resources, and the cost of enumerating each housing unit escalated from around $16 in 1970 to around $98 in 2010, in constant 2010 dollars. Based on the results of the 2010 and prior censuses, the following four early lessons learned could help secure a more cost-effective enumeration in 2020:

1. Reexamine the Nation's Approach to Taking the Census: The Bureau has used a similar approach to count most of the population since 1970. However, the approach has not kept pace with changes to society. Moving forward, it will be important for the Bureau to rethink its approach to planning, testing, implementing, and monitoring the census to address long-standing challenges.

2. Assess and Refine Existing Operations, Focusing on Tailoring Them to Specific Locations and Population Groups: The Bureau plans to complete over 70 studies of the 2010 Census covering such topics as the Bureau's publicity efforts and field operations. As this research is completed, it will be important for the Bureau to assess the value added of each operation in order to determine how best to allocate its resources for 2020.

3. Institutionalize Efforts to Address High-Risk Areas: Focus areas include incorporating best practices for IT acquisition management; developing reliable cost estimates; and ensuring key operations are fully tested, in part by developing clearly stated research objectives, a thoroughly documented data collection strategy, and methods for determining the extent to which specific activities contributed to controlling costs and enhancing quality.

4. Ensure That the Bureau's Management, Culture, and Business Practices Align with a Cost-Effective Enumeration: The Bureau will need to ensure that its organizational culture and structure, as well as its approach to strategic planning, human capital management, collaboration, and other internal functions, are focused on delivering more cost-effective outcomes.
The Bureau has launched an ambitious planning program for 2020. As these actions gain momentum, it will be important that they enhance the Bureau's capacity to control costs, ensure quality, and adapt to future technological and societal changes. GAO is not making new recommendations in this testimony, but past reports recommended that the Bureau strengthen its testing of key IT systems, better document and update its cost estimates, and develop an operational plan that integrates performance, budget, and other information. The Bureau generally agreed with GAO's findings and recommendations and is taking steps to implement them.
The Federal Property and Administrative Services Act of 1949, as amended (40 U.S.C. 471-486), places responsibility for the disposition of government real and personal property with the General Services Administration. That agency delegated disposal of DOD personal property to the Secretary of Defense, who in turn delegated it to the Defense Logistics Agency. The Defense Reutilization and Marketing Service, a component of the Defense Logistics Agency, carries out the disposal function. The complexity of DOD’s disposal process is characterized by the massive volume of excess property that is handled. In fiscal year 1997, DOD disposed of millions of items with a reported acquisition value (the amount originally paid for the items) of almost $22 billion.

DOD, through the Office of the Deputy Under Secretary of Defense (Logistics), provides overall guidance for determining if parts should be disposed of. The military services and the Defense Logistics Agency have responsibility for determining if specific parts under their management are excess to their needs. Parts that are excess enter the disposal process and are sent to one of 154 worldwide Defense Reutilization and Marketing Offices (DRMO), or disposal yards. DRMO personnel inspect the parts upon receipt for condition; acquisition value; and special handling requirements, such as those for military-sensitive items. DRMOs have disposition priorities, consistent with legislative requirements, to make the excess parts available for reutilization within DOD or transfer to other federal agencies. Parts that remain are designated as surplus and can be donated to eligible entities, such as state and local governments, among many others. After these priorities have been served, parts that remain may be sold to the general public as usable items or scrap. Figure 1 shows the process for disposing of parts (the priority ordering is also sketched at the end of this background discussion).

The military services assign a code the first time they buy spare parts for new aircraft, ships, land vehicles, and other military weapons and equipment to indicate whether the parts contain technology conferring a military capability. The military services are also responsible for reviewing and validating the assigned codes once every 5 years. Because of concerns about safeguarding military technology, DOD issued specific policies and procedures relating to the disposal of these parts. For parts that have military technology involving weapons, national security, or military advantages inherent in them, DOD requires the parts to be demilitarized so that the technology remains within DOD. Demilitarization makes the parts unfit for their originally intended purpose, either by partial or total destruction, before or as a condition of sale to the public. The term includes mutilation, cutting, crushing, scrapping, melting, burning, or alteration that destroys the military technology in the parts.

DOD also has a program to identify and prevent parts with potential flight safety risks from being sold through the disposal process. In our 1994 report, we cited concerns from the Federal Aviation Administration and the Department of Transportation’s Inspector General that DOD aircraft parts, sold as scrap, illegally reentered civil aviation as usable. As a result, in July 1995 DOD initiated a departmentwide Flight Safety Critical Aircraft Parts program to identify and destroy surplus parts that could cause an aircraft to crash if the parts fail during a flight. The goal of the program is to prevent potentially dangerous parts from being sold by the DRMOs.
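The disposition priorities described above amount to a fixed precedence: reutilization within DOD, then transfer to another federal agency, then donation, then public sale. The sketch below is a minimal illustration of that ordering; the ExcessPart type and its demand flags are hypothetical stand-ins for the screening that happens at each stage, not fields from any DOD system.

```python
from dataclasses import dataclass

@dataclass
class ExcessPart:
    # Hypothetical flags standing in for the result of each screening stage.
    dod_demand: bool = False      # wanted for reutilization within DOD
    federal_demand: bool = False  # wanted by another federal agency
    donee_demand: bool = False    # wanted by an eligible donee, e.g., a state

def disposition(part: ExcessPart) -> str:
    """Apply the disposal priorities in order; the first match wins."""
    if part.dod_demand:
        return "reutilization within DOD"
    if part.federal_demand:
        return "transfer to another federal agency"
    if part.donee_demand:
        return "donation as surplus"
    return "sale to the public as a usable item or as scrap"

print(disposition(ExcessPart(federal_demand=True)))
# transfer to another federal agency
```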
Because DOD has been lax in following its existing disposal policies and procedures, it inadvertently sold or offered for sale parts with military technology intact. This situation occurred because (1) the military services assigned the wrong demilitarization codes to parts with military technology that needed to be protected, (2) an initiative intended to correct inaccurately assigned demilitarization codes did not ensure that data systems were updated with the corrected codes, and (3) the methods DOD used to demilitarize some parts did not adequately destroy the military technology contained in the parts. DOD has some actions underway to address these problems, but none have been fully implemented.

Demilitarization codes are supposed to help the DRMOs determine which parts have military technology that should be destroyed before the parts are sold to the public as scrap or without the technology. However, DOD investigations and our analysis of 264 judgmentally selected items available for sale at 5 DRMOs showed that the military services often assigned the wrong demilitarization codes to parts with military technology that needed to be protected. When parts are miscoded, there is a high probability that those without military technology will be unnecessarily destroyed and those with military technology will be inadvertently sold. Such actions waste time and resources; increase costs; and, in the latter case, inadvertently make weapons and military technology available to the public.

DOD has had problems with the accuracy of assigned demilitarization codes for many years and has initiated several projects to address these problems. For example, DOD (1) established liaisons in 1991 with federal investigative agencies to help find miscoded parts with military technology that are in the hands of the public, (2) assigned its own investigators, also in 1991, to monitor DRMO activities and identify miscoded parts, and (3) validated demilitarization codes from 1993 to 1995 for various weapon systems. Despite these initiatives, DOD documents show numerous instances of military parts and equipment with military technology intact that continue to be made available to the public. For example, in 1995 and 1997, DOD investigators identified hundreds of items that contained military technology sold by disposal offices. Many of these items were classified at the secret or confidential levels. The items included grenade launchers, bomb ejector arming units, radar circuit card assemblies, a guided missile launcher, key components of intercontinental ballistic missiles, weapon system technical data, electronic warfare equipment, sophisticated weapon fire control equipment, entire missiles and missile launchers, automatic weapons, guided and cluster bombs, coders, decoders, encoders, rocket launchers, secure communications equipment, and military night vision devices.

The Defense Reutilization and Marketing Service also provided us with a listing of 1,684 different items it identified as miscoded. The items contained military technology but were incorrectly coded as having no military technology implications. We obtained disposal office sales history information for 881 of these items. There were 7,702 transactions involving the sale of these 881 items to the public between 1995 and 1997. These items included parts for weapons, guided missiles, and sensitive circuit card assemblies.
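At bottom, that sales-history analysis is a join between a list of miscoded items and a transaction log. The sketch below shows the shape of such a cross-check; the item numbers and dates are invented for illustration, since the underlying DOD data are not reproduced here.

```python
# Hypothetical item numbers flagged as miscoded (illustrative only).
miscoded_items = {"1005-00-0001", "1427-00-0002", "5998-00-0003"}

# Hypothetical sales history: (item number, sale date) pairs.
sales_history = [
    ("1005-00-0001", "1995-03-14"),
    ("5998-00-0003", "1996-07-02"),
    ("5998-00-0003", "1997-01-21"),
    ("8305-00-9999", "1996-05-05"),  # correctly coded; should not match
]

# Count public-sale transactions that involved a miscoded item.
matches = [(item, date) for item, date in sales_history if item in miscoded_items]
items_sold = {item for item, _ in matches}
print(f"{len(matches)} transactions involving {len(items_sold)} miscoded items")
# 3 transactions involving 2 miscoded items
```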
Our analysis of 264 judgmentally selected items at 5 DRMOs showed that the wrong demilitarization codes were continuing to be assigned to some parts. The 264 items were recorded as either sold or available for sale to the public. We selected these items because they were parts for weapons and weapon systems but were shown in the disposal offices’ records as having no military technology that needed to be protected. We reviewed item characteristics and discussed the coding accuracy with disposal office personnel. Disposal office officials told us that, in their judgment, the demilitarization codes shown in the disposal offices’ records as having no military technology implications were likely inaccurate for 145 of the sample items. For selected items that appeared to be inaccurately coded, we contacted the item managers and equipment specialists and discussed the accuracy of the assigned demilitarization codes. The item managers and equipment specialists generally confirmed that the demilitarization codes for these items were inaccurate. For example, one of the items that was available for sale to the public at the DRMO in Kaiserslautern, Germany, was a waveguide assembly used in communications equipment. The item was coded as having no military technology implications. The equipment specialist confirmed that the assigned demilitarization code was incorrect because the part contained technology involving special military satellite communications. The specialist corrected the demilitarization code.

According to officials at the five disposal offices visited, miscoded items account for 25 percent of their property management workload. They explained that, because of constant employee turnover, receiving personnel must be continually trained to screen for incorrect codes. Also, potentially miscoded items have to be reported to the Defense Reutilization and Marketing Service, and items with military technology requiring demilitarization must be stored apart from other items. Miscoding can have a significant impact on the disposal office workload. For example, the Kaiserslautern DRMO had a backlog of 29 semi-truck trailers full of material waiting to be processed for disposal. Officials stated that time spent on miscoded items significantly affected their ability to process the backlog and that more trailer loads of material were being received daily.

DRMOs also inadvertently offered to sell some parts with military technology intact because DRMO personnel made errors in consolidating parts having military technology implications with parts not having these implications. This occurred when several single items were accumulated together and offered for sale as a batch lot. Although the batch lot was recorded as having no military technology implications, some individual parts in the lot contained protected military technology. For example, a batch lot of 42 weapons parts available for sale to the public at the Kaiserslautern DRMO contained 7 parts with military technology that should have been destroyed (see fig. 2). DRMO personnel subsequently removed the military technology items from the batch lot. DRMO supervisors stated that action would be taken to educate personnel on the required content of batch lots and that supervisory checks of batch lot contents would be made periodically to ensure that no parts with military technology are included in future batch lots.
In our October 1997 report, we recommended that DOD improve the accuracy of assigned demilitarization codes by providing its personnel with guidance on selecting appropriate demilitarization codes that includes the specific details necessary to make appropriate decisions. DOD agreed with our recommendation and stated that it would work with the military services and the Defense Logistics Agency to determine the feasibility of departmentwide use of a worksheet that provides personnel with the specific details necessary to make prudent decisions on selecting the appropriate demilitarization codes. However, as of May 1998, DOD had not started using a worksheet departmentwide.

Because DOD has had long-standing problems with the accuracy of assigned demilitarization codes, in 1993 the Defense Reutilization and Marketing Service developed a program for disposal offices to identify and prevent items with military technology from being sold. However, the disposal offices continued to sell parts with military technology intact because personnel often did not update their data systems with corrected codes. Under the program, disposal office personnel check the assigned demilitarization code. If the personnel believe an item has been miscoded, the item is “challenged.” They send a written report concerning the potential coding error to the Defense Reutilization and Marketing Service’s challenge program office, where the item is then recorded in the program’s database as an open case. Challenge program personnel conduct research and discuss the assigned code with item managers and equipment specialists from the military services. After the correct code is determined, the item is recorded in the program’s database as a closed case. Challenge program personnel are then supposed to manually enter the correct code into the data system used by disposal offices to manage surplus parts, and the equipment specialists are supposed to enter the correct code into the data system used by the military services to catalog the characteristics of the parts.

The disposal offices continued to sell parts with military technology intact because the challenge program was not effectively implemented. Challenge program personnel and equipment specialists did not always update their data systems with the corrected codes after challenge program cases were closed. For example, disposal office personnel challenged the code for an automatic machine gun firing mechanism, which indicated that the part was appropriate for sale to the public. Research by challenge program personnel revealed that the part should be destroyed because it is the mechanism that enables the machine gun to fire automatically. However, Defense Reutilization and Marketing Service sales records show that 10 of the automatic machine gun mechanisms were later sold at a public auction because challenge program personnel and military service equipment specialists did not update their data systems with the correct code.

To determine if this problem continued to exist, we asked the Defense Reutilization and Marketing Service to identify the number of instances in which either challenge program personnel or equipment specialists did not update their data systems with the corrected codes. Defense Reutilization and Marketing Service analyses showed that challenge program personnel did not enter corrected codes into the disposal data system for 2,920 of 35,981 (8 percent) closed cases.
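Verifying that corrected codes actually reached each data system is a mechanical comparison of closed cases against system records. The sketch below is a minimal illustration of that check; the item numbers and single-letter codes are invented placeholders, since DOD’s actual demilitarization codes and record layouts are not reproduced here.

```python
# Hypothetical closed challenge cases: item number -> corrected code.
closed_cases = {"1005-00-0001": "D", "1427-00-0002": "Q", "5998-00-0003": "D"}

# Hypothetical current codes carried by each data system.
disposal_system   = {"1005-00-0001": "D", "1427-00-0002": "A", "5998-00-0003": "D"}
cataloging_system = {"1005-00-0001": "A", "1427-00-0002": "A", "5998-00-0003": "D"}

# Flag closed cases whose corrected code never reached a given system.
for system_name, system in (("disposal", disposal_system),
                            ("cataloging", cataloging_system)):
    stale = [item for item, code in closed_cases.items() if system.get(item) != code]
    print(f"{system_name} system: {len(stale)} of {len(closed_cases)} cases not updated")
# disposal system: 1 of 3 cases not updated
# cataloging system: 2 of 3 cases not updated
```

The same comparison, run between the two systems themselves rather than against closed cases, is essentially the December 1997 analysis described below that found 86,217 code mismatches.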
In November 1997, the Service corrected this problem by implementing a computer program that automatically updates the disposal data system when a case is closed in the challenge program. This automation eliminates the need for manual updates to the disposal data system. However, DOD has not corrected a larger problem involving cataloging data system updates that have to be made by the military services’ equipment specialists. The specialists had not made the required changes for 26,278 of the 35,981 (73 percent) closed cases that the challenge program identified with incorrect codes. In June 1998, DOD officials stated that the equipment specialists had attempted to update the cataloging data system but the system did not accept the changes. They are researching the cause of the problem.

We also asked the Defense Reutilization and Marketing Service to determine the number of instances in which the same item had different demilitarization codes in the two data systems. In December 1997, the Service compared the demilitarization codes in the cataloging data system used by the military services with the codes for the same items in the data system used by the disposal offices. The comparison identified 86,217 instances in which the same item had different demilitarization codes. We did not make an analysis to determine which codes were correct. However, the Defense Reutilization and Marketing Service official responsible for the challenge program believed the disposal office data system codes were more accurate than the codes in the cataloging data system because the latter system had not been updated with challenge program changes. The official said that only the military services can update the cataloging system with the correct demilitarization codes.

DOD 4160.21-M-1, Defense Demilitarization Manual, provides instructions on how surplus parts with military technology should be destroyed. However, DRMOs do not always adequately destroy parts with military technology associated with weapons and weapon systems. In some cases, parts containing recoverable military technology have been sold to the public and foreign countries. Purchasers of demilitarized parts have put them back together or reverse engineered the technology and remanufactured the parts so they function as they did originally. According to DOD officials, the lack of adequate guidance to DRMOs on how to destroy the military technology inherent in some items has been a long-standing problem. For example, when DOD conducted a study in 1995 to determine if parts were being sold with military technology, it identified about 500 military weapons and weapon-related parts requiring demilitarization that were being resold by private companies. This listing included sensitive items such as F-18 guided missile launchers, technical data for Apache and Cobra helicopters, and F-15 inertial navigation systems. In some instances, the companies obtained the parts from disposal offices that did not adequately destroy the military technology before they sold the parts. Guidance to DRMOs on how to destroy the military technology inherent in some items continues to be inadequate. Officials at the five DRMOs told us that they still did not have adequate demilitarization instructions and that, as a result, the demilitarization process is simply one of trial and error.
For example, at the DRMO at Fort Hood, Texas, we noted that a part used as an ammunition feeder for an automatic gun was coded for total destruction, but no specific guidance exists on how to destroy the part (see fig. 3). DRMO officials stated that their demilitarization process previously involved cutting the shaft of the feeder into two pieces. However, the officials discovered that, when the part is demilitarized in this manner, it can be welded back together and used as it was originally intended. The DRMO personnel then began cutting off all of the feeders’ appendages, which rendered the part unusable. At each of the DRMOs we visited, officials stated that losing military technology through the disposal process is a serious problem that they work on daily to prevent. DRMO officials said, however, that until the accuracy of assigned demilitarization codes and the destruction guidance are improved, disposal offices will continue to inadvertently sell some items with military technology that could be used by the public.

To overcome these problems, DOD is considering a proposal to assign the responsibility for attaining accuracy in demilitarization coding to a single office in the Defense Reutilization and Marketing Service and is planning to provide the DRMOs with computerized images on how to destroy military technology in military parts. Historically, the military services have been responsible for assigning demilitarization codes to parts for new weapon systems and for ensuring that the assigned demilitarization codes are accurate throughout the life of the weapon system. According to DOD, over 3,000 personnel in dozens of locations are responsible for assigning demilitarization codes to approximately 12,000 new items entering the DOD supply system each month. These personnel are also responsible for validating code accuracy for items already in the system. In a 1997 report, DOD’s Inspector General recommended that DOD consolidate the responsibility to assign, challenge, and maintain demilitarization codes into a single office within the Defense Reutilization and Marketing Service. In its final comments to the Inspector General’s report, DOD stated that it planned to proceed with the consolidation. However, DOD has tasked the Defense Science Board to study the entire DOD demilitarization program and plans to use the results of this study, expected later in 1998, in deciding whether to implement the consolidation.

DOD also is developing a system to provide DRMOs with computerized images on how to destroy military technology in military parts. Defense Logistics Agency officials said that the imaging system will include instructions, illustrations, and destruction techniques for over 100,000 different parts. The officials said that the system will not need to include images for all parts that contain military technology because a destruction technique for a specific part in a weapon system can be used for all similar parts in other weapon systems. According to Defense Logistics Agency officials, the imaging system could be available to the DRMOs by late 1998 via the Internet. The officials also stated that the success of this system will depend on whether the military services provide the required instructions, illustrations, and techniques on how to demilitarize parts and whether this information is kept up to date. The Defense Reutilization and Marketing Service also has started two pilot projects to centralize the demilitarization process at fewer DRMO locations.
The Service expects the centralized sites to destroy military technology completely and consistently in accordance with imaging system instructions.

In July 1995, DOD began departmentwide implementation of a Flight Safety Critical Aircraft Parts program that included six major initiatives to address concerns about aircraft parts with safety risks being sold to the public. However, DOD is making slow and uneven progress in implementing these initiatives. DOD has not set timelines for implementing the program. Further, none of the DOD components have fully implemented all of the initiatives. As a result, DOD’s disposal offices continue to sell potentially dangerous flight safety critical aircraft parts to the public.

Recognizing the potential danger in having military aircraft parts with flight safety risks sold through the disposal process and then being reused on commercial and defense aircraft, DOD started to develop a flight safety program in 1994. Prior to that time, DOD had been selling parts with potential flight safety risks. In some instances, the potentially dangerous parts reentered defense and civil aviation and may have been reused on aircraft. According to the Department of Transportation’s Inspector General, for example, a parts distributor misrepresented severely worn aircraft parts as usable and sold them to a civil aviation industry customer for reuse. The distributor bought worn-out scrap military jet aircraft engine combustion liner assemblies, attempted to refurbish the assemblies by welding the cracks and otherwise making the assemblies appear serviceable, and modified the assemblies so that they would fit the civil aviation version of the jet engine.

In May 1994, DOD formed a team that consisted of representatives from the Office of the Deputy Under Secretary of Defense (Logistics), the military services, the Defense Logistics Agency, the Federal Aviation Administration, the General Services Administration, and the Coast Guard. The team’s mission was to develop a departmentwide program to identify and prevent parts with potential flight safety risks from being sold intact through disposal offices. DOD defines a flight safety critical aircraft part as any part, assembly, or installation containing a critical characteristic whose failure, malfunction, or absence could cause a catastrophic failure resulting in loss of or serious damage to the aircraft, or an uncommanded engine shutdown resulting in an unsafe condition. In May 1995, the team identified six initiatives for the military services and the Defense Logistics Agency to follow when implementing the flight safety program.
The program initiatives were to:

- standardize and incorporate the definition for items with flight safety implications into regulations and directives and make procedural changes as necessary;
- identify parts considered flight safety critical;
- code items with flight safety implications in provisioning, cataloging, and supply data systems and records, designating that special handling is required when the item is sent to disposal;
- maintain historical documentation on all flight safety items;
- require that historical documentation accompany parts sent to disposal offices and that flight safety items without historical documentation be destroyed before disposal; and
- require parts manufacturers to provide an Airworthiness Approval Tag for all flight safety items delivered to DOD that have both civil and military aviation applications, and develop procedures for providing the Airworthiness Approval Tag to the disposal offices when such flight safety items are no longer needed by DOD.

Under the program, parts with flight safety implications must either be accompanied by paperwork showing that the parts are safe to use or be destroyed. DOD began departmentwide implementation of the flight safety program in July 1995. The DOD directive initiating the flight safety program did not establish milestones and priorities for accomplishing the program initiatives. DOD gave the responsibility for setting program timetables and priorities to the military services and the Defense Logistics Agency. However, our review of documents and our discussions with officials from DOD and its components showed that the components have not aggressively pursued program implementation. As a result, after 3 years, none of the DOD components have fully implemented all of the initiatives, but some have made greater progress than others. The varying progress is illustrated by two of the key initiatives discussed below.

According to DOD, the military services must review the flight safety characteristics of tens of thousands of aircraft parts, and determining which parts are flight safety critical is difficult. Table 1 shows that the Army has made the most progress and the Navy has made the least in identifying parts for inclusion in the flight safety program. Flight safety critical aircraft parts that are not identified may be sold through DOD’s disposal system and reenter civil or defense aviation as usable. As shown in table 1, the Army identified 4,549 items with flight safety implications for inclusion in the program. The Army also identified 730 nonrepairable parts used on Army aircraft, but managed by the Defense Logistics Agency, as having flight safety implications. The Army is continuing its efforts to identify additional parts with flight safety implications. The Air Force identified 878 mostly repairable engine components as having flight safety implications. The Air Force also identified 87 nonrepairable parts used on Air Force aircraft engines, but managed by the Defense Logistics Agency, as having flight safety implications. The Air Force has not developed any time frames for identifying airframe components and other nonrepairable parts with flight safety implications. The Navy has made the slowest progress among the military services in identifying flight safety parts. The Navy official first assigned responsibility for implementing the flight safety program initiatives said that, because of higher priority work, the Navy initially could not allocate resources to the program.
However, in late 1997, the Navy started identifying aircraft parts with flight safety implications. Defense Logistics Agency officials stated that they do not have the engineering expertise to assess flight safety implications of parts. This responsibility rests with the military services, which have identified 817 flight safety parts managed by the Agency.

One of the initiatives involves coding items with flight safety implications in provisioning, cataloging, and supply data systems and records. The code designates that special handling is required when the item is sent to disposal. Each of the DOD components has to make data system changes to accommodate flight safety identifier codes. The Air Force’s data systems, in their current configuration, do not have the data fields needed to recognize the flight safety identifiers. However, the Air Force anticipates that all of the system changes necessary to implement the flight safety identifiers will be completed in November 1998. As an interim measure, the Air Force is separately tracking its flight safety items to ensure that documentation showing whether the parts are safe to use accompanies the items when they are processed at the DRMOs.

The Army also has to change its data system to include flight safety identifier codes. This change is needed because the Army’s systems do not have the data fields in place to include identifiers for parts with flight safety implications. The Army plans to install a new automated system that will include the identifier codes, but it has not projected a completion date. As an interim measure, the Army is using a demilitarization code to identify these items. DRMO representatives said that using this code requires the disposal offices to call the item manager for disposition instructions, which is extremely time-consuming and confusing because demilitarization codes should be used only to identify the military technology inherent in the part and not whether the part has flight safety implications. An Army official responsible for the flight safety program stated that identifying flight safety items in this manner causes additional work but is warranted.

Similar to the other DOD components, the Navy’s and the Defense Logistics Agency’s supply data systems do not have the data fields to include the identifier codes for items with flight safety implications. The Navy is in the process of revising the data fields to accommodate flight safety codes and expects the changes to be completed in late 1998. The Defense Logistics Agency expected to correct this problem in June 1998.

DOD’s slow progress in implementing flight safety program initiatives results in the continuing sale of potentially dangerous flight safety critical aircraft parts through the disposal system. In addition, some of the parts that the military services identified as having flight safety implications were sold through DOD’s disposal system without required paperwork showing that the parts were safe to use. Disposal office sales information from October 1994 to March 1998 for parts identified by the Air Force as having flight safety implications shows the disposal offices sold 76,525 of these parts to the public without the appropriate paperwork. For example, the Air Force identified a compressor vane used on F-15 and F-16 aircraft engines as a flight safety part.
According to the Air Force engineer responsible for the engines, if the compressor vane breaks during a flight, its metal fragments would damage the engine and could cause the aircraft to crash. Disposal office records show that, on March 6, 1996, the San Antonio DRMO sold 10,101 of these compressor vanes at a public auction without knowing whether the parts were safe to use. Also, in February 1998, the Kaiserslautern DRMO was offering for sale three tail rotor control assemblies used on the AH-1 Cobra helicopter (see fig. 4). If this part were to fail, the aircraft would spin uncontrollably and crash. The parts were not accompanied by the required documentation stating that they were safe to reuse. DRMO officials said that they had not received any notification from the Army that this item had flight safety implications. Sales history information showed that this same part, described as being in severely worn condition, was sold in July 1996 by the DRMO in Columbus, Ohio, without any assurance that the part was safe to reuse.

While DOD recognizes the dangers associated with selling surplus parts with military technology to the public and has taken certain actions to address the problem, DOD’s disposal offices have inadvertently sold surplus parts with military technology intact. These sales occurred for three reasons. First, the military services assigned the wrong demilitarization codes to the parts. Because guidance was inadequate, codes assigned to parts with military technology incorrectly indicated that the parts did not contain the technology. DOD has been considering ways to address this situation but has not yet reached a final decision. Second, an initiative intended to correct inaccurately assigned demilitarization codes did not ensure that data systems were updated with the corrected codes. As a result, disposal offices continued to sell parts with military technology intact after the codes for the parts were determined to be inaccurately assigned. Personnel responsible for correcting the inaccurately assigned codes did not always update their data systems with the corrected codes. Third, the methods that the disposal offices used to demilitarize some parts did not adequately destroy the military technology contained in the parts. Guidance to disposal offices on how to destroy the military technology inherent in some items was not adequate.

DOD and its components have not aggressively pursued implementation of initiatives to prevent the sale of potentially dangerous flight safety critical aircraft parts through the disposal system. DOD and the components have not set timelines for implementing the flight safety program. Also, none of the components have fully implemented all of the program initiatives, but some have made greater progress than others. For example, at the time our fieldwork was completed, the Army had identified over 4,500 aircraft parts with flight safety implications, whereas the Navy had not identified any aircraft parts with these implications. DOD plans to increase its interaction and involvement in the program, but the military services and the Defense Logistics Agency continue to have problems accomplishing flight safety program initiatives.

We recommend that the Secretary of Defense take the following actions to prevent the sale of parts with military technology and flight safety implications:

- Develop an action plan with specific milestones for addressing the problem of inaccurately assigned demilitarization codes.
In developing the plan, consider (1) the recommendations of the DOD Inspector General and the Defense Science Board, (2) our previous recommendation to provide guidance on selecting appropriate codes, and (3) procedures to ensure that items listed in different data systems have the same demilitarization code in each system.

- Establish milestones for completing the imaging system that will provide guidance on how to destroy the military technology inherent in items.
- Establish milestones for fully implementing the Flight Safety Critical Aircraft Parts program initiatives and institute requirements for the Secretaries of the Army, the Air Force, and the Navy and the Director of the Defense Logistics Agency to periodically report on the progress being made.

In commenting on a draft of this report, DOD agreed with our recommendations but expressed concerns that (1) the report does not fully reflect the progress DOD continues to make in the reported areas and (2) some of the statements in the report are based on information requested of DOD personnel not in a position to provide such information. DOD’s comments are included as appendix II.

With regard to our recommendation for developing an action plan to address inaccurately assigned demilitarization codes, DOD stated that an action plan addressing improvements needed in the demilitarization program will be developed 6 months after publication of the Defense Science Board’s final report, which is expected in the summer of 1998. DOD further stated that the action plan will incorporate milestones for completing the imaging system that will provide guidance on how to destroy the military technology inherent in items. Regarding our recommendation to establish milestones for complete implementation of the flight safety program, DOD stated that new milestones for fully implementing the program will be established no later than October 1998. Also, DOD stated that the military services and the Defense Logistics Agency will continue to report progress toward full implementation of the flight safety program on a quarterly basis to the Office of the Secretary of Defense.

Regarding DOD’s two concerns, our draft report recognized that DOD has initiated several projects to address problems with the demilitarization and flight safety programs. However, we modified the final report to further recognize this in our results in brief and conclusions. With regard to DOD’s comment on information sources, DOD was referring to our discussions of demilitarization coding accuracy with disposal office and Defense Reutilization and Marketing Service personnel. DOD stated that the personnel are not technically qualified to make decisions on coding accuracy and that it is the equipment specialists who are responsible for assigning demilitarization codes. Our report notes that our analysis includes items that had been challenged and closed out after equipment specialists had determined that they had been miscoded. Also, as stated in the report, we judgmentally selected items for review to determine if they had been coded correctly. For those we identified as miscoded, we selectively confirmed our analysis with equipment specialists. Therefore, these steps provide us with confidence that our findings are adequately supported.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director, Defense Logistics Agency; and the Director, Office of Management and Budget.
Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix III.

To determine the Department of Defense’s (DOD) policies and practices for destroying military parts during the disposal process, we met with officials and performed work at the Office of the Deputy Under Secretary of Defense (Logistics), Washington, D.C.; Army, Navy, and Air Force Headquarters, Washington, D.C.; the Defense Logistics Agency, Fort Belvoir, Virginia; the Defense Reutilization and Marketing Service, Battle Creek, Michigan; and the DOD Inspector General, Washington, D.C., and Columbus, Ohio. We also reviewed policies, procedures, disposal and transaction histories, and related records obtained from the Defense Reutilization and Marketing Offices (DRMO) and item managers, and we documented disposal practices. We interviewed policy officials, DRMO personnel, item managers, and equipment specialists.

To obtain information on how surplus parts with military technology and flight safety implications are received and processed for sale, we performed work at five DRMOs, located in San Antonio and Killeen (Fort Hood), Texas, and in Germersheim, Kaiserslautern, and Seckenheim, Germany. We selected these locations because, according to DOD records, they sold large volumes of parts and equipment with military technology implications. We also collected information from item managers, equipment specialists, and policy officials at the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; the Army’s Aviation and Missile Command, Huntsville, Alabama; the Naval Inventory Control Point, Philadelphia, Pennsylvania; and the Defense Supply Centers, Columbus, Ohio, and Richmond, Virginia.

To determine the adequacy of DOD’s policies and procedures to identify and destroy parts with military technology, we discussed procedures, problems, and challenges with officials from the Defense Logistics Agency and the Defense Reutilization and Marketing Service; obtained data and Defense Reutilization and Marketing Service analyses showing instances in which parts got into the wrong hands through the disposal process and in which different demilitarization codes were assigned to the same item in different data systems; and judgmentally selected 264 items (48 at the Texas DRMOs and 216 at the Germany DRMOs) to assess the accuracy of their assigned demilitarization codes. We selected these items for review because they were parts for weapons and weapon systems but at the time of selection were coded in the DRMOs’ records as having no military technology content. We compared the assigned codes with the codes available in DOD’s Demilitarization Manual and discussed the correct codes and military technology implications of the items with disposal office personnel, item managers, and equipment specialists.

For historical perspective and illustrations of past problems, we reviewed the results of prior DOD internal studies and DOD Inspector General reports. We also used documentation and computer data obtained during our prior work on disposal operations. We used the same computer programs, reports, records, and statistics that DOD, the military services, the Defense Logistics Agency, and the Defense Reutilization and Marketing Service use to manage excess and surplus inventories, make decisions, and determine the correct demilitarization codes.
We did not independently determine the reliability of all of these data sources. However, as stated above, we did assess the accuracy of the demilitarization codes by comparing the codes assigned to the same item in different data systems and by comparing assigned codes to the codes available in the Demilitarization Manual. To determine whether parts requiring demilitarization were being adequately destroyed, we reviewed available guidance, interviewed demilitarization personnel at the five DRMOs, and observed items being destroyed.

To determine the status of DOD's flight safety program, we identified DOD's program initiatives and documented the military services' and the Defense Logistics Agency's progress in implementing the program initiatives. We reviewed the policies, procedures, and related records of the military services and the Defense Logistics Agency and held discussions and performed work at the Office of the Deputy Under Secretary of Defense (Logistics). We obtained sales history information from the Defense Reutilization and Marketing Service to determine if some of the parts that the military services have identified with flight safety implications were sold through DOD's disposal system without any paperwork showing that the parts were safe to use. To further determine whether the parts identified by the military services as having flight safety implications are being sold through the disposal process, we obtained a listing of the flight safety items identified by the Air Force and the Army. We compared these items with the listing of parts being offered for sale by the Kaiserslautern DRMO. We interviewed DRMO personnel to determine whether they were aware that the items we identified had flight safety implications.

We performed our review between November 1997 and May 1998 in accordance with generally accepted government auditing standards.

Major contributors to this report were Roger Tomlinson, Jackie Kriethe, Bonnie Carter, and Frederick Lyles.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) disposal process for surplus parts with both military technology and flight safety risks, focusing on DOD's efforts to: (1) identify and destroy parts with military technology; and (2) implement a flight safety program to prevent aircraft parts with potential flight safety risks from being sold through the disposal process. GAO noted that: (1) while DOD recognizes the dangers associated with selling surplus parts with military technology to the public and has taken certain actions to address the problem, DOD's disposal offices have inadvertently sold surplus parts with military technology intact; (2) these sales occurred for three reasons; (3) the military services assigned the wrong demilitarization codes to the parts; (4) because guidance was inadequate, codes assigned to parts with military technology incorrectly indicated that the parts did not contain the technology; (5) DOD has been considering ways to address this situation but has not yet reached a final decision; (6) an initiative intended to correct inaccurately assigned demilitarization codes did not ensure that data systems were updated with the corrected codes; (7) as a result, disposal offices continued to sell parts with military technology intact after the codes for the parts were determined to be inaccurately assigned; (8) personnel responsible for correcting the inaccurately assigned codes did not always update their data systems with the corrected codes; (9) the methods that the disposal offices used to demilitarize some parts did not adequately destroy the military technology contained in the parts; (10) guidance to disposal offices on how to destroy the military technology inherent in some items was not adequate; (11) DOD and its components have not aggressively pursued implementation of initiatives to prevent the sale of potentially dangerous flight safety critical aircraft parts through the disposal system; (12) DOD and the components have not set timelines for implementing the flight safety program; (13) also, none of the components have fully implemented all of the program initiatives, but some have made greater progress than others; (14) for example, at the time GAO's fieldwork was completed, the Army had identified over 4,500 aircraft parts with flight safety implications, whereas the Navy had not identified any aircraft parts with these implications; and (15) DOD plans to increase its interaction and involvement in the program, but the military services and the Defense Logistics Agency continue to have problems accomplishing flight safety program initiatives.
Information security is a critical consideration for any agency that depends on information systems and computer networks to carry out its mission and is especially important for a government corporation such as FDIC, which has responsibilities to oversee the financial institutions that are entrusted with safeguarding the public's money. While the use of interconnected electronic information systems allows the corporation to accomplish its mission more quickly and effectively, it also exposes FDIC's information to threats from sources internal and external to the agency. Internal threats include errors, as well as fraudulent or malevolent acts by employees or contractors working within the agency. External threats include the ever-growing number of cyber-based attacks that can come from a variety of sources such as hackers, criminals, foreign nations, terrorists, and other adversarial groups. Potential cyber attackers have a variety of techniques at their disposal, which can vastly enhance the reach and impact of their actions. For example, cyber attackers do not need to be physically close to their targets, their attacks can easily cross state and national borders, and they can preserve their anonymity. Additionally, advanced persistent threats—where an adversary possessing sophisticated levels of expertise and significant resources attacks using physical and cyber methods to achieve its objectives—pose increasing risks. Further, the interconnectivity among information systems presents increasing opportunities for such attacks. Indeed, reports of security incidents from federal agencies are on the rise: the number of incidents reported by federal agencies to the United States Computer Emergency Readiness Team (US-CERT) grew from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014.

Compounding the growing number and types of threats are the deficiencies in security controls on the information systems at federal agencies, which have resulted in vulnerabilities in both financial and nonfinancial systems and information. These deficiencies continue to place assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, and critical operations at risk of disruption. Accordingly, we have designated information security as a government-wide high-risk area since 1997, a designation that remains in force today.

Federal law and guidance specify requirements for protecting federal information and information systems. The Federal Information Security Management Act of 2002 (FISMA) provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. To accomplish this, FISMA requires each agency to develop, document, and implement an agency-wide information security program to provide information security for the information and systems that support its operations and assets, using a risk-based approach to information security management. Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations.
FISMA also assigned to the National Institute of Standards and Technology (NIST) the responsibility for developing standards and guidelines that include minimum information security requirements. To this end, NIST has issued several publications to provide guidance for agencies in implementing an information security program. For example, NIST Federal Information Processing Standard (FIPS) 199 provides requirements on how agencies should categorize their information and information systems, and NIST Special Publication (SP) 800-53 provides guidance to agencies on the selection and implementation of information security and privacy controls for systems.

FDIC was created by Congress to maintain the stability of and public confidence in the nation's financial system by insuring deposits, examining and supervising financial institutions, and resolving troubled institutions. FDIC is an independent agency of the federal government, which Congress created in 1933 in response to the thousands of bank failures that had occurred throughout the late 1920s and early 1930s. FDIC insures deposits in banks and thrift institutions for at least $250,000; identifies, monitors, and addresses risks to the deposit insurance funds; and limits the effect on the economy and the financial system when a bank or thrift institution fails. FDIC administers two funds in carrying out its mission:

The Deposit Insurance Fund (DIF) has the primary purposes of (1) insuring the deposits and protecting the depositors of banks and savings associations (insured depository institutions) and (2) resolving failed insured depository institutions in a manner that will result in the least possible cost to the fund. In cooperation with other federal and state agencies, FDIC promotes the safety and soundness of insured depository institutions by identifying, monitoring, and addressing risks to the DIF.

The Federal Savings and Loan Insurance Corporation Resolution Fund (FRF) is responsible for the sale of the remaining assets and the satisfaction of the liabilities associated with the former Federal Savings and Loan Insurance Corporation and the former Resolution Trust Corporation.

FDIC maintains the DIF and the FRF separately to support their respective functions. FDIC relies extensively on computerized systems to support its mission, including financial operations, and to store the sensitive information that it collects. The corporation uses local and wide area networks to interconnect its systems and a layered approach to security defense. To support its financial management functions, FDIC uses a corporate-wide system that functions as a unified set of financial and payroll systems that are managed and operated in an integrated fashion; a system to calculate and collect FDIC deposit insurance premiums and Financing Corporation bond principal and interest amounts from insured financial institutions; a web-based application that provides full functionality to support franchise marketing, asset marketing, and asset management; an application and web portal to provide acquiring institutions with a secure method for submitting required data files to FDIC; computer programs used to derive the corporation's estimate of losses from shared loss agreements; a system to request access to and receive permission for the computer applications and resources available to its employees, contractors, and other authorized personnel; and a primary receivership and subsidiary financial processing and reporting system.
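The FIPS 199 categorization mentioned above can be made concrete with a small sketch. The following Python fragment is a simplified illustration under stated assumptions, not FDIC practice: the information types and impact levels are invented, and the rule shown is the familiar one of assigning each security objective the highest impact level among the information types a system handles.

# Simplified sketch of FIPS 199-style categorization: for each security
# objective, a system inherits the highest impact among its information types.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

# Hypothetical information types: (confidentiality, integrity, availability)
info_types = {
    "deposit insurance premiums": ("moderate", "high", "moderate"),
    "public press releases":      ("low", "moderate", "low"),
}

def system_category(types):
    """Return the per-objective high-water mark across information types."""
    objectives = ("confidentiality", "integrity", "availability")
    worst = [max((v[i] for v in types.values()), key=LEVELS.get)
             for i in range(3)]
    return dict(zip(objectives, worst))

print(system_category(info_types))
# {'confidentiality': 'moderate', 'integrity': 'high', 'availability': 'moderate'}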
Under FISMA, the Chairman of FDIC is responsible for, among other things, providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and delegating to the corporation’s Chief Information Officer the authority to ensure compliance with the requirements imposed on the agency under FISMA. The Chief Information Officer is responsible for developing and maintaining a corporate-wide information security program and for developing and maintaining information security policies, procedures, and control techniques that address all applicable requirements. The Chief Information Officer also serves as the authorizing official with the authority to approve the operation of the information systems at an acceptable level of risk to the corporation. The Chief Information Security Officer reports to the Chief Information Officer and serves as the Chief Information Officer’s designated representative. The Chief Information Security Officer is responsible for (1) the overall support of assessment and authorization activities; (2) the development, coordination, and implementation of FDIC’s security policy; and (3) the coordination of information security and privacy efforts across the corporation. Although FDIC developed and implemented elements of its information security program, the corporation did not always implement key program activities. Additionally, FDIC has designed and documented numerous information security controls intended to protect its key financial systems; however, shortcomings existed in the implementation of other information security controls. By mitigating known information security weaknesses and ensuring that information security controls are consistently applied, FDIC could continue to reduce risks and better protect its sensitive financial information and resources from inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction. An entity-wide information security management program is the foundation of a security control structure and a reflection of senior management’s commitment to addressing security risks. The security management program should establish a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, or improperly implemented; and controls may be inconsistently applied. 
FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes

plans for providing adequate information security for networks, facilities, and systems;

security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security;

policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;

a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in its information security policies, procedures, or practices; and

periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems.

FDIC made improvements in developing and documenting many elements of its corporate information security program. For example, during 2014, FDIC completed actions to address two weaknesses we previously reported related to system security planning. Specifically, the corporation had ensured that the system security plans for three applications thoroughly described each control and included all required information, and it had ensured that descriptions of common controls were adequately documented. Also during 2014, FDIC addressed our prior recommendation to ensure that those with administrative-level access have completed the requisite rules of behavior training upon receiving access and each year thereafter.

FDIC had taken steps to document, implement, and improve policies for information security. Specifically, in June 2014, the corporation formalized a new policy on security patch management. The policy defines risk categories for patches, time frames for applying patches based on the risk categories and platform-specific requirements, and roles and responsibilities for patch management (a brief sketch of this kind of policy follows below). Additionally, the FDIC Office of the Inspector General reported in November 2014 that FDIC had drafted a new information technology (IT) security risk management program policy that was designed to align with NIST and Office of Management and Budget guidance and reflect the corporation's information security risk management program and governance structure.

FDIC also addressed many weaknesses that we previously identified in its information systems supporting financial processing. Specifically, during 2014, FDIC implemented 27 of the 36 recommendations pertaining to unaddressed security weaknesses that we previously reported, and actions to correct or mitigate the remaining 9 weaknesses were in progress.
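As a rough illustration of the patch management policy described above, the sketch below maps patch risk categories to apply-by time frames and computes a deadline. It is a hypothetical example: the category names are generic and the day counts are assumptions, not FDIC's actual policy values.

# Illustrative sketch of a patch policy of the kind described: risk
# categories map to apply-by time frames. Day counts are invented.
from datetime import date, timedelta

APPLY_WITHIN_DAYS = {"critical": 7, "high": 30, "moderate": 60, "low": 90}

def patch_due_date(released: date, risk: str) -> date:
    """Deadline for applying a patch, based on its risk category."""
    return released + timedelta(days=APPLY_WITHIN_DAYS[risk])

print(patch_due_date(date(2014, 6, 1), "high"))  # 2014-07-01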
Although FDIC continues to improve its implementation of its corporate information security program, shortcomings existed in other information security program elements. Specifically, FDIC's information security policies and procedures did not always include important requirements. NIST SP 800-53 Revision 4 recommends that agencies regularly review individuals' physical access to facilities and remove that access when it is no longer required. However, although FDIC had a policy on controlling physical access to its primary data center, the corporation did not recertify access to its backup data center because the policy did not apply to all FDIC data centers. Additionally, although FDIC policy states that access to IT resources is to be provided only after proper authorization, the corporation did not document that it had verified access to a system supporting the marketing of failed banks' assets because its existing procedures did not require the access verifications to be documented. As a result, there is an increased risk that individuals who no longer need access to information systems could accidentally or intentionally damage critical resources.

Improvements are needed in the corporation's continuous monitoring program. To its credit, the corporation conducted control assessments of the major applications and general support systems we reviewed and addressed a weakness we previously identified by assessing security controls for two systems in accordance with its assessment schedule. In addition, the FDIC Office of Inspector General reported in November 2014 that FDIC had performed a number of continuous monitoring activities, developed an assessment methodology that defined a risk-focused approach for performing continuous monitoring on information systems, and reported various continuous monitoring metrics to senior managers. However, the office reported that the corporation had not yet developed a written, corporate-wide information security continuous monitoring strategy that included key elements from NIST guidance for continuous monitoring. The office recommended that FDIC develop and approve a written continuous monitoring strategy consistent with Office of Management and Budget and NIST guidance. Until this recommendation is addressed, FDIC will have limited assurance that its controls are operating effectively to protect its financial systems and information.

Improvements are also needed in the corporation's remedial action processes. Specifically, at the time of our review, 25 of the 285 remedial action plans applicable to agency information systems in our audit scope were past their expected closure dates by between about 2 weeks and 10 months, including 4 high-risk items. In addition, the FDIC Office of Inspector General reported in November 2014 that, as of July 2014, the corporation's remedial action management system contained a large number of high- and moderate-risk security vulnerabilities, many of which had planned corrective actions that were significantly past their scheduled completion dates. The office also reported that FDIC has taken steps to improve its remedial action processes by creating a strategy outlining planned actions to address weaknesses in the corporation's plan of action and milestones processes. Nevertheless, until FDIC fully addresses shortcomings in its remedial action processes, the corporation's financial information and systems will remain at increased and unnecessary risk. Because the FDIC Office of Inspector General has already made recommendations to address shortcomings in FDIC's remedial action processes, we are not making additional recommendations in this area.
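The past-due check described above amounts to simple date arithmetic. The sketch below, a hypothetical illustration with invented plan identifiers and dates rather than FDIC data, lists remedial action plans whose expected closure dates have passed.

# Hypothetical sketch of the past-due check: list remedial action plans
# whose expected closure date has passed. All records are invented.
from datetime import date

plans = [
    {"id": "POAM-101", "risk": "high",     "expected_closure": date(2014, 5, 15)},
    {"id": "POAM-102", "risk": "moderate", "expected_closure": date(2015, 6, 30)},
]

def past_due(plans, as_of: date):
    """Return plans whose expected closure date precedes the review date."""
    return [p for p in plans if p["expected_closure"] < as_of]

review_date = date(2015, 3, 1)
for p in past_due(plans, as_of=review_date):
    days = (review_date - p["expected_closure"]).days
    print(f'{p["id"]} ({p["risk"]} risk) is {days} days past due')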
An agency can protect the resources that support its critical operations and assets from unauthorized access, disclosure, modification, or loss by designing and implementing controls for segregating incompatible duties, identifying and authenticating users, restricting user access to only what has been authorized, encrypting sensitive data, auditing and monitoring systems to detect potentially malicious activity, managing and controlling system configurations, and conducting employee background investigations, among other things. Although FDIC had implemented numerous controls in these areas, weaknesses continue to challenge the corporation in ensuring the confidentiality, integrity, and availability of its information and information systems.

To reduce the risk of error or fraud, duties and responsibilities for authorizing, processing, recording, and reviewing transactions should be separated to ensure that one user does not control all of the critical stages of a process. NIST SP 800-53 Revision 4 states that, to prevent malevolent activity without collusion, organizations should separate the duties of users as necessary and implement separation of duties through defined information system access authorizations. Additionally, consistent with NIST guidance, FDIC policy on access control for IT resources states that, where required, access controls shall be used to enforce the principle of separation of duties to restrict the level of access and ability provided to any single user. FDIC improved its implementation of segregation of duties controls by implementing four recommendations we had previously made pertaining to segregation of duties. For example, FDIC identified and documented incompatible roles and established processes and procedures to enforce segregation of duties for several applications and systems supporting financial processing. Additionally, the corporation had restricted users with access to source code for a financial system from having access to that system's production environment. As a result, FDIC has reduced its risk that users could conduct fraudulent activity by bypassing intended controls.

Information systems need to effectively control user accounts and identify and authenticate users. Users and devices should be appropriately identified and authenticated through the implementation of adequate logical access controls. Users can be authenticated using mechanisms such as a password and user ID combination. Consistent with NIST SP 800-53 Revision 4, FDIC policy establishes minimum password length and complexity requirements. During 2014, FDIC improved controls for identifying and authenticating the identity of users by implementing two recommendations that we had previously made and that were still unresolved as of December 31, 2013. For example, FDIC had disallowed the use of default credentials for access to an application supporting FDIC's process for managing cash and investment transactions and had provided password lifetime and complexity controls to user accounts for a database that supported financial processing. However, FDIC did not fully implement password controls on the application supporting its process for managing cash and investment transactions in accordance with its policy. Specifically, passwords for the application did not comply with the minimum length standards established by FDIC's password policy. As a result, there is an increased likelihood that passwords could be compromised and used to gain unauthorized access to financial information in the application.
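A password policy check of the kind at issue here is straightforward to express in code. The following Python sketch is a hypothetical illustration; the 12-character minimum and the complexity rules are assumptions for the example, not FDIC's actual standards.

# Minimal sketch of a password policy check; thresholds are assumed values.
import re

MIN_LENGTH = 12  # assumed minimum length for illustration

def complies(password: str) -> bool:
    """True if the password meets the assumed length and complexity rules."""
    return (len(password) >= MIN_LENGTH
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"\d", password) is not None)

print(complies("Sh0rt"))             # False: too short
print(complies("Longer-Passw0rd1"))  # True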
Authorization encompasses the access privileges granted to a user, program, or process and is used to allow or prevent actions by that user based on predefined rules. Authorization includes the principles of legitimate use and least privilege. NIST SP 800-53 Revision 4 recommends that organizations employ the principle of least privilege by allowing only the authorized access for users (or processes acting on behalf of users) that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions; periodically review the privileges assigned to users to validate the need for such privileges and reassign or remove privileges when necessary; and disable access to information systems within a defined period when individuals are terminated. NIST also recommends that organizations develop, approve, and maintain a list of individuals with authorized access to facilities where information systems reside, periodically review the list, and remove individuals from the list when access to the facility is no longer required. Consistent with NIST guidance, FDIC policy also states that access to IT resources shall be terminated immediately after an employee or contractor exits the FDIC and that periodic reviews of access settings shall be conducted to ensure that appropriate controls remain consistent with existing authorizations and current business needs.

During 2014, FDIC improved controls for authorizing users' access by implementing 11 of the 12 recommendations we had previously made pertaining to authorization controls that were still unresolved as of December 31, 2013. For example, FDIC had ensured that accounts belonging to users who had not accessed certain applications and systems in a predefined period of time were disabled, discontinued the use of shared user IDs for several applications and databases supporting financial processing, removed certain users' excessive access to an application supporting FDIC's process for estimating potential losses from litigation, and restricted access to the network shared folder where annual financial statements and footnotes were maintained.

Although improvements were made, FDIC did not always implement sufficient authorization controls. For example, as discussed earlier, the corporation did not recertify the need for data center access on a periodic basis to ensure that individuals' access remained appropriate, and it did not always recertify account access to an application used by FDIC to store loan data for failing financial institutions. Additionally, the corporation had not yet completed actions to implement our prior-year recommendation to ensure that accounts for users who have access to the network and who have been separated from employment are removed immediately upon separation. Although these weaknesses did not materially affect FDIC's financial statements, they nevertheless increase the risk that individuals may have greater access to data centers or to financial information and systems than they need to fulfill their responsibilities, or that user accounts for departed individuals could be used to gain unauthorized access to systems that process sensitive financial information.
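The periodic access reviews described above reduce, in part, to two tests per account: has the user separated, and has the account been idle past a threshold? The Python sketch below is a hypothetical illustration; the 90-day limit, user names, and record layout are invented.

# Illustrative account review: flag accounts that are inactive past a
# threshold or belong to separated users. All data are invented.
from datetime import date, timedelta

INACTIVITY_LIMIT = timedelta(days=90)  # assumed threshold

accounts = [
    {"user": "jdoe",   "last_login": date(2014, 1, 10), "separated": False},
    {"user": "asmith", "last_login": date(2014, 11, 2), "separated": True},
]

def accounts_to_disable(accounts, as_of: date):
    """Yield user IDs that should be disabled under the assumed rules."""
    for a in accounts:
        if a["separated"] or (as_of - a["last_login"]) > INACTIVITY_LIMIT:
            yield a["user"]

print(list(accounts_to_disable(accounts, as_of=date(2014, 12, 1))))
# ['jdoe', 'asmith']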
Cryptographic controls can be used to help protect the confidentiality and integrity of data and computer programs by rendering data unintelligible to unauthorized users and by protecting the integrity of transmitted or stored data. Cryptography involves the use of mathematical functions called algorithms and strings of seemingly random bits called keys to, among other things, encrypt a message or file so that it is unintelligible to those who do not have the secret key needed to decrypt it, thus keeping the contents of the message or file confidential. NIST SP 800-53 Revision 4 recommends that organizations employ cryptographic mechanisms to prevent unauthorized disclosure of information during transmission, encrypt passwords while they are stored and transmitted, and establish a trusted communications path between users and the security functions of information systems. The applicable NIST standard, Federal Information Processing Standard (FIPS) 140-2, sets security requirements for cryptographic modules, including the use of approved encryption algorithms. FDIC improved its encryption controls by implementing our prior-year recommendation to use FIPS 140-2-compliant encryption for protection of authentication and session data for two systems supporting financial processing. However, FDIC had not completed actions to implement our prior recommendation to use FIPS-compliant encryption for all mainframe connections. As a result, sensitive data transmitted over these connections could be exposed to potential compromise.

Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. Automated mechanisms may be used to integrate audit monitoring, analysis, and reporting into an overall process for investigating and responding to suspicious activities. Audit and monitoring controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and even recognize an ongoing attack. Audit and monitoring technologies include network- and host-based intrusion detection systems, audit logging, security event correlation tools, and computer forensics. NIST SP 800-53 Revision 4 states that organizations should review and analyze information system audit records for indications of inappropriate or unusual activity and report the findings to designated agency officials. Additionally, NIST states that audit records should contain information on individual audit events, including their type, source, and outcome, as well as the date and time that they occurred and any individuals or subjects associated with the events, among other things.

FDIC improved its audit and monitoring controls by implementing three of the six recommendations pertaining to audit and monitoring that we had previously identified and that were still unresolved as of December 31, 2013. For example, the corporation had ensured that log history for privileged accounts on key servers supporting financial processing was sufficient to aid incident response and forensic investigations. However, FDIC had not yet completed actions to address three weaknesses related to auditing and monitoring controls that we previously identified. For example, the corporation had not yet ensured that, for certain systems, sensitive and high-risk events are consistently logged. In addition, FDIC did not always effectively monitor server security logs: three servers supporting financial processing did not send log output to the corporation's centralized audit logging system. While the three outstanding recommendations and the additional weakness identified this year did not materially affect the corporation's financial statements, they nevertheless increase the risk that the incident response team would not detect malicious activity occurring on these systems supporting financial processing, or that sufficient data would not be available, hindering efforts to investigate potential security incidents after the fact.
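The audit record fields NIST calls for (event type, source, outcome, timestamp, and associated subject) are easy to picture as a structured log entry. The Python sketch below is illustrative only; the field names and sample values are invented, not drawn from FDIC's logging systems.

# Hypothetical audit record carrying the fields NIST recommends:
# event type, source, associated subject, outcome, and timestamp.
import json
from datetime import datetime, timezone

def audit_record(event_type, source, subject, outcome):
    """Build a JSON audit record suitable for a centralized log collector."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g., "logon"
        "source": source,           # originating host or component
        "subject": subject,         # user or process tied to the event
        "outcome": outcome,         # "success" or "failure"
    })

print(audit_record("logon", "app-server-01", "jdoe", "failure"))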
Configuration management is an important control that involves identifying and managing the security features for all hardware and software components of an information system at a given point and systematically controlling changes to that configuration during the system's life cycle. Configuration management involves, among other things, (1) verifying the correctness of the security settings in operating systems, applications, or computing and network devices and (2) obtaining reasonable assurance that systems are configured and operating securely and as intended. Patch management, a component of configuration management, is important for mitigating the risks associated with software vulnerabilities. When a software vulnerability is discovered, the software vendor may develop and distribute a patch or work-around to mitigate the vulnerability. Without the patch, an attacker can exploit the vulnerability to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other systems. NIST SP 800-53 Revision 4 states that organizations should establish a baseline configuration for each information system and its constituent components. Additionally, FDIC policy states that FDIC must establish and document mandatory configuration settings for IT products employed within its information systems using security configuration checklists. Further, NIST SP 800-128 states that patch management procedures should define how the organization's patch management process is integrated into configuration management processes, how patches are prioritized and approved through the configuration change control process, and how patches are tested for their impact on existing secure configurations.

Although improvements were made, shortcomings remain in FDIC's implementation of configuration management controls. FDIC had made progress toward addressing our prior recommendation to establish baseline configurations for all FDIC information systems by establishing agency-wide configuration settings for three platforms, and, according to officials, FDIC plans to establish baselines for the majority of its platforms by the end of 2015. In addition, FDIC had begun to implement actions intended to improve its process for managing vulnerabilities and applying patches, including establishing a Patch and Vulnerability Group to facilitate the identification and distribution of patches; however, the corporation had not yet completed actions to address our prior recommendation to apply patches to remediate known vulnerabilities in third-party software. These issues did not materially affect the corporation's financial statements. Nevertheless, until our previously identified weaknesses are addressed, FDIC faces increased risk that unpatched vulnerabilities in systems and applications could be exploited, potentially exposing the corporation's financial systems and information to unauthorized access or modification.
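Verifying settings against a documented baseline, as described above, is essentially a dictionary comparison. The Python sketch below is a hypothetical illustration; the setting names and values are invented and do not reflect FDIC's baselines.

# Illustrative baseline comparison: report settings that deviate from the
# documented baseline configuration. Setting names and values are invented.
baseline = {"password_min_length": "12", "telnet_enabled": "no"}
current  = {"password_min_length": "8",  "telnet_enabled": "no"}

def drift(baseline, current):
    """Return settings whose current value differs from the baseline."""
    return {k: (baseline[k], current.get(k))
            for k in baseline if current.get(k) != baseline[k]}

for setting, (expected, actual) in drift(baseline, current).items():
    print(f"{setting}: expected {expected!r}, found {actual!r}")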
Policies related to the hiring and management of personnel are important considerations in securing information systems. If personnel policies are not adequate, an entity runs the risk of (1) hiring unqualified or untrustworthy individuals; (2) providing terminated employees opportunities to sabotage or otherwise impair entity operations or assets; (3) failing to detect continuing unauthorized employee actions; (4) lowering employee morale, which may in turn diminish employee compliance with controls; and (5) allowing staff expertise to decline. Personnel procedures should include contacting references, performing background investigations, and ensuring that periodic reinvestigations are consistent with the sensitivity of the position, in accordance with criteria from the Office of Personnel Management. FDIC policy states that personnel in moderate- and low-risk positions should be subject to a background reinvestigation every 5 and 7 years, respectively. In 2014, we reported that background reinvestigations were not being performed in accordance with FDIC policy; specifically, background reinvestigations had not been performed prior to fall 2013 for users with a security rating less than high risk. During our current review, FDIC officials stated that their planned efforts to address this weakness will not be completed until April 2016. Until this weakness is fully addressed, FDIC will continue to face elevated risk that it will not identify malicious users of financial applications who would commit or attempt to commit fraud.
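The reinvestigation schedule in FDIC's policy, every 5 years for moderate-risk positions and every 7 for low-risk ones, lends itself to a small date calculation. The Python sketch below is illustrative; the example date is invented.

# Sketch of the reinvestigation schedule described above: moderate-risk
# positions every 5 years, low-risk every 7. Example dates are invented.
from datetime import date

REINVESTIGATION_YEARS = {"moderate": 5, "low": 7}

def next_reinvestigation(last: date, risk: str) -> date:
    """Due date of the next background reinvestigation for a position."""
    return last.replace(year=last.year + REINVESTIGATION_YEARS[risk])

print(next_reinvestigation(date(2008, 3, 1), "moderate"))  # 2013-03-01
print(next_reinvestigation(date(2008, 3, 1), "low"))       # 2015-03-01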
FDIC had developed, documented, and implemented many elements of its corporate information security program. For example, the corporation had formalized a new policy for information security patch management and had ensured that administrators completed required training. In addition, FDIC had implemented and strengthened many information security controls over its financial systems and information. For example, by addressing many of the weaknesses that we previously reported, the corporation had taken steps to improve controls for segregating incompatible duties, identifying and authenticating users, restricting user access to only what has been authorized, encrypting sensitive data, and auditing and monitoring systems for potentially malicious activity. However, management attention is still needed to address shortcomings in the corporation's information security program. For example, shortcomings in certain security policies and procedures led to weaknesses in conducting and documenting reviews of user access. Additionally, further actions are needed to address weaknesses in identification and authentication, authorization, and audit and monitoring controls. Given the important role that information systems play in FDIC's internal controls over financial reporting, it is vitally important that FDIC address the remaining weaknesses in information security controls—both old and new—as part of its ongoing efforts to mitigate the risks from cyber attacks and to ensure the confidentiality, integrity, and availability of its financial and sensitive information.

Although we do not consider these weaknesses individually or collectively to be either a material weakness or a significant deficiency for financial reporting purposes, we are nevertheless making five recommendations in a separate product with limited distribution for FDIC to address new weaknesses we identified in this review. Until FDIC takes further steps to mitigate these weaknesses, the corporation's sensitive financial information and resources will remain unnecessarily exposed to increased risk of inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction.

To help improve the corporation's implementation of its information security program, we recommend that the Chairman of FDIC direct the Chief Information Officer to take the following two actions:

Ensure that physical access policies require periodic review of access to all FDIC data centers.

Update existing procedures to require that access verifications to the system supporting the marketing of failed banks' assets be documented.

Additionally, in a separate report with limited distribution, we are making five recommendations consisting of actions to implement and correct specific information security weaknesses related to identification and authentication, authorization, and audit and monitoring.

In providing written comments (reprinted in app. II) on a draft of this report, FDIC stated that corrective actions for the two new recommendations have already been or will be completed during 2015. FDIC also provided an attachment detailing its actions to implement our recommendations, as well as technical comments that we addressed in our report as appropriate.

We are sending copies of this report to interested congressional parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] and [email protected]. Key contributors to this report are listed in appendix III.

The objective of this information security review was to determine the effectiveness of the Federal Deposit Insurance Corporation's (FDIC) controls in protecting the confidentiality, integrity, and availability of its financial systems and information. The review was conducted as part of our audit of the FDIC financial statements of the Deposit Insurance Fund and the Federal Savings and Loan Insurance Corporation Resolution Fund. The scope of our audit included an examination of FDIC information security policies and plans, controls over key financial systems, and interviews with agency officials in order to (1) assess the effectiveness of corrective actions taken by FDIC to address weaknesses we previously reported and (2) determine whether any additional weaknesses existed. This work was performed in support of our opinion on internal control over financial reporting as it relates to our audits of the calendar year 2014 and 2013 financial statements of the two funds administered by FDIC. GAO used an independent public accounting firm, under contract, to evaluate and test certain FDIC information systems controls, including following up on the status of FDIC's corrective actions during calendar year 2014 to address open recommendations from our prior years' reports.
We agreed on the scope of the audit work, monitored the firm's progress, and reviewed the related audit documentation to determine whether the firm's findings were adequately supported.

To determine whether controls over key financial systems and information were effective, we considered the results of FDIC's actions to mitigate previously reported weaknesses that remained open as of December 31, 2013, and performed audit work at FDIC facilities in Arlington, Virginia. We concentrated our evaluation primarily on the controls for systems and applications associated with financial processing. Our selection of the systems to evaluate was based on consideration of systems that directly or indirectly support the processing of material transactions that are reflected in the funds' financial statements. Our audit methodology was based on the Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information.

Using standards and guidance from the National Institute of Standards and Technology as well as FDIC's policies and procedures, we evaluated controls by

examining access responsibilities to determine whether incompatible functions were segregated among different individuals;

reviewing password settings to determine whether password management was being enforced in accordance with agency policy;

analyzing user system authorizations to determine whether users had more permissions than necessary to perform their assigned functions;

observing methods for providing secure data transmissions to determine whether sensitive data were being encrypted;

assessing configuration settings to evaluate the settings used to audit security-relevant events; and

inspecting vulnerability scans for in-scope systems to determine whether patches, service packs, and hot fixes were appropriately installed on affected systems (a simple version of this check is sketched below).

Using the requirements of the Federal Information Security Management Act of 2002, which establishes elements for an agency-wide information security program, we evaluated FDIC's implementation of its security program by

analyzing security plans for key financial systems to determine whether management, operational, and technical controls had been documented and whether security plans had been updated regularly in accordance with NIST requirements;

reviewing training records for administrators to determine if they had received training appropriate to their responsibilities;

reviewing information security policies to determine whether they were adequately documented and implemented;

examining an FDIC Office of Inspector General report for information on FDIC's implementation of risk management policies;

reviewing ongoing assessments of security controls to determine if they had been completed as scheduled;

reviewing an FDIC Office of Inspector General report for information on the corporation's information security continuous monitoring program;

examining remedial action plans to determine whether FDIC addressed identified vulnerabilities in a timely manner; and

examining an FDIC Office of Inspector General report for information on findings related to FDIC's remedial action process.
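As referenced in the control evaluation list above, inspecting vulnerability scans against installed patches can be reduced to a membership test. The Python sketch below is a hypothetical illustration; the host names, CVE identifiers, and patch IDs are invented.

# Hypothetical sketch of the scan-inspection step: flag scan findings whose
# remediating patch is absent from a host's installed-patch list.
scan_findings = [
    {"host": "fin-db-01", "vuln": "CVE-2014-0001", "fix": "KB-1001"},
    {"host": "fin-db-01", "vuln": "CVE-2014-0002", "fix": "KB-1002"},
]
installed = {"fin-db-01": {"KB-1001"}}

unpatched = [f for f in scan_findings
             if f["fix"] not in installed.get(f["host"], set())]
for f in unpatched:
    print(f'{f["host"]}: {f["vuln"]} unremediated (missing {f["fix"]})')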
To determine the status of FDIC's actions to correct or mitigate previously reported information security weaknesses, we reviewed prior GAO reports to identify previously reported weaknesses and examined FDIC's corrective action plans to determine which weaknesses FDIC had reported as being corrected. For those instances where FDIC reported it had completed corrective actions, we assessed the effectiveness of those actions.

We performed our work from June 2014 to April 2015 in accordance with U.S. generally accepted government auditing standards. We believe that our audit work provided a reasonable basis for our conclusion in this report.

In addition to the individuals named above, Gary Austin and Nick Marinos (assistant directors), William Cook, Thomas J. Johnson, George Kovachick, and Lee McCracken made key contributions to this report.
FDIC has a demanding responsibility enforcing banking laws, regulating financial institutions, and protecting depositors. Because of the importance of FDIC's work, effective information security controls are essential to ensure that the corporation's systems and information are adequately protected from inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction. As part of its audits of the 2014 financial statements of the Deposit Insurance Fund and the Federal Savings and Loan Insurance Corporation Resolution Fund administered by FDIC, GAO assessed the effectiveness of the corporation's controls in protecting the confidentiality, integrity, and availability of its financial systems and information. To do so, GAO examined security policies, procedures, reports, and other documents; tested controls over key financial applications; and interviewed FDIC personnel.

The Federal Deposit Insurance Corporation (FDIC) has implemented numerous information security controls intended to protect its key financial systems; nevertheless, weaknesses remain that place the confidentiality, integrity, and availability of financial systems and information at risk. During 2014, the corporation implemented 27 of the 36 GAO recommendations pertaining to previously reported security weaknesses that were unaddressed as of December 31, 2013; actions to implement the remaining 9 recommendations were in progress. Although FDIC developed and implemented elements of its information security program, shortcomings remain in key program activities. For example: FDIC had taken steps to improve its security policies and procedures, but important activities were not always required by its policies. For example, although FDIC had a policy on controlling physical access to its primary data center, the policy did not apply to all FDIC data centers. FDIC did not consistently remediate agency-identified weaknesses in a timely manner. However, to its credit, the corporation created a strategy outlining planned actions to address weaknesses in its remedial action processes. Additionally, FDIC has designed and documented numerous information security controls intended to protect its key financial systems; nevertheless, controls were not always consistently implemented. For example, the corporation had not always (1) ensured that passwords for a financial application complied with FDIC policy for password length or (2) centrally collected audit logs on certain servers. These weaknesses individually or collectively do not constitute either a material weakness or a significant deficiency for financial reporting purposes. Nonetheless, by mitigating known information security weaknesses and consistently applying information security controls, FDIC could continue to reduce risks and better protect its sensitive financial information and resources from inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction.

GAO is making two recommendations to FDIC to improve its implementation of its information security program. FDIC concurred with GAO's recommendations. In a separate report with limited distribution, GAO is recommending that FDIC take five specific actions to address weaknesses in security controls.
Sudan is the largest country in Africa (see fig. 1), and its population, estimated at about 40 million, is one of the continent's most diverse. Sudan's population comprises two distinct major cultures, Arab and black African, with hundreds of ethnic and tribal subdivisions and language groups. More than half of Sudan's population lives in the northern states, which make up most of Sudan and include the majority of the urban centers; most residents of this area are Arabic-speaking Muslims. Residents of the southern region, which has a predominantly rural, subsistence economy, practice mainly indigenous traditional beliefs, although some are Christian. The South contains many tribal groups and many more languages than are used in the North. Darfur is another distinct region of Sudan, located in the west, and was an independent sultanate for most of the period between 1600 and 1916, when the British captured it and incorporated it into the Sudanese state. Darfur's population is predominantly Muslim.

For most of its existence since gaining independence from Britain and Egypt in 1956, Sudan has endured civil war rooted in cultural and religious divides. The North, which has traditionally controlled the country, has sought to unify it along the lines of Arabism and Islam, whereas non-Muslims and other groups in the South have sought, among other things, greater autonomy. After 17 years of war, from 1955 to 1972, the government signed a peace agreement that granted the South a measure of autonomy. However, civil war began again in 1983, when the President of Sudan declared Arabic the South's official language, transferred control of Southern armed forces to the central government, and, later that year, announced that traditional Islamic punishments drawn from Shari'a (Islamic law) would be incorporated into the penal code. The South's rebellion was orchestrated by the Sudan People's Liberation Movement/Army (SPLM/A). In 1989, the conflict intensified when an Islamic army faction, led by General Omar Hassan al-Bashir, staged a coup against the government and installed the National Islamic Front.

In 2001 President Bush named former Senator John Danforth as his Presidential Envoy for Peace in Sudan, assigning him to explore a U.S. role in ending the civil war and to enhance the delivery of humanitarian aid to Sudan's affected population. On January 9, 2005, the Sudanese government and the SPLM/A signed a set of agreements called the Comprehensive Peace Agreement, providing for a new constitution and new arrangements for power sharing, wealth sharing, and security applicable throughout Sudan. On July 9, 2005, Bashir assumed the presidency under the new arrangements, with the SPLM/A Chairman assuming the office of First Vice President. In 2011, Southern Sudan will hold a vote to determine whether to become independent. To assist in implementing the peace agreement, the UN Security Council established the UN Mission in Sudan (UNMIS), which currently has a force of more than 7,000.

While the North-South agreement was nearing completion, a rebellion broke out in Darfur, located in western Sudan with an estimated preconflict population of about 6 million (see fig. 2). The South's success motivated rebel groups in Darfur to fight for a similar outcome. In early 2003, Darfur rebels attacked Sudanese police stations and the airport in El Fasher, the capital of North Darfur (see fig. 3 for an interactive timeline of key events associated with Darfur and app. II for a related description of events).
In El Fasher, the rebel groups destroyed numerous military aircraft, killed several Sudanese soldiers, and kidnapped a Sudanese general. After the government armed and supported local tribal and Arab militias—the Janjaweed—fighting between the rebel groups and the Sudan military and Janjaweed intensified during late 2003. The principal rebel groups, the Sudan Liberation Movement/Army (SLM/A) and the Justice and Equality Movement (JEM), represent agrarian farmers who are black African Muslims. The SLM/A has recently split into two factions—one, with the larger military force, led by Minni Minawi, and the other led by Abdulwahid El Nour. In addition to disrupting the lives of almost 4 million Darfurians, Janjaweed and Sudanese government attacks in Darfur have resulted in many thousands of deaths.

The Agreement on Humanitarian Ceasefire was signed by the Sudanese government, the SLM/A, and the JEM on April 8, 2004, in N'Djamena, Chad. In signing the agreement, the parties agreed to accept an automatically renewable cessation of hostilities; to refrain from any military action and any reconnaissance operations; to refrain from any act of violence or any other abuse on civilian populations; to ensure humanitarian access; and to establish a Ceasefire Commission to monitor the agreement, along with a Joint Commission to which the Ceasefire Commission would report. The African Union was to monitor cease-fire compliance. Peace negotiations continued under African Union auspices with Chadian participation, and additional interim agreements were also reached. However, after a relatively calm 2005, cease-fire violations and violent incidents reportedly began to increase in the final months of that year and into 2006. On May 5, 2006, the government of Sudan and the Minawi faction of the SLM/A signed the Darfur Peace Agreement, establishing agreements in key areas such as power sharing, wealth sharing, and security arrangements.

Power sharing. The Darfur Peace Agreement creates the position of Senior Assistant to the President—the fourth-highest position in the Sudanese government—appointed by the President from a list of nominees provided by the rebel movements. The Senior Assistant to the President will also serve as Chairperson of the newly created Transitional Darfur Regional Authority, which is responsible for the implementation of the agreement and coordination among the three states of Darfur. Further, a referendum will be held by July 2010 to allow Darfurians to decide whether to establish Darfur as a unitary region with a single government or to retain the existing three regions.

Wealth sharing. The Darfur Peace Agreement creates a Darfur Reconstruction and Development Fund to collect and disburse funds for the resettlement, rehabilitation, and reintegration of internally and externally displaced persons. The government of Sudan will contribute $300 million to the fund in 2006 and at least $200 million annually in 2007 and 2008. Further, the government of Sudan will place $30 million in a fund for monetary compensation for those negatively affected by the conflict in Darfur.

Security arrangements. The Darfur Peace Agreement calls for the verifiable disarmament of the Janjaweed by the Sudanese government by mid-October 2006. This disarmament must be verified by the African Union before rebel groups undertake their own disarmament and demobilization.
Demilitarized zones are to be established around IDP camps and humanitarian assistance corridors, into which rebel forces and the Sudanese military cannot enter, and buffer zones are to be established in the areas of the most intense conflict. Rebel group forces will be integrated into the Sudanese military and police: 4,000 former combatants will be integrated into the armed forces, 1,000 into the police, and 3,000 will be supported through education and training programs.

The UN estimates that displaced and affected persons are located in more than 300 locations, including camps and other gatherings, with populations of up to 90,000 people. Figure 4 shows the camp dispersion and estimated population at many of the camps throughout Darfur, as of October 2005.

Since 2004, the African Union has been responsible for peace support operations in Darfur through AMIS. Subsequent to its establishment of an African Union observer mission in Darfur in May 2004, the African Union Peace and Security Council established a specific mandate for AMIS in October 2004 (see app. III for a discussion of the evolution of AMIS). AMIS's mandate has three components: (1) to monitor and observe compliance with the April 2004 humanitarian cease-fire agreement and all such agreements in the future; (2) to assist in the process of confidence building; and (3) to contribute to a secure environment for the delivery of humanitarian relief and, beyond that, the return of IDPs and refugees to their homes, in order to assist in increasing the level of compliance of all parties with the April 2004 cease-fire agreement and to contribute to the improvement of the security situation throughout Darfur.

Regarding the first component of the mandate, per the terms of the cease-fire agreement, related agreements, and African Union Peace and Security Council guidance, military observers were to investigate and report on allegations of cease-fire violations, with a protection force presence as needed. Final investigation reports, prepared by the Ceasefire Commission headquartered in El Fasher, Darfur, were to be submitted to the Joint Commission. The Joint Commission was mandated to make consensus-based decisions on matters brought before it by the Ceasefire Commission. According to a senior African Union official, the Joint Commission was to submit Ceasefire Commission reports to African Union headquarters in Addis Ababa, Ethiopia, for appropriate action. (Fig. 5 illustrates the established process for investigating, and reporting on, cease-fire agreement violations.)

The council determined that AMIS would, in the framework of its mandate, "protect civilians whom it encounters under imminent threat and in the immediate vicinity, within resources and capability, it being understood that the protection of the civilian population is the responsibility of the government of Sudan." The council also determined that AMIS would have, in addition to military observers and protection force troops, civilian police, to monitor the actions of Sudanese police and interact with IDPs and civilians, as well as appropriate civilian personnel. The AMIS force authorized and deployed in Darfur to execute its mandate has grown incrementally over time, from several hundred personnel in 2004 to 7,271 personnel (military observers, protection force troops, and civilian police) deployed as of April 30, 2006.
The African Union, the UN, and others have conducted numerous studies of AMIS's performance, discussing the operations of this effort undertaken by the newly created African Union (see the bibliography for a listing of these reviews).

The May 2006 Darfur Peace Agreement establishes several new responsibilities for AMIS, such as verifying the eventual disarmament of the Janjaweed by the Sudanese government. The 2006 agreement also designates AMIS as responsible for actions such as designing and running awareness programs in Darfur to ensure that local communities and others understand, among other things, the AMIS mandate; patrolling and monitoring demilitarized zones around IDP camps; patrolling buffer zones established in areas of the most intense conflict; and developing and monitoring implementation of a plan for the regulation of nomadic migration along historic migration routes.

The U.S. government has been active in addressing the Darfur conflict. After the conflict began, senior State officials traveled to Sudan on a half-dozen occasions, stressing the need to end the violence. On July 22, 2004, the U.S. House of Representatives and the Senate each passed resolutions citing events in Darfur as acts of genocide. Further, on September 9, 2004, in testimony before the Senate Foreign Relations Committee, the U.S. Secretary of State announced that "genocide" had been committed in Darfur and noted that the Sudanese government had supported the Janjaweed, directly and indirectly, as they carried out a "scorched earth" policy toward the rebels and the African civilian population in Darfur. In a press release the same day, President Bush stated that genocide was occurring and asked the UN to investigate events in Darfur, as the Secretary of State had also done. On October 13, 2006, President Bush signed into law the Darfur Peace and Accountability Act of 2006, which imposes sanctions against persons responsible for genocide, war crimes, and crimes against humanity; supports measures for the protection of civilians and humanitarian operations; and supports peace efforts in Darfur.

Although the UN has not identified the events in Darfur as genocide, it has repeatedly expressed concern over the continuing violence. In July 2004, the UN, with the government of Sudan, issued a communiqué emphasizing a commitment to facilitating humanitarian assistance to the region and establishing a commitment by the Sudanese government to disarm the Janjaweed.
In September 2004, the UN Security Council adopted a resolution stating that the UN Secretary-General should "rapidly establish an international commission of inquiry in order immediately to investigate reports of violations of international humanitarian law and human rights law in Darfur by all parties, to determine also whether or not acts of genocide have occurred, and to identify the perpetrators of such violations with a view to ensuring that those responsible are held accountable." In January 2005, the UN issued a report stating that "the Government of Sudan and the Janjaweed are responsible for serious violations of international human rights and humanitarian law amounting to crimes under international law." The report concluded that a policy of genocide had not been pursued but noted that "the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and heinous than genocide."

The UN Security Council has also adopted resolutions establishing a travel ban and asset freeze for those determined to impede the peace process or violate human rights, referring the situation in Darfur to the prosecutor of the International Criminal Court, and calling on the government of Sudan and all other parties to the conflict to cooperate with the court. Further, in creating UNMIS to support implementation of the Comprehensive Peace Agreement, the council requested the UN Secretary-General to report to the council on options for the mission to reinforce the effort to foster peace in Darfur through appropriate assistance to AMIS.

Large-scale international humanitarian response to the displacement in Darfur did not begin until fiscal year 2004. In October 2003, USAID's Office of Food for Peace began to contribute food aid to the UN World Food Program for distribution in Darfur, and USAID set an internal goal of meeting at least 50 percent of Sudan's food aid needs as assessed by the World Food Program. In addition, USAID's Office of Foreign Disaster Assistance established a Disaster Assistance Response Team in Darfur to respond to the humanitarian needs of the population affected by the conflict once the cease-fire agreement was signed.

The United States was the largest donor of humanitarian assistance for Darfur in fiscal years 2004 to 2006, providing approximately 47 percent of all humanitarian assistance to the region (the UN has reported $1.9 billion in total pledges and obligations of assistance from all donors). The European Union and the United Kingdom provided the largest amounts of assistance pledged or obligated by other international donors. Figure 6 shows the percentages of total humanitarian assistance pledged or obligated for Darfur by international donors.

In fiscal years 2004 through 2006, the United States provided almost $1 billion for food and other humanitarian aid in Darfur. More than 68 percent of the U.S. obligations as of September 30, 2006, supplied food aid in the form of commodities provided to the UN World Food Program and the International Committee of the Red Cross. In addition, the United States provided assistance to meet a range of nonfood needs, such as health care and water. During this period, humanitarian access and coverage for IDPs and affected residents of Darfur improved significantly. In addition, IDP malnutrition and mortality rates decreased over time, a trend that U.S., UN, and other officials attribute in part to humanitarian assistance.
U.S. obligations for food and other humanitarian aid in Darfur totaled approximately $996 million in fiscal years 2004 through 2006 (see fig. 7). From fiscal year 2004 to fiscal year 2005, obligations for food and nonfood assistance increased from about $186 million to about $444 million, an increase of about 139 percent. In fiscal year 2006, obligations decreased to about $366 million, or by 18 percent. Funds provided in supplemental appropriations accounted for about $71 million—16 percent of the total—in 2005 and $205 million—56 percent of the total—in 2006.

For fiscal years 2004 through 2006, USAID provided $681 million (over 68 percent) as food aid for Darfur—approximately $113 million in 2004, $324 million in 2005, and $243 million in 2006 (see table 1). As table 1 shows, after rising from fiscal year 2004 to fiscal year 2005, U.S. food aid funding for Darfur decreased from fiscal year 2005 to fiscal year 2006 by approximately 25 percent, and the quantity of food provided decreased by almost 13 percent. The UN World Food Program's planned assistance to Sudan also fell by more than 16 percent between calendar years 2005 and 2006, while the food aid component of planned assistance decreased by 29 percent. According to World Food Program and USAID officials, USAID supplied at least half of the food aid assistance requested for Sudan by the UN World Food Program in fiscal years 2005 and 2006. A World Food Program official in Washington, D.C., stated that the U.S. government provided essential food aid contributions in fiscal year 2006 and that the reduction in the level of U.S. funding did not negatively affect the food situation in Darfur.

For fiscal years 2004 through 2006, USAID Food for Peace obligated aid for Darfur to the UN World Food Program and the International Committee of the Red Cross, primarily for commodities intended to meet minimum nutritional requirements.

Obligations to the UN World Food Program. As table 1 shows, USAID Food for Peace obligated $658.6 million for commodities, including transportation and other shipping costs, to the World Food Program to address emergency food needs in Darfur in fiscal years 2004 through 2006. According to a USAID official, this assistance included commodities previously allocated for assistance to southern Sudan, which Food for Peace and the World Food Program reallocated to respond to the emergency situation in Darfur before the official emergency program began. World Food Program officials said that U.S. food aid funding allowed the program to preposition food in various storage facilities in Darfur, enabling the program to avoid costly air drops. World Food Program officials indicated that prepositioning food helps avoid shortfalls during rainy seasons resulting from the typical 6-month time lag between confirmation and distribution of food aid donations.

Obligations to the Red Cross. USAID Food for Peace obligated $22.8 million for commodities to the International Committee of the Red Cross. This assistance was intended particularly for rural village residents who had not been displaced by the ongoing conflict and whose needs had not been addressed by other agencies in the region.

During our field work in Darfur, we visited World Food Program warehouses outside Nyala, in South Darfur, built to expedite the distribution of food aid during the rainy season; we observed local staff repackaging U.S. wheat from bags that were damaged in transit to the storage facility in Nyala (see fig. 8).
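As a check on the percent-change figures cited at the beginning of this discussion, the following minimal sketch (written in Python for illustration; it is not part of the report's methodology, and the variable names are our own) works through the arithmetic using the rounded obligation totals above:

    # Illustrative check of the percent changes in U.S. obligations for Darfur.
    # Totals are the rounded fiscal year figures cited above, in millions of dollars.
    obligations = {2004: 186, 2005: 444, 2006: 366}

    def percent_change(earlier, later):
        """Percent change relative to the earlier year's total."""
        return (later - earlier) / earlier * 100

    print(f"FY2004 to FY2005: {percent_change(obligations[2004], obligations[2005]):+.0f}%")  # about +139%
    print(f"FY2005 to FY2006: {percent_change(obligations[2005], obligations[2006]):+.0f}%")  # about -18%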
Additionally, we witnessed NGOs distributing rations in Zam Zam IDP camp (although the funds and commodities are transferred to the UN World Food Program, NGOs operating in Darfur distribute the rations in IDP camps), where U.S.-provided sorghum, vegetable oil, lentils, and wheat were distributed as part of the monthly rations (see fig. 9).

In addition to providing food aid, as of September 30, 2006, the United States had obligated approximately $315 million for other humanitarian assistance in a range of sectors, including shelter, water and sanitation, health care, and nutrition. This assistance was provided through USAID's Office of Foreign Disaster Assistance and Office of Transition Initiatives as well as State's Bureau of Population, Refugees and Migration. The U.S. government has provided nonfood assistance to the affected residents of Darfur through 31 NGOs and 10 UN agencies, which implement programs and activities to aid the people of Darfur (see app. IV for a list of NGOs and UN agencies that received U.S. nonfood assistance funding for fiscal years 2004 to 2006). Of this assistance, the largest amounts have been obligated for health care, water and sanitation, logistics, protection, and food security/agriculture (see fig. 10).

Health. The United States obligated $57.4 million for the health sector, supporting activities such as medical clinics, immunizations, and maternal health care. We visited five NGO-operated health clinics in Darfur IDP camps. These clinics, which served between 110 and 1,200 IDPs per day, provided basic medical examinations and referred patients with serious illnesses to Sudanese hospitals. The clinics also provided vaccinations, reproductive health services for pregnant women, and medical services for victims of gender-based violence (see fig. 11).

Water and sanitation. The United States provided about $53.5 million for water and sanitation activities, which consisted of building and rehabilitating wells, installing hand pumps and latrines, and conducting hygiene programs. According to NGO officials, the Kalma camp water facilities we visited served approximately 45,000 IDPs and dispensed approximately 18 liters of chlorinated water per person per day (above the Sphere standard of 15 liters) to provide for IDPs' personal needs and to allow them to water their animals. According to NGO officials, in Abu Shouk camp, a water tank and hand pumps provided 13.5 liters of water per person per day (see fig. 12).

Protection and income-generation activities. The United States provided about $28.6 million for protection activities and $9.1 million for income-generation activities, which USAID and NGO officials indicated helped protect women and girls by minimizing their exposure to violence. We observed women building fuel-efficient stoves, which, by requiring less wood, are intended to reduce the frequency of women's wood-collecting forays outside the camp and, thus, their vulnerability to attacks (see fig. 13). We also observed IDPs preparing goods for sale, including baskets, fresh pasta, and sewn garments, to provide sources of income that would reduce their need to go outside the camps to earn money. Literacy and educational training were also provided to IDPs in camps in conjunction with income-generation and protection activities.
Since fiscal year 2004, when the United States and other international donors began providing humanitarian assistance, the numbers of humanitarian organizations and staff have grown, and the amount of humanitarian assistance and the coverage for IDPs and affected residents have increased. Also, since 2004, malnutrition and mortality rates among IDPs and affected residents have diminished.

Increased presence of humanitarian organizations. According to UN and NGO officials, U.S. assistance contributed to growth in the number of humanitarian organizations and staff in Darfur. UN humanitarian profiles show that from April 2004 to July 2006, the number of international and national humanitarian aid workers in Darfur expanded from 202 to about 13,500 staff of 84 NGOs and 13 UN agencies. NGO and UN officials in Darfur indicated that the U.S. contribution was essential to their operations, in some cases making up the entirety of their budgets, and that they would be unable to provide services inside and outside the camps without U.S. funding.

Increased coverage for affected residents and IDPs. Each aid sector in Darfur provided humanitarian assistance to increasing numbers of affected residents or IDPs between April 2004 and July 2006 (see fig. 14). The total affected population receiving assistance such as food, water, and health care increased, although substantial numbers of affected persons did not receive assistance, especially in inaccessible areas, owing to continued security concerns. In addition, after August 2005, the percentage of the targeted population receiving such assistance began to decrease, according to the UN, as continued conflict and insecurity in Darfur limited access to, and distribution of, humanitarian aid. NGOs and UN agencies reported that assistance provided specifically to IDPs also expanded. For example, the number of IDPs receiving sanitation assistance increased more than sixtyfold, from about 21,000 IDPs in April 2004 to more than 1.4 million IDPs in July 2006.

Reduced malnutrition and mortality rates. Since 2004, malnutrition rates recorded in Darfur have decreased significantly. A UN World Food Program survey in Darfur showed that malnutrition rates were significantly lower in 2005 than in 2004. In addition, although nutrition among IDPs in Darfur remains precarious, UN nutritional reports show improvement since 2004 and attribute the improvement partly to external assistance and large-scale food aid. According to UN Emergency Food Security and Nutrition Assessments, the prevalence of global acute malnutrition in Darfur was reported at 11.9 percent in March 2006, a significant decrease from the 21.8 percent reported in October 2004. Furthermore, several mortality surveys have concluded that mortality rates in Darfur decreased from 2004 to 2005. For example, surveys conducted by the World Health Organization and Médecins Sans Frontières (also known as Doctors Without Borders) reported mortality rates ranging from 1.5 to 9.5 deaths per 10,000 people per day in 2004. In September 2005, the UN World Food Program reported that the crude mortality rate in Darfur had dropped below the emergency threshold of 1 death per 10,000 persons per day, as defined by Sphere. Humanitarian assistance provided for Darfur by the United States and other international donors has been cited as contributing to improved mortality rates in Darfur. Experts and NGO, UN, and U.S. officials noted that other factors, such as reduced violence, can also contribute to a decrease in mortality rates.
Despite humanitarian organizations' efforts to increase the number of people receiving humanitarian assistance and to help reduce malnutrition and mortality rates, the situation in Darfur remains precarious. Continued insecurity restricts humanitarian organizations' access to IDPs and affected residents of Darfur. In addition, NGO and UN officials indicated that mortality and malnutrition rates would likely rise above emergency levels if necessary funding were not continued.

Since the beginning of the humanitarian crisis in Darfur, entities delivering U.S. humanitarian assistance to affected residents and IDPs have faced numerous challenges. Continued insecurity in the region has limited the ability of NGOs and UN agencies to access parts of Darfur and reach all affected residents and IDPs. In addition, the Sudanese government and rebel groups have placed restrictions and requirements on NGOs that have severely limited NGO staffs' ability to travel to and within Darfur and to provide services to affected residents and IDPs. Further, the late arrival of U.S. funding in 2006 initially limited the operations of NGOs and UN agencies and threatened to force some reduction in services in Darfur. Meanwhile, the large size of Darfur and the large quantity of U.S. humanitarian assistance have challenged USAID's ability to ensure accountability for the assistance provided. In addition, targeting of humanitarian assistance for IDPs is complicated by the difficulty of accurately counting the people who receive assistance and of managing their use of the goods provided.

The frequent violence and continued conflict within all three Darfur states have negatively affected the ability of NGOs and UN agencies to provide humanitarian assistance within Darfur. Attacks on, and harassment of, humanitarian staff, as well as banditry and theft from humanitarian convoys, have increased throughout Darfur since the beginning of the humanitarian response; according to the UN, violence, sexual abuse, and displacement have dramatically increased since May 2006. NGO, UN, and U.S. personnel have been injured, abducted, and killed in attacks against the humanitarian community, and humanitarian staff have regularly reported harassment from Sudanese government officials.

According to UN and USAID reports, UN and NGO humanitarian staff were attacked and harassed with increasing frequency in 2005, and NGO staff members were attacked and abducted. In several instances, drivers and other humanitarian staff were abducted or killed during attacks on humanitarian aid convoys. USAID reported more than 200 incidents of harassment, arrest, or attack against UN, NGO, or AMIS personnel in 2005. USAID and the UN also reported that increasing violence had resulted in the deaths of nine humanitarian staff in July 2006, more than the number of staff killed in the previous 2 years. Further, in August 2006, the UN reported that attacks against humanitarian staff were at a record high.

In addition, banditry and looting of NGO convoys occur with regularity, according to UN and USAID reports. USAID reported, and some UN officials confirmed, the theft of vehicles, cash, food, and other humanitarian aid. However, many NGO and UN officials told us that the banditry has mainly resulted in the theft of communications equipment and cash, rather than the humanitarian aid in the convoy.
A World Food Program official estimated that less than 1 percent of total food aid in Darfur was lost to banditry, noting that the incidents typically resulted in the theft of petty cash, fuel, or the trucks carrying World Food Program supplies.

Furthermore, humanitarian access to affected residents and IDPs has been curtailed as a result of continued conflict, especially in rural areas. USAID, NGO, and UN officials in Darfur stated that the lack of security has forced humanitarian organizations to limit access to insecure areas. For example, in response to continued attacks and insecurity in West Darfur, in January 2006, the UN Department of Security and Safety announced the withdrawal of UN staff from most of West Darfur for 2 months, and USAID also removed its staff from West Darfur. (Although UN access was restricted, some NGOs did not evacuate the area and were able to continue operations.) According to USAID, the situation dramatically curtailed the ability of organizations to access the affected residents and IDP population in the area and to implement life-saving programs in West Darfur. Additionally, the UN reported that, as a result of significant insecurity in North Darfur, approximately 460,000 Darfurians were cut off from emergency food aid in July 2006, and in August 2006, 355,000 Darfurians remained blocked from receiving food aid. According to the UN, as of August 2006, humanitarian aid organizations' access to IDPs and affected residents in Darfur was at its lowest level since 2003, and areas of inaccessibility were expanding. Meanwhile, an estimated 50,000 people were displaced between June and August 2006.

The government of Sudan and, to a lesser extent, the rebel groups have hindered the humanitarian community from accessing affected residents and IDPs in Darfur. According to UN and NGO officials and USAID, as well as UN reports, the government of Sudan has restricted access to Darfur for NGOs and UN agencies since the initial international humanitarian response by delaying or denying visas and travel permits. NGO officials noted that issuance of visas for staff is often delayed or denied without explanation. In addition, according to NGO officials, although the government of Sudan requires NGO officials to purchase travel permits for all travel within Darfur, government police and other authorities do not always accept the permits and often deny access to NGO staff. According to USAID officials, in September 2006, the government of Sudan restricted movement of U.S. government personnel to within 25 miles of the presidential palace in Khartoum, forcing USAID to remove all personnel from Darfur. This travel ban remained in place as of October 20, 2006. Rebel groups also place requirements on NGOs that delay transportation of humanitarian aid or services into rebel-controlled areas. For example, NGO and UN officials stated that they must contact numerous rebel leaders to safely transport humanitarian aid into a rebel-controlled area.

Sudanese government officials in Darfur deny NGO and UN officials' allegations that the government restricts access and travel in Darfur and insist that the government attempts to help NGOs and UN agencies provide assistance to the people of Darfur. However, USAID, NGO, and UN officials indicated that although the Sudanese government has an official policy of cooperation with humanitarian assistance in Darfur, the government's actions have severely limited humanitarian assistance within the region.
Delayed provision of more than half of U.S. humanitarian aid funding for 2006 limited NGO and UN agency partners' ability to supply needed food assistance and negatively affected their ability to plan for nonfood assistance. The initial U.S. appropriation for fiscal year 2006 supplied approximately 44 percent of the total U.S. humanitarian aid funding for Darfur in fiscal year 2006. With the passage of the supplemental appropriation on June 15, 2006—9 months into the fiscal year—total U.S. food and nonfood assistance for 2006 reached the intended levels, including meeting at least half of the World Food Program's appeal for Sudan. However, because NGOs and UN agencies in Darfur did not receive the funds until late in the fiscal year, they were forced to reduce food rations and temporarily interrupt some humanitarian aid services.

Impact on food assistance. The provision of approximately 56 percent of 2006 U.S. food aid funding late in the fiscal year made it difficult for the UN World Food Program to distribute supplies throughout Darfur in a timely fashion. In particular, because of the 6-month lag between confirmation and distribution of donations, the delay made it difficult for the program to preposition food prior to the rainy season, according to a World Food Program official. Owing in part to this delay, the program announced in April 2006 that, beginning in May, it would reduce rations in Darfur to half the minimum daily requirement (from 2,100 kilocalories to as few as 1,050 kilocalories per day) to extend limited food stocks. In response, the Sudanese government donated sorghum, and the President of the United States directed USAID to ship emergency food stockpiles to Darfur, raising the rations to 84 percent (1,770 kilocalories) of the daily requirement for Darfurians receiving food aid. In June, the cereal component of the ration was fully restored. However, as of October 2006, the World Food Program continued to face gaps in food aid, and, according to program officials, it planned to maintain the 84 percent ration through the end of the calendar year. According to a World Food Program official in Khartoum, if the current level of funding had been available earlier in the year, the ration cuts could have been avoided entirely. A USAID official told us that, although the reduction in 2006 U.S. funding did not significantly decrease the food aid contribution for Darfur, the delay of $137 million (56 percent) of the 2006 U.S. food aid funding until late in the fiscal year negatively affected the food situation in Darfur earlier in the year. This outcome aligns with previous GAO findings that a lack of sufficient, timely donations contributed to food aid shortfalls in other emergency situations.

Impact on nonfood assistance. The delay of U.S. nonfood humanitarian assistance, as well as a reduction in funding from other international donors, led NGO and UN officials to anticipate a negative impact on nonfood humanitarian operations in Darfur. In February 2006, these officials told us that the initial U.S. funding for the year had been less than the amount planned for and needed to ensure continued levels of assistance to Darfur's affected residents and IDPs. As a result of the funding delays, the NGO officials said, their organizations would be forced to cut the services and programs they provided or to reduce their humanitarian aid staff in Darfur.
For example, one NGO official indicated that the reduction in funding had forced the organization to downsize its health program and to transfer responsibility for the clinics to the Sudanese government. Several NGO and UN officials also indicated that without additional funds, key indicators such as the malnutrition and mortality rates, which had improved in 2005, would likely rise again above emergency levels. USAID officials told us in October 2006 that after receiving the supplemental funding, USAID's partners had been able to restore humanitarian programs in Darfur to their previous levels and coverage.

USAID's ability to provide oversight and measure the impact of U.S. humanitarian assistance in Darfur has been limited by reductions in its staff who could directly monitor U.S. assistance or ensure that implementing partners fulfilled reporting requirements. From April 2004 to July 2006, as NGO and UN humanitarian staff in Darfur increased significantly, from 202 to 13,500, USAID's staff in Darfur decreased. During the first 2 years of the conflict, USAID staff ranged between 10 and 20 personnel; within the last 9 months, that number has been reduced to 6 to 8 USAID personnel. USAID officials believe that the remaining number of USAID personnel is adequate to oversee the implementation of U.S. humanitarian assistance and USAID grant agreements, among other responsibilities. USAID officials indicated that other, external factors, such as UN and U.S. Embassy security requirements and restrictions imposed by the government of Sudan, limit the number of staff in Darfur. In addition, USAID officials indicated that they visited camps and communicated with NGO and UN agency officials regularly to discuss operations and difficulties and to assist in delivering humanitarian assistance. However, USAID officials told us that owing to limited time and staff in Darfur, security restrictions throughout the region, the size of Darfur, and the scale of U.S. assistance provided, they could not monitor compliance with all of the grant agreement indicators at locations in Darfur that were targeted for assistance.

Furthermore, required NGO reporting has been incomplete. As a result, USAID lacks information to evaluate NGO operations, monitor their performance, and measure the impact of the assistance provided. According to USAID's Office of U.S. Foreign Disaster Assistance Guidelines for Proposals and Reporting, NGOs must submit proposals outlining the indicators and outcomes expected from the humanitarian activities and services provided with U.S. funds. Each grant agreement also specifies that 90 days after the agreement's expiration, the NGO must submit a final report that includes the cumulative achievements and a comparison of actual accomplishments against the goals, objectives, indicators, and targets established for the agreement. Indicators used by NGOs in proposals include, for example, the crude mortality rate in the target population or the number of latrines constructed. However, we found that 6 of the 15 final reports that NGOs were required to submit by June 1, 2006, had not been submitted to USAID. Moreover, most of the reports that NGOs submitted did not include all required information. USAID's Darfur Program Manager stated that because officials maintain constant communication with NGOs and conduct evaluations of activities in Darfur, the agency is aware of implementing partners' accomplishments, or lack thereof, in Darfur, despite the incompleteness of most NGO reports.
However, the reports and indicators are essential in monitoring and evaluating humanitarian operations, given that USAID staff are often constrained by limited access due to insecurity and violence throughout Darfur. In response to our observations, USAID acknowledged the importance of obtaining required reports and has taken steps to ensure reporting compliance from its NGO partners. As a result, USAID reported that in July 2006 it received all quarterly reports from current NGO partners.

Challenges in accurately counting the populations of IDP camps have made it difficult for NGOs and UN agencies to ensure that all U.S. humanitarian assistance was provided to the intended recipients. In addition, some IDPs used humanitarian assistance for purposes other than those for which it was intended.

In part because the IDP camps' large size makes it difficult to control who receives assistance, some assistance has been distributed to recipients other than those targeted. For example, UN humanitarian profiles show that between December 2004 and October 2005, IDPs in Kalma camp, the largest camp in Darfur, were estimated at between 103,000 and 163,000. The World Food Program distributed food aid for IDPs based on these estimates. Prior to October 2005, several efforts to count the actual number of IDPs in Kalma camp were determined to be invalid because of problems with the counts and an inability to stop non-IDPs from participating. An October 2005 count was completed by more than 400 staff from six NGOs, with help from USAID staff, and with assistance from Sudanese government troops—who surrounded the camp to stop non-IDPs from entering—and AMIS civilian police, who provided security inside the camp. On October 4, 2005, a count of 87,000 was declared accurate, approximately 70,000 fewer IDPs than the upper end of the previous estimates. According to a USAID official, residents from the nearby state capital of Nyala had previously received improper food distributions at the camp. According to USAID, without accurate counts of camp populations, the humanitarian community struggles to distribute food aid appropriately to the populations with the greatest need.

Not all resources and assistance are being used as intended, although USAID and NGO officials indicated that this is typical of any emergency situation, especially one of this size and duration. For example, in Abu Shouk camp, we observed IDPs using treated drinking water to make bricks, either for their own shelters or for sale on the market. According to a UN official, IDPs in the camp used approximately 30 percent of the available water to make bricks; as a result, 8 of the 30 water pumps in Abu Shouk dried up.

Although the African Union's peace support operation has reportedly contributed to a reduction of large-scale violence in Darfur, AMIS has carried out its mandate in Darfur in an incomplete or inconsistent manner. To monitor compliance with the cease-fire agreement, the first component of its mandate, AMIS military observers in Darfur have actively investigated alleged cease-fire agreement violations. However, the resulting reports have not been reviewed according to established procedure or widely publicized to identify parties who have violated the agreement. To build confidence and to improve security, the second and third components of its mandate, AMIS troops have taken actions such as conducting patrols and escorting IDP women who leave camps to forage for firewood.
In addition, AMIS troops have intervened to stop impending violence against civilians and provided escorts for NGO convoys in some instances, although AMIS has not intervened in other instances. Further, the AMIS civilian police are working with Sudanese police to improve law enforcement, but the civilian police have encountered difficulties with the Sudanese authorities. To support AMIS's efforts to meet its mandate, the U.S. government provided about $280 million from June 2004 through September 2006, according to State, primarily to build and maintain the 32 camps that house AMIS forces throughout Darfur.

AMIS is viewed by many as having made an important contribution in Darfur. U.S. and other officials credit AMIS with decreasing large-scale violence simply through the deterrent effect of its presence in the region. State officials have emphasized that AMIS participants have a strong desire to be effective and make the AMIS initiative work and that the presence of AMIS's patrols has had a positive impact. Further, a senior UN official told us that AMIS "jumped into Darfur" with few resources in a genuine attempt to "put out this fire" and that AMIS's presence has had a notable impact. In addition, State and UN officials noted that AMIS forces were deployed to Darfur quickly in comparison with other international peacekeeping missions.

AMIS has taken a number of positive actions in Darfur in response to its mandate to (1) monitor compliance with the cease-fire agreement, (2) assist in confidence building, and (3) contribute to improving security. However, some of these actions have been executed in an incomplete or inconsistent manner, limiting the extent to which AMIS has been able to fulfill its mandate.

To address the first component of its mandate, AMIS military observers in Darfur investigated and identified a number of violations of the 2004 cease-fire agreement. However, the Joint Commission has not consistently reviewed the resulting Ceasefire Commission investigation reports. Further, the publicly available record of recent cease-fire violation investigations is incomplete, making it impossible to establish how many total cease-fire violations have been identified by the Ceasefire Commission since its creation in 2004 and which parties have been responsible for recent cease-fire agreement violations.

Ceasefire Commission reports provide specific information regarding violations. The commission found that all three parties to the conflict had committed violations, many of which occurred in South Darfur. Of the 80 allegations of cease-fire agreement violations that we reviewed, the Ceasefire Commission was unable to make a determination in 30 instances, often because an outside party (such as the Janjaweed) had allegedly committed the violation. These cases involved acts such as the killing of numerous civilians at a time and attacks on villages. In several cases, the Sudanese government was accused of fighting alongside the Janjaweed. In 3 of the cases we reviewed, the Ceasefire Commission determined that no violation had occurred. For the remaining 47 allegations of cease-fire agreement violations, the Ceasefire Commission found 54 violations.

Sudanese government. The commission found that the Sudanese government had committed 27 cease-fire agreement violations.
Among these violations, 9 involved civilian deaths; 10 involved village attacks; 7 involved attacks, harassment, or intimidation of civilians; and 7 involved Sudanese troop movements into new territory without proper notification to the Ceasefire Commission.

SLM/A. The commission found that the SLM/A had committed 25 cease-fire agreement violations. Among these violations, 6 involved attacks on Sudanese facilities (e.g., military camps, police stations, convoys); 7 involved abductions of civilians, local political representatives, or Sudanese government personnel; 2 involved village attacks; and 2 involved civilian deaths.

JEM. The commission found that the JEM had committed 2 cease-fire agreement violations, both of which involved attacks on Sudanese facilities.

The Ceasefire Commission's recommendations in the reports vary from general to specific. General recommendations include urging the parties to the conflict to adhere to the cease-fire agreement; reminding them that they are required to give the commission prior notice of any administrative troop movements; and requesting party leaders to educate their members about the provisions of the agreement. More specific recommendations include those recommending that the Sudanese government disarm, neutralize, or restrain the Janjaweed and that the SLM/A stop looting, return looted goods, and release those whom it had abducted.

In reports issued after November 2004, the Ceasefire Commission frequently appealed to the Joint Commission to become more involved in various aspects of the monitoring process. However, although the reports provide detailed information regarding parties that violated the cease-fire agreement and the nature of the violations, African Union and U.S. officials told us that the Joint Commission had not met regularly, had been ineffective in reviewing reports, and had no means of forcing the violating parties to take action based on report results. Further, although the Joint Commission has condemned cease-fire violations by the parties to the conflict and asked all parties to end all attacks, a DOD official noted that officials at African Union headquarters were not pushing the Joint Commission to review or approve Ceasefire Commission reports.

African Union and U.S. officials emphasized that because the reports are available on the African Union's Web site and publicly identify violators of the cease-fire agreement, the reports pressure the parties to the conflict to improve compliance with the agreement. The officials viewed this transparency and resulting pressure as a central benefit of the reports. However, we found that the public record of investigated cease-fire violations is incomplete, making it impossible to establish the total number of alleged or confirmed violations and to identify all responsible parties. For example, we were unable to open 37 of the 116 Ceasefire Commission reports listed as available on the African Union's Web site. Further, we were unable to locate any reports subsequent to September 2005 to validate other claims regarding violations. For example, no Ceasefire Commission reports are publicly available to substantiate or refute a January 2006 report, prepared by the Chairperson of the African Union Commission and submitted to the Peace and Security Council, stating that cease-fire violations had escalated since October 2005 and that some of the most serious violations had occurred since that time.
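As an aid in tracking the violation counts discussed above, the following minimal sketch (written in Python for illustration; it is not part of the report's methodology, and the names are our own) tallies the figures cited in this section. It also makes explicit that a single allegation can yield more than one confirmed violation, which is why the remaining 47 allegations produced 54 confirmed violations:

    # Illustrative tally of the cease-fire allegation review discussed above.
    # All counts are taken from the figures cited in this section.
    allegations_reviewed = 80
    no_determination = 30      # often because an outside party allegedly committed the violation
    no_violation_found = 3
    remaining = allegations_reviewed - no_determination - no_violation_found
    assert remaining == 47

    # Confirmed violations by party; one allegation can yield multiple
    # confirmed violations, so the total exceeds the 47 remaining allegations.
    violations_by_party = {"Sudanese government": 27, "SLM/A": 25, "JEM": 2}
    assert sum(violations_by_party.values()) == 54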
To fulfill the second and third components of the mandate, AMIS forces have provided patrols and escorts for IDPs, NGOs, and U.S. contractor staff; intervened to prevent violence; and collaborated with Sudanese government police. However, in some instances, AMIS patrols and escorts have not been able to prevent attacks or to provide needed services; AMIS forces have not intervened consistently to prevent violence; and AMIS civilian police have had difficult relations with the Sudanese police.

AMIS Has Provided Patrols and Escorts but Has Not Prevented All Attacks or Provided All Needed Protection

To build confidence among affected residents and IDPs and create a more secure environment, AMIS troops have taken actions such as conducting patrols and providing escorts for vulnerable groups. However, AMIS escorts and the escorted groups have sometimes encountered violent attacks, and AMIS has had insufficient resources to provide all needed escort services.

Patrols. AMIS officials at several AMIS camps we visited told us that AMIS military observers or civilian police try to conduct about two patrols each day, for example, to make AMIS's presence known and to interact positively with local communities, collect information, or investigate an alleged cease-fire agreement violation. We accompanied one confidence-building patrol near the North Darfur town of Kabkabiya; AMIS military observers interviewed local residents and a community leader to identify any problems that required AMIS attention. (See fig. 15.)

Escorts. To further build confidence and improve security, AMIS troops have also provided escorts for groups of women foraging for firewood outside IDP camps. According to African Union and U.S. officials, the presence of AMIS troops has prevented these groups from being attacked. We accompanied an AMIS escort of a group of women as they walked more than 9 miles outside the town of Kass in South Darfur to find firewood for the next several days. Escorted by AMIS protection force troops and civilian police, as well as Sudanese government police, the 79 women went about their activities freely and without incident (see fig. 16). AMIS officials also told us that they have escorted NGO convoys to prevent theft and banditry.

However, in several instances, AMIS troops or those being escorted have been threatened or killed. For example, several people were killed in rebel attacks on convoys, including four Nigerian soldiers and two local contractor staff in October 2005. In July 2006, 32 AMIS soldiers escorting a fuel convoy in North Darfur were abducted by one SLM/A faction; although the soldiers were eventually released, two fuel tanker drivers, the fuel tankers, and four AMIS vehicles were not released at that time. According to a senior U.S. contracting official working in Darfur, the drivers and tankers were released in October 2006, but the vehicles have not been returned. According to a December 2005 African Union-led assessment of AMIS (with participation from the UN, European Union, and United States), such incidents "undermine the Mission's credibility in the eyes of civilians and embolden those who may target AMIS." Further, a UN official emphasized that AMIS's ability to provide services such as firewood escorts is limited and that AMIS cannot begin to cover all instances where such escorts would be useful.
AMIS Has Intervened to Prevent Violence in Some Cases but Not in Others

AMIS troops have also intervened to protect civilians under imminent threat of violence, as directed by the African Union mandate. For example, according to the December 2005 assessment of AMIS, AMIS troops were deployed to Zalingei in West Darfur to prevent retaliation against IDPs when there was heightened tension following the kidnapping of civilians by the SLM/A. Another AMIS deployment, to Muhajariya, halted a Sudanese military advance on the town that could have resulted in the substantial displacement of IDPs. In addition, following attacks on the town of Labado in South Darfur in late 2004, a deployment of AMIS troops in January 2005 deterred further attacks and led to the return of many town residents, who began to repair their homes and rebuild their lives.

However, in other instances, AMIS has not intervened to prevent violence. For example, according to UN and U.S. documentation, AMIS did not maintain a regular presence around Mershing and its surroundings in South Darfur despite concerns about security in the area and repeated requests from the international community for a continuous AMIS presence. Ultimately, armed militia attacks resulted in the deaths of several IDPs and subsequent displacement in early 2006. In addition, an NGO official told us that AMIS was slow in responding to requests for assistance from NGOs caught in a battle between SLM/A and Sudanese government forces in the Jebel Marra area; however, AMIS did help evacuate NGO staff from the area 24 hours after the fighting began. According to an AMIS commander, although AMIS has taken preemptive action to stop attacks or skirmishes, the territory is too large for AMIS to be able to prevent such violence overall.

AMIS Has Collaborated with Sudanese Police, but Relations Have Been Difficult

AMIS has worked with the Sudanese police to improve security, but some of its relations with the Sudanese police have been problematic. AMIS civilian police officers reported to us that they were working to ensure that the Sudanese police act on cases provided by the AMIS civilian police. AMIS civilian police also noted that, where appropriate, they have encouraged the use of village councils to resolve disputes, rather than referring every case to the Sudanese authorities. However, some AMIS civilian police officers reported that relations with the Sudanese police had at times been difficult. AMIS civilian police officers told us that Sudanese police had been slow to act on cases provided by AMIS, that these cases often did not result in convictions or adequate punishment, and that it can be difficult to obtain information from the Sudanese police regarding the status of referred cases. In addition, AMIS civilian police mentioned that Sudanese police have at times perpetrated violent acts against citizens of Darfur and AMIS police. Moreover, the civilian police have had difficulty gaining access to some areas that are controlled by rebel groups and lack an official Sudanese police presence.

Further, the limited and often misunderstood role of AMIS's civilian police frustrated IDPs and NGO staff, who expressed the view that these police provided few useful services. IDPs and NGOs told us that they did not understand why the civilian police did not get involved when problems arose. Their frustration was heightened by the fact that the civilian police have visible stations adjacent to IDP camps.
AMIS and UN officials also noted that because the civilian police are unarmed, they require AMIS protection force escorts, which are not always available.

The United States has supported AMIS primarily by funding the construction and maintenance of AMIS camps in Darfur by a contractor, PAE Government Services, Inc. (PAE). Other international donors have provided funding or goods and services to support AMIS's peacekeeping operations.

To support AMIS's efforts to carry out its mandate, the U.S. government expended about $240 million from June 2004 to August 2006 and obligated another $40 million in September 2006, primarily to build and maintain the 32 camps that house AMIS forces throughout Darfur, according to a State official who tracks this funding. African Union and U.S. officials told us that camp sites were chosen to be near population centers and known conflict areas. State contracted with PAE to build and maintain the camps as well as to maintain AMIS vehicles and communications equipment (see fig. 17). PAE is also maintaining armored personnel carriers provided by Canada; the Canadian government has provided State with more than $20 million for fiscal year 2006 for this purpose. Finally, PAE is responsible for hiring, housing, and compensating U.S. military observers (referred to by State officials in Darfur as "U.S. representatives" owing to their range of contributions to AMIS beyond observing activities). Although 16 U.S. military observers are authorized, only 11 were on the ground in Darfur during our February 2006 visit.

Construction of the 32 camps, between June 2004 and December 2005, involved a number of challenges. According to a senior PAE official in Darfur, key costs associated with building the camps included supplying generators and, particularly as construction was beginning, transporting supplies and equipment via aircraft. Construction of the camps, which together can house 9,300 people, was complicated by the difficulty of finding international staff willing to come to Darfur and local staff possessing adequate skills. In addition, the remote locations of camp sites, combined with the inadequate condition of roads throughout the region, made it difficult to transport building supplies to the sites; PAE officials told us that in some cases, supplies were transported across insecure areas via donkeys. Further, the various augmentations of AMIS over time (including the introduction of the civilian police component) had to be incorporated into preexisting building plans. Moreover, the relatively small size of some of the land parcels provided by the government of Sudan made it difficult for PAE to, for example, construct sufficient perimeter protection around camps. Other sites provided by the government are in vulnerable locations; for instance, PAE officials identified one camp that was built in a natural "bowl," making protection problematic, although steps were recently taken to relocate portions of this camp.

According to PAE and State officials, PAE's current costs for maintaining the camps, as well as AMIS communications equipment and vehicles, are about $7.8 million per month. PAE faces additional challenges in maintaining AMIS facilities, with the provision of water a key difficulty. According to a PAE situation report dated May 5, 2006, there are significant concerns regarding the provision of an uninterrupted supply of water to several AMIS camps. In some cases, unprotected water boreholes have been sabotaged.
In the past, PAE also experienced the theft of jet fuel. A PAE official noted that other environmental challenges to maintaining the camps include heat, ultraviolet rays, and sand.

The European Union, also a key AMIS donor, has provided about $200 million in direct budget support for AMIS operational costs such as per diem and food, according to a State official. Many other donor contributions have been "in kind"—that is, goods and services rather than direct funding. For example, the Canadian government loaned AMIS 25 helicopters and 105 armored personnel carriers; the British government provided vehicles and ground fuel; the Dutch government provided communications equipment; and the Norwegian government is building civilian police stations near IDP camps. Further, since October 2004, the UN has provided assistance to AMIS via a technical assistance cell working in Addis Ababa and funded by the UN Mission in Sudan. According to an official in the cell, it has provided services such as technical support (including an August 2005 UN-led exercise to prepare AMIS for troop deployments and identify areas where capacity building was required) and training (such as arranging training for military observers and bringing a financial officer to African Union headquarters for 3 months to assist with financial management). NATO has also provided training for AMIS personnel and has assisted with troop rotation efforts.

AMIS and U.S. government officials, among others, have identified numerous factors as contributing to AMIS's difficulties in meeting its mandate. These factors include inadequacies in management, organization, and capacity; a relatively small force; resources that have been constrained or inefficiently allocated; and a lack of information regarding, and cooperation from, parties to the conflict. As AMIS has faced these operational and other challenges, the UN has approved a UN peacekeeping operation to take over in Darfur when AMIS's mandate expires; however, as of October 2006, the Sudanese government had rejected the proposal. In June 2006, following a NATO offer, the African Union formally requested assistance from NATO in, among other things, identifying lessons learned from AMIS operations; however, according to a State official, African Union headquarters had taken no further action to pursue this review as of August 2006. Meanwhile, instability and violence have continued in Darfur.

AMIS has reportedly experienced numerous difficulties in its management, organization, and capacity that have limited its ability to carry out its mandate. Regarding AMIS management, U.S., UN, and other sources have commonly expressed the view that AMIS's command and control has been inadequate and confused. A UN-led assessment of AMIS in August 2005 stated, "The evolution of the mission has been such that it has depended on individual components conducting their own planning rather than tackling problems from a mission perspective. This has led to considerable disparity between components, duplication of effort, and the potential for planning at cross purposes." A State official emphasized that AMIS has had no clear lines of authority between Addis Ababa, El Fasher, and the field and that a lack of coordination has made a rapid response to crisis situations problematic. A Refugees International study reported that "AMIS has suffered from language and cultural barriers between officers from various countries, confusion in procedures, limited future planning, and ineffective communications systems.
Much of this stems from lack of peacekeeping experience." The Brookings Institution–University of Bern study also stated that AMIS command and control had been slow and cumbersome and that "[t]he unwieldy bureaucracy at African Union headquarters hampered all aspects of deployment; there is no institutional expertise for peace operations yet in the [African Union]." Moreover, AMIS leadership has demonstrated inconsistency in interpreting the AMIS mandate, creating confusion among AMIS troops and civilians and limiting its protection of civilians within its capabilities. AMIS leadership's willingness to take certain actions to meet the mandate—for example, to protect civilians—has varied throughout Darfur, as already noted. State officials have observed that AMIS's willingness to actively protect Darfur residents to the extent provided for in the mandate has been "uneven." A U.S. official we met with in Sudan noted that in some cases, the degree to which AMIS's mandate was robustly interpreted seemed to depend on leadership personalities. According to the December 2005 African Union-led assessment of AMIS, "military and police mission components are not operating in a sufficiently joint and coordinated manner." The Brookings Institution–University of Bern study noted a similar problem, stating that the civilian police "rely on the AMIS protection force for their movements, but they are not currently integrated into military planning structures." Many parties, including U.S. and UN officials, have called for the creation of a joint operations center that would serve as the focal point for the coordination and integration of AMIS military and civilian police operations; however, such a unit has not yet been created. On the other hand, a joint logistics operations center has been established to improve the logistical coordination of the AMIS components. African Union, U.S., and other sources have identified problems with the capacity and experience of the African Union and AMIS as a key factor negatively affecting AMIS performance. According to the Brookings Institution–University of Bern study, "For many commanders, this African Union mission is their first operational experience." Troops are also viewed as having limited experience. For example, according to a Human Rights Watch report, "[AMIS] troop-contributing countries have sometimes struggled to identify and deploy properly trained staff officers, particularly those with appropriate language skills…. Most troop-contributing countries have previously contributed to UN missions that were often western-led operations, thus leaving the troops with limited operational experience above the tactical level." An African Union official and a U.S. official noted separately that, although AMIS has training standards, little is done to verify that AMIS troops arriving in Darfur have received appropriate training. Further, according to the Brookings Institution–University of Bern study, the quality of AMIS police is not adequate, with limited screening prior to deployment to Darfur. The AMIS force, with its 7,271 personnel, has been characterized as a relatively small contingent that cannot effectively monitor and patrol all of Darfur, an area almost the size of France with a punishing environment (however, some regions in Darfur, such as the far north, are largely unpopulated). According to State officials, the small size of the force has limited AMIS's ability to patrol such a large, difficult region and sufficiently interact with residents and other parties in Darfur.
Further, according to a Refugees International report, "AMIS doesn't have enough troops to sufficiently protect itself, let alone protect displaced civilians and humanitarian organizations." In addition, an International Crisis Group document stated in July 2005 that as many as 15,000 troops were needed in Darfur to protect villages and IDPs, provide security for humanitarian operations, and neutralize militias. The December 2005 African Union-led joint assessment of AMIS reported that the absence of an authorized battalion had a significant operational impact and overstretched existing personnel. The African Union and other parties have stated that AMIS does not have sufficient resources, including equipment and translators, to conduct the activities necessary to fulfill its mandate. A senior African Union official told us that AMIS's reliance on outside donors has resulted in a lack of control for the mission because basic operational elements, such as facilities, logistics, and funding, rest in the hands of other parties. According to January 2006 African Union documentation, the African Union has not been able to provide critical resources, such as vehicles and communications equipment, in a timely fashion; as a result, AMIS has functioned with about half of the needed logistical capacity. U.S. officials have countered that the African Union has at times been slow to respond to offers of assistance or to prioritize resource needs. During some periods, donor support for AMIS has been less than what the African Union had expected, with African Union documentation stating that a lack of funds has been a major constraint. According to African Union officials, a lack of resources such as vehicles and long-range communications equipment has complicated AMIS operations. For example, one AMIS commander told us that AMIS has inadequate transportation equipment and communications equipment, as well as a lack of night vision equipment. AMIS officials whom we interviewed expressed their concern that the lack of adequate communications equipment limited their ability to interact with different camps in the region. Further, an AMIS civilian police official noted that the civilian police often receive less equipment than the military component of AMIS, which has resulted in situations in which civilian police must rely on military colleagues' equipment to communicate with their civilian police colleagues. One AMIS commander also noted that AMIS required more printers, computers, and photocopiers. However, a DOD official noted that until AMIS makes the most efficient use of its current resources, such as vehicles and communications equipment, it is unclear whether more resources are needed. Further, the December 2005 African Union-led report on AMIS notes that, where civilian police matters are concerned, equipment is both insufficient and incorrectly distributed. A lack of translators who can facilitate discussions between AMIS and the residents of Darfur has also been repeatedly cited as a central problem hindering AMIS's ability to monitor compliance with the cease-fire agreement or build confidence. According to an official from the African Union's Darfur Integrated Task Force, AMIS needs about 200 interpreters; however, as of February 2006, AMIS had only about 70 interpreters. The lack of interpreters has been attributed to the difficulty in finding people who speak both Arabic and English. One U.S.
military observer told us that many uneducated people in Darfur speak only their tribal language, further complicating AMIS's ability to ensure effective communication. In addition, we were told that at times, AMIS patrols used representatives of the parties to the conflict as translators, which meant that AMIS officials could not verify that translators were conducting the interview in an objective fashion, asking the required questions, or reporting responses accurately. In one example provided by an AMIS civilian police official in El Daein in South Darfur, an SLM/A translator stated that a woman had said she was "helped" in a particular instance, when in fact she had stated that she had been violently attacked. Someone within the investigative team was able to discern that this mistake had been made and communicate it to the rest of the team. IDPs also voiced frustration over the lack of civilian police translators able to communicate with IDPs and respond to IDPs reporting violence in the camps. Several analyses of AMIS have commented on its lack of capacity to collect needed intelligence regarding the situation in Darfur. The International Crisis Group has noted that "AMIS does not have an intelligence apparatus or collection capacity and does not actively analyze or disseminate intelligence." The Brookings Institution–University of Bern study further stressed that "[g]ood intelligence is vital in Darfur, yet AMIS's capacity to gather, analyze and act on information has been very weak." According to a former U.S. military observer to AMIS, "The African Union does not understand the importance of having an 'intelligence cell' and of having good information on the command structure, for example, of the Janjaweed." The December 2005 African Union-led assessment of AMIS emphasized, "If AMIS operations are to be effective, the use of intelligence is essential," and further noted that the lack of intelligence collection, analysis, and dissemination seriously reduces the effectiveness and focus of operations. The effectiveness of AMIS is directly related to the level of cooperation it receives from the parties to the conflict. Thus far, that cooperation has been extremely inconsistent. The government continues to create bureaucratic obstacles to AMIS's ability to operate freely. These include curfews, early airport closings, and long delays in issuing permits and visas. AMIS has not, as it should have, protested against these restrictions on movements, notably the curfew. The government's use of white vehicles and aircraft (which resemble those of AMIS) in military operations is also inconsistent with its commitments to support the Mission…. The [SLM/A] and JEM bear an equal responsibility for accepting and supporting the presence of AMIS. Ongoing obstruction of activities by the rebels has included obstruction of movement, threatening patrols, harassment, theft of equipment, and even abduction of personnel. U.S. and UN officials emphasized an instance where the government of Sudan detained the 105 Canadian armored personnel carriers at the border and released them only after intense external pressure. A U.S. embassy official in Addis Ababa, Ethiopia, noted that Dutch communications equipment had been in Khartoum customs for months, demonstrating how the Sudanese government can obstruct, rather than facilitate, AMIS operations. In addition, all parties to the conflict—the Sudanese government, the SLM/A, and the JEM—have been cited several times for violating the 2004 cease-fire agreement.
Representatives of these parties to Ceasefire Commission investigations, particularly those of the Sudanese government, routinely file objections to final report conclusions. According to an International Crisis Group report, "AMIS was born out of the N'djamena agreement, which lacked a true enforcement mechanism and was based on the assumption of compliance and goodwill by the parties. International pressure on those parties to respect their commitments has been ineffective, thus undermining the mission." While AMIS has faced challenges in Darfur, the UN and NATO have offered to assist the African Union in, respectively, supplying a peacekeeping force when AMIS's mandate expires at the end of December 2006 and identifying lessons learned from AMIS operations. The U.S. government and other parties have supported the proposed transition of AMIS responsibilities to a UN peacekeeping operation. In January 2006, the African Union's Peace and Security Council officially declared its approval, in principle, for the transition of AMIS to a UN operation. In March, the council reaffirmed this position, and in May it declared that "concrete steps should be taken to effect the transition from AMIS to a UN peacekeeping operation." The UN Security Council subsequently adopted a resolution endorsing this African Union decision to transition AMIS to a UN peacekeeping operation and emphasizing that a UN operation would have, to the extent possible, a strong African participation and character. In August 2006, the UN Security Council adopted a resolution expanding UNMIS's mandate and calling for an UNMIS deployment to Darfur. According to a State official, a UN operation would be expected to build on AMIS efforts. Some portion of troops already participating in AMIS would be "bluehatted"—that is, could transition to UNMIS. According to a State official, under this scenario, the UN mission would have a unified command for the entire operation, with separate commanders for UNMIS efforts in southern Sudan and Darfur. According to the Department of State fiscal year 2007 budget request and a State official, this UN effort in Sudan would cost the U.S. government about $442 million in fiscal year 2007; a State official roughly estimated that the Darfur portion of this operation would cost the United States between $160 million and $180 million for the year. As of October 2006, the Sudanese government had refused a transition to a UN force in Darfur. However, in October the Sudanese president expressed support for a September offer by the UN Secretary-General to provide assistance to AMIS. The UN assistance package consists of equipment and personnel dedicated to supporting AMIS in the following ways: logistical and material support, military staff support, advisory support to civilian police, and other staff support in the areas of assistance in implementing the Darfur Peace Agreement, public information, mine action, and humanitarian coordination. In addition, in June 2006, following an offer by NATO, the Chairperson of the African Union Commission requested that NATO provide, among other things, assistance in reviewing AMIS operations in Darfur to identify "lessons learned," which could help the African Union better execute any future peace support efforts. However, a State official reported that, although the Chairperson of the African Union Commission formally accepted NATO's offer of this assistance, as of August 2006, African Union headquarters had taken no further action to pursue the review.
Such reviews are typically conducted after peacekeeping operations are completed; for example, the UN Department of Peacekeeping Operations' Best Practices Section undertakes such reviews following UN peacekeeping efforts. Meanwhile, instability and violence continued in Darfur, furthering calls for UN involvement. According to a report prepared by the Chairperson of the African Union Commission, as of May 2006, "the region has continued to witness persistent insecurity, with ceasefire violations, banditry activities, hijacking of vehicles, attacks on villages and killing of unarmed civilians by the various parties, particularly the janjaweed." One NGO reported 200 sexual assaults around Kalma camp in South Darfur within 5 weeks during the summer of 2006, and the African Union reported that two AMIS soldiers were killed in mid-August. In August 2006, the environment in Darfur remained insecure, with attacks and displacement continuing and, during some periods, worsening over time. State has noted that the Sudanese government offensive that began in August 2006 against parties that did not sign the Darfur Peace Agreement has directly impacted the ability of AMIS to conduct operations, the African Union's ability to implement the agreement, and the delivery of humanitarian aid. A senior State official reported that "Darfur is on the verge of a dangerous downward spiral. The parties are rearming and repositioning to renew their fighting." The level of acceptance of the peace agreement overall in Darfur is uncertain, owing to a general lack of information throughout the population regarding the terms of the agreement as well as concern over the fact that the smaller SLM/A faction and the JEM declined to sign the deal. UN officials have warned that continued militia attacks on IDPs are affecting implementation of the peace agreement and emphasized that successful implementation of the agreement is key to peace in Darfur, in the Sudan, and in the wider region. In September 2006, an African Union Peace and Security Council communiqué noted that "the security situation remains volatile and continues to deteriorate even further in some parts of Darfur, consequently worsening the humanitarian and human rights situation, and the current build-up of forces by all the parties poses further risks and challenges to the peace efforts." On September 19, 2006, the U.S. President named former USAID Administrator Andrew Natsios as a Presidential Special Envoy to lead U.S. efforts to resolve outstanding disputes in Darfur. As the primary donor of humanitarian assistance for Darfur, the United States has provided essential aid for the people of Darfur and improved the health and livelihood of IDPs and affected residents. Without U.S. assistance, the humanitarian organizations responding to the crisis would likely have been incapable of providing coverage to much of the affected population. The U.S. contribution to building and maintaining all AMIS facilities has also been essential, along with other donor assistance, to AMIS's ability to pursue its mandate. As insecurity continues in Darfur, such support may be required well into the future. At the same time, delayed humanitarian assistance has hindered NGO and UN operations, jeopardizing these USAID partners' ability to provide the services needed to maintain improved levels of health in affected and IDP communities.
Further, continued resistance and lack of cooperation from the government of Sudan, as well as continued insecurity and conflict within Darfur, have made it nearly impossible for humanitarian organizations to provide consistent and complete coverage to the affected residents and IDPs throughout Darfur. Although USAID has taken steps to ensure more complete reporting, the limitations in its oversight of U.S. assistance have made it difficult to accurately determine the impact of U.S. humanitarian assistance. The fact that the violence in Darfur has not abated, and has even worsened in some instances, indicates the region's need for continued assistance. Although AMIS is seen as having contributed, through its presence in Darfur, to decreasing large-scale violence, its fulfillment of its mandate has been limited by the incompleteness or inconsistency of some of its actions—such as efforts to protect civilians—in addition to numerous operational challenges. Some of these challenges—for example, AMIS's small size, its resource constraints, and the lack of cooperation from the parties to the conflict—have remained beyond its control. However, other challenges, such as AMIS's inadequate management, organization, and capacity, may stem from the African Union's lack of experience with peace support efforts. At the same time, the ongoing and increasing violence in Darfur, as well as AMIS's added responsibilities under the May 2006 peace agreement, make it likely that the challenges AMIS has faced will intensify. The proffered NATO assistance in reviewing AMIS operations—a typical "lessons learned" activity following a peacekeeping initiative—could provide a useful critical analysis of these challenges and their root causes. The resulting insights could assist the African Union in strengthening AMIS, if its mandate is renewed, as well as in planning and executing any future peace support efforts. Absent a stronger AMIS or intervention by another international party such as the UN, the conflict in Darfur could continue to disrupt and destroy the lives of Darfurians indefinitely. We recommend that the Secretary of State encourage the Chairperson of the African Union Commission to ensure that an appropriate body, such as NATO, provide assistance for an assessment of AMIS operations to identify the key challenges AMIS has faced and the reasons for those challenges. Such a "lessons learned" assessment would provide information necessary to allow (1) the African Union to strengthen its future peace support planning and operations and (2) the donor community to support future African Union peace support efforts in a manner that could minimize difficulties such as those encountered by AMIS. We provided a draft of this report to the Departments of State and Defense as well as USAID. We received written comments from the Department of State and USAID. The Department of State supported our recommendation and noted that the report presents a balanced and accurate picture of the situation in Darfur. The department also suggested that the report provide additional details or characterizations regarding certain, primarily AMIS, issues. For example, State suggested that the report should (1) emphasize the speed with which AMIS forces were deployed to Darfur and (2) note that the Sudanese government's offensive against parties that did not sign the Darfur Peace Agreement has directly impacted the ability of AMIS to conduct operations. We incorporated such information into the report as appropriate.
See appendix V for a reproduction of State's letter and our response. USAID commented that, in general, it found the report to be a comprehensive assessment of USAID's involvement in Darfur but said that we should include additional information in our discussions of areas such as the number of USAID staff working in Darfur and the variety of efforts used by the agency to monitor grants. Specifically, USAID stated that our reference to reduced staff in Darfur was incomplete and felt that our discussion of incomplete reporting did not highlight other monitoring efforts, such as site visits and meetings with NGOs. We made adjustments as appropriate. See appendix VI for a reproduction of USAID's letter and our response. DOD provided no comments on the draft report. As arranged with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of State, the Secretary of Defense, the Director of U.S. Foreign Assistance and USAID Administrator, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix VII. This report examines (1) U.S. humanitarian assistance provided to help relieve the crisis in Darfur, (2) challenges that the U.S. Agency for International Development (USAID) and its implementing partners have encountered, (3) the African Union's efforts to fulfill its peace support mandate in Darfur, and (4) factors affecting the implementation of this mandate. We collected data on international contributions (in dollar amounts) for Darfur provided by the UN Resource Tracking Service from September 2003 through June 2006. The amounts provided by the UN contain both amounts committed and amounts pledged for Darfur by international donors. We did not include pledges and commitments from international donors that support the refugees located in Chad, because we did not review U.S. obligations to refugees in Chad. We made this decision because (1) security restrictions and conflict in the area prevented us from observing U.S.-funded activities in Chad and (2) the support for refugees in Chad was small in comparison with assistance provided to Darfur. We determined that the data were sufficiently reliable for the purpose of broadly comparing the United States' contributions with those of other international donors. We noted several limitations in the data, notably that the data include verbal pledges that were self-reported to the UN Resource Tracking Service by the donors. According to a UN official, the data may exceed other, similar UN data on donor contributions, because they include verbal pledges that have not been formally submitted to and verified by UN sources. Furthermore, we were unable to determine the reliability of financial records and the dollar amounts reportedly pledged by donors. To review U.S. funding of humanitarian assistance—our first objective—we collected and reviewed U.S.
obligations data for assistance for Darfur from USAID's Office of Foreign Disaster Assistance, Office of Transition Initiatives, and Office of Food for Peace, as well as the Department of State's (State) Bureau of Population, Refugees, and Migration. To assess the reliability of these data, we interviewed State and USAID officials regarding their methods for managing and tracking the obligation data, and we compared these data with the amounts listed in State's and USAID's agreements with nongovernmental organizations (NGO) and UN agencies. According to a USAID official, expenditure data for the Office of Foreign Disaster Assistance are not tracked in the office's reporting system, but the data are reconciled on a daily basis and include any amounts that may have been de-obligated. A USAID Food for Peace official indicated that the office's tracking system is also reconciled on a regular basis. Therefore, we concluded that the data we collected on obligations from each agency are sufficiently reliable for the purpose of reviewing U.S. humanitarian assistance for Darfur from October 1, 2003, through September 30, 2006. To review the activities and programs undertaken with U.S. humanitarian assistance, we reviewed USAID grant agreements. We interviewed USAID and State officials in Washington, D.C., as well as UN officials located in New York who were involved in humanitarian assistance for Darfur. In February 2006, we traveled to Khartoum and Darfur, Sudan, to examine the activities supported by U.S. humanitarian assistance. In Khartoum we met with U.S. implementing partners from NGOs and UN agencies, as well as an official from the government of Sudan's Ministry of Foreign Affairs. In addition, we visited seven camps for internally displaced persons (IDP)—Abu Shouk, Al Salaam, El Serif, Kalma, Kass, Otash, and Zam Zam—located in North and South Darfur to observe activities and programs implemented with U.S. funds. We observed a variety of programs and activities supported by U.S. assistance, including food distribution, medical clinics, clean water and sanitation facilities, income-generation activities, provision of shelter materials, and nutritional feeding centers. We spoke with officials from the NGOs and UN agencies implementing these activities and programs in Darfur. We also spoke with IDPs in the camps to obtain their perspectives on the provision of humanitarian assistance in the camps. Restrictions placed on our travel by the State Regional Security Officer in Khartoum because of security concerns limited the area in which we traveled and observed NGO and UN operations in Darfur. To examine the results of the humanitarian assistance activities, we reviewed the 15 final reports submitted by NGOs to USAID's Office of Foreign Disaster Assistance. We reviewed the original NGO proposals to identify the indicators used to measure performance, and we also reviewed USAID guidance for reporting. We compared the indicators included in the original proposals to the reported indicators in each final report and identified the indicators that were absent from the final reports. We interviewed USAID officials to identify USAID's efforts to monitor and evaluate NGO and UN activities in Darfur as well as efforts to motivate NGOs to submit final reports. We also reviewed the Office of Food for Peace performance review questionnaires submitted by implementing partners providing food aid for Darfur.
In addition, we spoke with an official from the USAID Office of Transition Initiatives to discuss an ongoing program review. We also reviewed UN Humanitarian Profile reports that provide an overview of humanitarian assistance from April 2004 to July 2006. These reports were also used to identify the IDP and affected resident population in Darfur, by month. According to UN officials and the profiles, NGOs and UN agencies operating throughout Darfur submitted the information from the reports to the UN on a monthly basis until January 2006, and now submit it quarterly. Although the data contained in the reports are self-reported, UN officials indicated that they confirm data to the extent possible and update the data each month. Furthermore, the UN Humanitarian Profiles are the only source of information regarding the total number of IDPs and affected residents in Darfur and the number of IDPs receiving assistance in each sector. We determined that the population data and the data regarding the population receiving assistance were reliable for the purposes of presenting a general overview of assistance in Darfur. To determine the obstacles and challenges facing NGOs and UN agencies—our second objective—we reviewed UN and USAID reports and cables discussing humanitarian operations and problems in Darfur. We interviewed USAID, UN, and NGO officials in Darfur to discuss the challenges they face in implementing assistance programs and activities in Darfur. We also met with officials from the Sudanese government's Humanitarian Assistance Committee to discuss the obstacles and concerns of NGOs and UN officials operating in Darfur and obtain the perspective and input of the Sudanese government regarding these issues. In order to identify African Union Mission in Sudan (AMIS) efforts and the operational challenges AMIS has faced—our third and fourth objectives—as well as resources available to AMIS to pursue its mandate, we used numerous African Union sources. We reviewed African Union Peace and Security Council communiqués, as well as reports prepared by the Chairperson of the African Union Commission that were submitted to the council. We also reviewed African Union-led reviews of AMIS, conducted in March and December 2005, as well as a UN-led assessment of AMIS performance conducted in August of that year. In February 2006, we met with AMIS leadership (military and civilian police) at AMIS headquarters in El Fasher and the following AMIS group sites in North and South Darfur—Zam Zam, Kabkabiya, Sarif Umra, Um Kadada, Nyala, Kass, and El Daein—where we discussed the AMIS mandate and AMIS activities at each location. We also discussed AMIS efforts with the U.S. representative to the African Union-led Ceasefire Commission, as well as U.S. representatives (military observers) in four locations. We were unable to travel to AMIS sites in West Darfur owing to security concerns. At African Union headquarters in Addis Ababa, Ethiopia, we met with senior African Union officials, including the Commissioner for the African Union's Peace and Security Council and the head of the Darfur Integrated Task Force in February 2006. To assess reports prepared by the AMIS Ceasefire Commission, we analyzed the contents of all publicly available reports from the African Union's Web site, www.africa-union.org/DARFUR/CFC.htm.
We also discussed the African Union's initiative in Darfur and external donor efforts with officials from the Departments of State (in Washington, D.C.; Khartoum and El Fasher, Sudan; and Addis Ababa, Ethiopia) and Defense (DOD). At State headquarters in Washington, we discussed the situation in Darfur and AMIS efforts with the following bureaus and offices: Administration; African Affairs; International Organization Affairs; Democracy, Human Rights, and Labor; Population, Refugees, and Migration; and War Crimes Issues. At DOD, we met with staff from the Office of the Secretary of Defense. Further, we reviewed UN Security Council resolutions, UN reports that addressed the situation in Darfur, and a UN August 2005 report that assessed AMIS operations. We met with officials from the UN's Department of Peacekeeping Operations in New York. We also met with European Union and UN officials at African Union headquarters in Addis Ababa. In addition, we met with officials from, and reviewed reports prepared by, expert and advocacy groups such as the International Crisis Group, Human Rights Watch, and Refugees International. Finally, we met with Sudanese government officials in Khartoum and Washington, D.C. To review the U.S. government's support for AMIS, we discussed this support with officials from the African Union and Departments of State and Defense. To identify contractor activities, we reviewed the contract documentation defining the terms for tasks performed by PAE Government Services, Inc. (PAE) in Darfur. Further, we reviewed PAE weekly situation reports submitted to State, which describe events related to camp construction and maintenance, and met with officials from PAE in Washington, D.C., and North and South Darfur. We also discussed PAE's efforts with an official who was working on contract in Darfur as State's Contracting Officer's Technical Representative. PAE and State officials accompanied us on our visit to AMIS camps, providing tours of each AMIS site, as well as to the logistics operating base in El Fasher and the forward operating base in Nyala, explaining the process for constructing and maintaining AMIS facilities. We determined that data obtained from PAE were sufficiently reliable for inclusion in our report. To identify the amount of U.S. funding that has been provided to construct and maintain AMIS camps, we spoke with State officials from the African Affairs and Administration bureaus. In particular, we had detailed discussions with a key official from the African Affairs Bureau who provided information on funding, broken out by fiscal year and funding source. The official independently prepared a calculation of U.S. funding for PAE efforts. All figures addressing State funding to support AMIS provided in the report are attributed to this State official and were not independently verified. However, after discussions with multiple State officials knowledgeable about State support for PAE, who cited this official as a key source within State for the information, and after a review of State's information by PAE officials, we determined that the funding information provided is sufficiently reliable for inclusion in our report with appropriate attribution. We discussed oversight regarding this funding with State officials in Darfur and Washington. We conducted our work from September 2005 to November 2006 in accordance with generally accepted government auditing standards.
In early 2003, Darfur rebels attacked Sudanese police stations and the airport in El Fasher, the capital of North Darfur. In El Fasher, the rebels destroyed numerous military aircraft, killed many soldiers, and kidnapped a Sudanese general. In response, the government armed and supported local tribal and other militias (the Janjaweed). Fighting between the rebel groups and the Sudanese military and other armed militias intensified during late 2003. The principal rebel groups are the Sudan Liberation Movement/Army (SLM/A) and the Justice and Equality Movement (JEM). In April 2004, there was limited humanitarian presence in Darfur, with only 202 humanitarian staff working in the region. In addition, some of the nongovernmental organizations (NGO) operating in Darfur provided only limited humanitarian assistance, since their primary focus was on development assistance. On December 21, 2004, Save the Children-UK announced that it was discontinuing humanitarian operations in Darfur following two incidents in October and December that resulted in the deaths of four staff members. Save the Children had operated in Darfur for 20 years. By the end of 2004, pledges and commitments for Darfur from international donors totaled more than $890 million. The United States committed or pledged more than $271 million (31 percent). The population of Darfur estimated to be affected by the violence, both internally displaced persons (IDP) and affected residents, rose to more than 3.2 million people, 1.9 million of whom were IDPs. The number of humanitarian aid workers in Darfur grew to 13,715, from 13 UN agencies and 82 NGOs. Total pledges and commitments for Darfur in 2005, from all donors, totaled almost $675 million. The United States committed or pledged nearly $365 million (54 percent). Following an escalation of violence in the Jebel Mara area of West Darfur, on January 25, 2006, GOAL, an international NGO, evacuated all staff in the region and abandoned operations. During the evacuation of staff, a helicopter crash resulted in the death of one GOAL aid worker. On April 28, 2006, the UN World Food Program (WFP) announced that funding shortages would force it to begin reducing the daily food rations for the people of Darfur in May. WFP indicated that the reduced rations would extend limited food stocks during the "hunger season," when needs are greatest. Owing to contributions by the U.S. and Sudanese governments, the rations were cut only to 84 percent of the daily requirement. As of June 2006, international pledges and commitments for Darfur in 2006 totaled almost $331 million. According to the UN, this amount was approximately $320 million less than the required funding for 2006. The United States committed almost $240 million (72 percent). On May 25, 2004, the African Union's Peace and Security Council issued a communiqué stressing the need for the three parties to the conflict—the government of Sudan, the SLM/A, and the JEM—to implement the April 2004 Humanitarian Ceasefire Agreement. Further, the Peace and Security Council authorized the initial deployment of an African Union Observer Mission to support the work of the newly created Ceasefire Commission. On October 20, 2004, the African Union's Peace and Security Council issued a communiqué that established an AMIS presence in Darfur of 3,320 personnel. These personnel were to include 2,341 military personnel, among them 450 observers, and up to 815 civilian police as well as appropriate civilian personnel.
Further, AMIS was given a specific mandate to monitor and observe compliance with the ceasefire agreement, assist in the process of confidence building, and contribute to a secure environment for the delivery of humanitarian relief. This was the first time the council called for a civilian police presence. On April 28, 2005, the African Union's Peace and Security Council issued a communiqué praising AMIS efforts and noting improvements where the mission was deployed in Darfur but concluding that the current force was overstretched. The communiqué increased AMIS's strength to a total of 6,171 military personnel, with an appropriate civilian component, including up to 1,560 civilian police personnel, for a total force of at least 7,731. From August 2005 on, 35 AMIS personnel were abducted; 4 Nigerian protection force soldiers were killed; and vehicles, communications equipment, weapons, and ammunition were lost. According to an African Union assessment of AMIS, these attacks on AMIS undermined the mission's credibility in the eyes of civilians and emboldened those who might target AMIS. In July 2006, 32 AMIS personnel were abducted. On March 10, 2006, the African Union's Peace and Security Council confirmed its January 2006 expression of support for a transition of AMIS to a UN operation. The council requested that the African Union Commission vigorously pursue its efforts toward reaching, as quickly as possible, the authorized AMIS strength of 7,731. On April 30, 2006, AMIS deployment reached 7,271 (755 military observers, 5,086 protection force troops, and 1,430 civilian police). In addition, another 155 personnel were serving as air crew or interpreters or in other roles. Of total AMIS deployment, 312 were women. Protection force troops came from Rwanda, Nigeria, Senegal, Gambia, and South Africa. AMIS deployment was below the authorized level of about 7,731, primarily because an expected contingent of South African troops was never deployed. On September 20, 2006, the African Union Peace and Security Council extended the mandate of AMIS from September 30, 2006, to December 31, 2006. The initial cease-fire agreement between the parties to the conflict (the Sudanese government and the SLM/A), mediated by the government of Chad, was signed; the agreement collapsed by December 2003. On April 8, 2004, the three parties to the conflict signed the "Agreement on Humanitarian Ceasefire on the Conflict in Darfur" in N'djamena, Chad. The parties agreed to, among other things, refrain from any act of violence or any other abuse of civilian populations. The parties further agreed to establish a cease-fire commission to, among other things, plan, verify, and ensure implementation of the cease-fire agreement provisions. On November 9, 2004, the three parties to the conflict signed two protocols in Abuja, Nigeria. (1) "Protocol Between the Government of the Sudan (GOS), the Sudan Liberation Movement/Army (SLM/A) and the Justice and Equality Movement (JEM) on the Improvement of the Humanitarian Situation in Darfur" commits the parties to, among other things, guarantee unimpeded and unrestricted access for humanitarian workers and assistance to reach all needy people throughout Darfur and take all steps required to prevent all attacks against civilians by any party or group, including the Janjaweed. The protocol also requests the UN to expand the number of human rights monitors in Darfur.
(2) "Protocol Between the Government of the Sudan (GOS), the Sudan Liberation Movement/Army (SLM/A) and the Justice and Equality Movement (JEM) on the Enhancement of the Security Situation in Darfur in Accordance with the N'djamena Agreement" commits the parties to, among other things, recommit themselves to ensuring an effective cease-fire by refraining from all hostilities and military actions, submit to the cease-fire commission all information needed to carry out its mandate, and release all persons detained in relation to the hostilities in Darfur. The Sudanese government also agreed to implement its stated commitment to neutralize and disarm the Janjaweed. On July 5, 2005, the three parties to the conflict signed the "Declaration of Principles for the Resolution of the Sudanese Conflict in Darfur." This declaration established 17 principles to guide future deliberations and constituted the basis for a settlement of the Darfur conflict. These principles address issues such as respect for the diversity of the Sudanese people, democracy, political pluralism, rule of law, independence of the judiciary, and freedom of the media; effective representation in all government institutions by the citizens of Sudan, including those from Darfur; equitable distribution of national wealth; provision of humanitarian assistance; return to places of origin for IDPs; rehabilitation/reconstruction of Darfur; and broad security arrangements. On May 5, 2006, the Sudanese government and the SLM/A faction with the largest military force signed the Darfur Peace Agreement. This agreement has provisions on power sharing (including the creation of the Senior Assistant to the President, the fourth-highest position in the Sudanese government, appointed by the President from a list of nominees provided by the rebel movements); wealth sharing (including the creation of a Darfur reconstruction and development fund that will receive $700 million in funds from the Sudanese government between 2006 and 2008); and security arrangements (including a requirement for verifiable disarmament of the Janjaweed militia by the Sudanese government). The smaller SLM/A faction and the JEM did not sign the agreement. In 2003 and 2004, USAID/Office of Foreign Disaster Assistance (OFDA) deployed field staff to Sudan to assess the extent of the Darfur crisis. In April 2004, responding to the growing humanitarian emergency, USAID/OFDA mobilized a Disaster Assistance Response Team. USAID continued a phased deployment of humanitarian personnel as official access and improved security allowed for its increased presence in Darfur. Secretary Powell visited Sudan, becoming the first U.S. Secretary of State to do so in 26 years. Powell met with Sudan's President Omar Al-Bashir, emphasizing the need to dismantle the Janjaweed to restore security to the region and enable IDPs to return home. The government of Sudan agreed to this objective as well as to removing restrictions on humanitarian aid and participating in a political resolution of the Darfur crisis facilitated by the African Union. Under a contract with the U.S. Department of State, and with assistance from another U.S. contractor, PAE, a U.S. company, began building camps for AMIS troops in Darfur. PAE initially constructed five camps (in El Fasher, Nyala, El Geneina, Tine, and Kabkabiya) for AMIS troops. Significant challenges were identified in building these camps, such as transporting materials to building sites and providing water to AMIS personnel. PAE eventually built a total of 32 AMIS camps.
On July 22, 2004, the U.S. House of Representatives and Senate unanimously passed separate resolutions [H.Con.Res. 467, 108th Cong. (2004); S.Con.Res. 133, 108th Cong. (2004)] declaring the crisis in Darfur to be genocide, based on articles of the Convention on the Prevention and Punishment of the Crime of Genocide of 1948. These resolutions declare that the government of Sudan has violated the Convention and call upon the member states of the United Nations to undertake measures to prevent genocide in Darfur from escalating further. The resolutions also commend the administration's efforts in seeking a peaceful resolution to the conflict and in providing humanitarian assistance and urge it to continue to lead an international effort to stop the genocide in Darfur. On September 9, 2004, Powell testified before the Senate Foreign Relations Committee and declared atrocities in Darfur to be genocide, based on evidence collected by the Department of State. Further, he stated that the government of Sudan and the Janjaweed were responsible and that the United States, as a contracting party to the Genocide Convention, would demand that the UN initiate a full investigation. President Bush made similar statements that day. On May 9, 2006, addressing the UN Security Council Ministerial on Sudan, Secretary of State Rice reaffirmed the administration's declaration that the violence in Darfur constitutes genocide. Additionally, Secretary Rice stated that the Darfur Peace Agreement is an opportunity to end the crisis in the region and allow people to return to their homes, emphasizing a role for UN troops to implement the peace agreement. Secretary Rice also stated that the United States had provided nearly all of the support that the WFP's mission in Darfur had received. On October 13, 2006, President Bush signed into law the Darfur Peace and Accountability Act of 2006, which imposes sanctions against persons responsible for genocide, war crimes, and crimes against humanity; supports measures for the protection of civilians and humanitarian operations; and supports peace efforts in Darfur. On December 5, 2003, the UN Under-Secretary-General in charge of the UN Office for the Coordination of Humanitarian Affairs stated, "The humanitarian situation in Darfur has quickly become one of the worst in the world." On July 3, 2004, the government of Sudan and the UN signed a joint communiqué in which the Sudanese government pledged to remove obstacles to humanitarian assistance in Darfur and committed to disarming the Janjaweed and other armed outlaw groups. The UN Security Council called for the Sudanese government to fulfill its commitment to facilitate humanitarian relief in Darfur and remove restrictions that might hinder humanitarian aid to Darfur. In addition, the council called for the government to disarm the Janjaweed militias and bring perpetrators of human rights and international humanitarian law violations and other atrocities to justice. On January 25, 2005, the International Commission of Inquiry, established by the UN, issued a report stating that the government of Sudan has not pursued a policy of genocide. However, the commission reported that the Sudanese government and the Janjaweed have committed international offenses such as crimes against humanity and war crimes that may be no less serious and heinous than genocide.
On March 24, 2005, the UN Security Council established the UN Mission in Sudan (UNMIS) after determining that the situation in Darfur continued to threaten international peace and security. UNMIS was mandated to support implementation of the Comprehensive Peace Agreement; to facilitate and coordinate the voluntary return of refugees and IDPs and humanitarian assistance; to contribute to international efforts to protect and promote human rights in Sudan; and to coordinate international efforts to protect civilians. The council also called on all Sudanese parties to take immediate steps to achieve a peaceful settlement to the Darfur conflict and take all necessary action to prevent further violations of human rights and international humanitarian law. On March 31, 2005, the UN Security Council referred the situation in Darfur to the Prosecutor of the International Criminal Court, taking note of the International Commission of Inquiry report on violations of international law and human rights in Darfur. On March 24, 2006, the UN Security Council called for preparatory planning for a transition of AMIS to a UN operation. The plan was to include options for reinforcing the Darfur peace effort through additional appropriate transitional assistance to AMIS, including assistance in logistics, mobility, and communications. On August 31, 2006, the UN Security Council commended the efforts of the African Union for the successful deployment of AMIS but reaffirmed its concern that ongoing violence in Darfur might further negatively affect the rest of the Sudan as well as the region. The UN Security Council expanded UNMIS's mandate and determined that UNMIS should deploy to Darfur. As of October 2006, the Sudanese government had refused a transition to a UN force in Darfur. The African Union Mission in Sudan (AMIS) evolved as the African Union authorized the incremental deployment of thousands of personnel to carry out its responsibilities in Darfur. In May 2004, after the three parties signed the April 2004 humanitarian cease-fire agreement, the African Union's Peace and Security Council authorized an observer mission to Darfur. This mission began operations in June 2004 with 60 military observers and 300 protection force soldiers as well as observers from the Sudanese parties. In July, the Peace and Security Council called for a comprehensive plan to enhance the effectiveness of the mission, including the possibility of transforming the mission into a full-fledged peacekeeping mission to ensure the effective implementation of the cease-fire agreement. In October 2004, in conjunction with the issuance of an African Union report that discussed the status of the mission and described the situation in Darfur, the council decided to enhance AMIS to a total of 3,320 personnel, including 2,341 military personnel (military observers and protection force troops), among them 450 observers; up to 815 civilian police personnel (the first time that a civilian police component was formally established); and appropriate civilian personnel.
The African Union Peace and Security Council provided AMIS II with the following specific mandate for its peace support efforts: (1) to monitor and observe compliance with the 2004 humanitarian cease-fire agreement; (2) to assist in the process of confidence building; and (3) to contribute to a secure environment for the delivery of humanitarian relief and, beyond that, the return of IDPs and refugees to their homes, and to contribute to the improvement of the security situation throughout Darfur. In working to meet this mandate, the council decided that AMIS II would, among other tasks, "protect civilians whom it encounters under imminent threat and in the immediate vicinity, within resources and capability, it being understood that the protection of the civilian population is the responsibility of the government of Sudan." In early 2005, the African Union decided to augment AMIS once again. In April 2005, the Peace and Security Council authorized increasing the size of AMIS to 6,171 military personnel, in addition to an appropriate civilian component, including up to 1,560 civilian police personnel (for a total of more than 7,700). This further expansion is referred to as AMIS II-E. These AMIS personnel operate throughout eight sectors in Darfur that have been established to help organize AMIS efforts. A Darfur Integrated Task Force was established at African Union headquarters in Addis Ababa, Ethiopia, to assist with planning, force generation, procurement and logistics, and administrative support and to interact with AMIS donors. The African Union did not call for an AMIS civilian police presence until AMIS operations were well under way. The civilian police component was added to AMIS in October 2004 to, according to a senior UN official, further the "rule of law" by working with Sudanese police. The European Union was a strong proponent of a civilian police component, and European Union officials told us that the civilian police gave European Union member states the opportunity to play a direct role in AMIS by providing police staff. Specifically, the role of the civilian police is, among other things, to establish and maintain contact with the Sudanese police, observe and report on Sudanese police service delivery, and monitor the security of IDPs. As of April 30, 2006, AMIS had 7,271 personnel in Darfur (755 military observers, 5,086 soldiers/protection force, and 1,430 civilian police). According to a UN official, AMIS deployed its troops much faster than the UN could have done (although UN efforts have higher standards regarding aspects of deployment such as required troop skills and equipment). The majority of AMIS soldiers have come from Rwanda and Nigeria, with additional troops from Senegal, Gambia, and South Africa. Military observers from more than 20 countries (numerous African countries and the United States, the European Union, and the three parties to the conflict) and civilian police are participating in AMIS. The total AMIS force deployed in Darfur is far below the authorized size of more than 7,700—according to African Union sources, primarily because expected South African troops were never fully deployed to Darfur. In January 2006, the African Union's Peace and Security Council officially declared its approval, in principle, for the transition of AMIS to a UN operation.
In March, the council reaffirmed this position, and in May it declared that "concrete steps should be taken to effect the transition from AMIS to a UN peacekeeping operation." The UN Security Council subsequently adopted a resolution commending AMIS's role in reducing large-scale, organized violence in Darfur; endorsing this African Union decision to transition AMIS to a UN peacekeeping operation; and stressing that a UN operation would have, to the extent possible, a strong African participation and character. In August 2006, the UN Security Council adopted a resolution expanding UNMIS's mandate and calling for an UNMIS deployment to Darfur. The mandate of AMIS expires on December 31, 2006.

Agency for Technical Cooperation and Development
Community, Habitat, Finance
Development Alternatives, Inc.
Harvard School of Public Health
International Committee of the Red Cross
United Methodist Committee on Relief
United Nations Development Program (UNDP)
United Nations Department of Safety and Security
United Nations Food and Agriculture Organization
United Nations High Commissioner for Refugees
United Nations Children's Fund
United Nations Office for the Coordination of Humanitarian Affairs

Following are GAO's comments on the Department of State's letter dated October 2, 2006. 1. We have added the U.S. contribution of training and equipping Rwandan and Nigerian battalions through the African Contingency Operations Training and Assistance (ACOTA) program to footnote 65. 2. We have added State's perspective regarding the quick deployment of AMIS troops, as well as a similar view expressed by a senior UN official working in Addis Ababa, Ethiopia. 3. The report's discussion of rebel group control over humanitarian access reflects the views of UN and NGO officials. Further, the report cites banditry and looting, as well as more violent acts, such as attacks and the killing of humanitarian workers. 4. Owing to scope and time limitations, our review of specific AMIS operations did not cover the period subsequent to the signing of the Darfur Peace Agreement in May 2006. However, we have added State's point regarding Sudanese government actions against parties that did not sign the agreement. 5. As noted above, our review did not assess AMIS operations subsequent to the signing of the Darfur Peace Agreement in May 2006, although we have identified instances of violence against the AMIS civilian police since that time, such as (1) the burning of a civilian police station and three vehicles by IDPs in Hassahisa IDP camp at Zalengei and (2) the killing of a civilian police language assistant and the attack on eight civilian police officers by IDPs in Kalma IDP camp at Nyala. Such incidents appear contrary to the portrayal of the relationship between the civilian police and IDPs provided here by State. 6. The report states that the improvements in mortality in Darfur have been attributed, in part, to the humanitarian assistance provided by the United States. 7. We have added this point to footnote 5. Following are GAO's comments on USAID's letter dated October 17, 2006. 1. The number of USAID staff in Darfur has been reduced from as many as 20 people to 6, although the crisis in Darfur has resulted in an increased number of IDPs and affected residents that require assistance and a greater number of NGOs and UN agencies operating in Darfur.
We understand that USAID does not always have control over staffing decisions and is sometimes limited by staff ceilings set by State. However, we believe that in the absence of complete reporting by NGOs, a reduction in USAID staff, complicated by the current inability of these staff to work in Darfur, affects USAID’s ability to provide comprehensive oversight of U.S.-funded humanitarian assistance in Darfur. 2. We determined that 6 of the 15 required final reports were not submitted by USAID partners and that most of the reports did not provide all required information. The lack of required reporting prevented USAID from fully monitoring NGO performance and measuring the impact of U.S. humanitarian assistance to Darfur. However, we report additional USAID monitoring and evaluation efforts, such as regular communication with NGOs, performed by USAID in Darfur. In addition, we note that such efforts can be limited by issues identified in our report, such as travel restrictions imposed by the Sudanese government and continuing insecurity throughout the region. We also have added information to the report noting recent USAID efforts to collect reports from its implementing partners that reportedly resulted in 100 percent compliance with quarterly reporting requirements in July 2006. USAID’s recent emphasis on collecting required reports may improve its ability to conduct oversight of U.S.-funded humanitarian operations in Darfur. 3. In late October, we identified a UN humanitarian profile for July 1, 2006, that had become available to the public. This document stated that the number of IDPs stood at 1.85 million as of July 1, 2006. In addition to the person named above, Emil Friberg (Assistant Director), Martin De Alteriis, Etana Finkler, Leslie Holen, Theresa Lo, Reid Lowe, Grace Lui, John F. Miller, and Chhandasi Pandya made key contributions to this report. AU-Led Joint Assessment Mission, 10-20 December 2005. African Union Mission in Sudan (AMIS), MAPEX Exercise AMIS Renaissance After Action Review, August 2005. The Assessment Mission to Darfur, Sudan, 10-22 March 2005: Report of the Joint Assessment Team. Human Rights Watch, “Sudan: Imperatives for Immediate Change, The African Union Mission in Sudan,” January 2006 (available at http://hrw.org/reports/2006/sudan0106). International Crisis Group, “The AU’s Mission in Darfur: Bridging the Gaps,” Africa Briefing No. 28, July 2005 (available at http://www.crisisgroup.org/home/index.cfm?l=1&id=3547). International Crisis Group, “The EU/AU Partnership in Darfur: Not Yet a Winning Combination,” Africa Report No. 99, October 2005 (available at http://www.crisisgroup.org/home/index.cfm?id=3766). Sally Chin and Jonathan Morgenstein, “No Power to Protect: The African Union Mission in Sudan,” Refugees International, November 2005 (available at http://www.refugeesinternational.org/content/publication/detail/7222). William G. O’Neill and Violette Cassis, “Protecting Two Million Internally Displaced: The Successes and Shortcomings of the African Union in Darfur,” Occasional Paper, The Brookings Institution–University of Bern Project on Internal Displacement, November 2005 (available at http://www.brookings.edu/comm/news/200511_au_darfur.htm).
In 2003, violent conflict broke out in Darfur, Sudan, between rebel groups, government troops, and government-supported Arab militias, known as the Janjaweed. The conflict has displaced about 2 million Darfurians and has affected over 1.9 million others severely enough that they require assistance. Since October 2003, the U.S. government has provided humanitarian assistance in Darfur and supported the African Union Mission in Sudan’s (AMIS) efforts to fulfill a peace support mandate. This report reviews (1) U.S. humanitarian assistance provided to Darfur and the challenges that have been encountered and (2) African Union efforts to fulfill its mandate and challenges that have affected these efforts. The United States has been the largest donor of humanitarian aid to Darfur, obligating nearly $1 billion from October 2003 through September 2006. Although more than 68 percent of this assistance consisted of food aid, U.S. assistance has also supported other needs, such as water and sanitation, shelter, and health care. Since 2003, humanitarian organizations have made significant progress in increasing the number of people in Darfur receiving aid. In addition, malnutrition and mortality rates in Darfur dropped, a trend that U.S. and other officials attribute in part to humanitarian assistance efforts. However, the U.S. Agency for International Development (USAID) and the entities providing U.S. humanitarian assistance have encountered several challenges that have hampered delivery of, or accountability for, humanitarian services in Darfur. These challenges include continued insecurity in Darfur; Sudanese government restrictions on access to communities in need; the timing of funding; and an incapacity to ensure monitoring of, and reporting on, U.S.-funded programs. AMIS has taken several positive actions in Darfur to pursue its mandate, although some actions have been incomplete or inconsistent. For example, to monitor compliance with a 2004 cease-fire agreement--one mandate component--AMIS investigated alleged cease-fire violations and identified numerous violations; however, the resulting reports were not consistently reviewed at higher levels or made fully available to the public to identify those violating the agreement. The U.S. government, via private contractors, provided about $280 million from June 2004 through September 2006 to build and maintain 32 camps for AMIS forces in Darfur, according to the Department of State. Numerous challenges have been identified by African Union or U.S. officials, among others, as negatively affecting AMIS's efforts in Darfur. These challenges include inadequacies in AMIS's organization, management, and capacity, such as inconsistent interpretation of the AMIS mandate; its relatively small forces; limited or poorly allocated resources; and a lack of intelligence regarding, and cooperation from, the parties to the conflict. A transition from AMIS to a UN peacekeeping operation is being considered, although the Sudanese government has rejected such a transition. A possible NATO-assisted review of AMIS operations has not been conducted. Meanwhile, insecurity and violence continue in Darfur.
This section provides information on the role and economic value of bees, bee population trends, factors affecting bee health, effects of bee losses on agriculture and ecosystems, and the roles and responsibilities of USDA’s ARS, FSA, NASS, NIFA, and NRCS, as well as EPA, in addressing bee health issues. Pollinators—including honey bees, other managed bees, and wild, native bees—are critical to our nation’s economy, food security, and environmental health. Honey bees—nonnative insects introduced to the United States in the 1620s by early settlers—are the most recognizable pollinators of hundreds of ecologically and economically important crops and plants in North America. In 2014, USDA reported that crops pollinated by honey bees directly or indirectly account for up to one-third of the U.S. diet. The most recent study on the value of pollinators to U.S. food and agriculture was published in 2012 and estimated that, as of 2009, the total value of crops that were directly dependent on honey bee pollination, including almonds, apples, and cherries, was almost $12 billion. The study estimated that, also as of 2009, the total value of crops that were indirectly dependent on bees, such as hay, sugar beets, asparagus, and broccoli, was more than $5 billion. In addition, according to a 2015 USDA-NASS report, honey bees produced more than $385 million worth of honey in 2014. Approximately 1,500 to 2,500 commercial U.S. beekeepers manage honey bee colonies, according to an estimate by the American Beekeeping Federation. Many commercial beekeepers travel across the country to provide pollination services for farmers’ crops and to support honey production. According to the 2014 USDA report, in 2012, almonds, sunflowers, canola seed, apples, cherries, and watermelons were among the top crops that were sources of pollination service fee revenue for beekeepers. About 1.6 million honey bee colonies—approximately 60 to 75 percent of all U.S. commercial honey bee colonies—provide pollination services to California’s almond orchards early each spring. Figure 1 shows the estimated acreage of crops for which beekeepers provide pollination services and the location of summer feeding grounds for commercially managed bees. In addition to honey bees, certain managed bees and wild, native bees also provide valuable pollination services. Whereas honey bees comprise an estimated 98 percent of managed bees in the United States, other managed bee species—including bumble bees, alfalfa leafcutting bees, and orchard mason bees—comprise the remaining 2 percent, according to a representative of the Pollinator Stewardship Council. These other managed bees pollinate alfalfa, almonds, apples, cherries, and tomatoes. Wild, native bee species may also pollinate agricultural crops. In 2009, crops directly and indirectly dependent on pollination by other managed bees; wild, native bees; and other insects were valued at almost $10 billion, according to the 2012 study of the value of pollinators to U.S. food and agriculture. In addition, a 2007 National Research Council study found that wild, native bees provide most of the pollination in natural plant communities, which contributes to valuable ecosystem services, including water filtration and erosion control. According to the White House Task Force’s 2015 Pollinator Research Action Plan, in 2006, some beekeepers in the United States began to notice unusually high mortality among their honey bee colonies over the winter months.
From 2006 to 2014, beekeepers who responded to the Bee Informed Partnership’s nongeneralizable national survey of managed honey bee colony losses reported that an average of about 29 percent of their bee colonies died each winter. Those losses exceeded the approximately 13 to 19 percent winter loss rate that beekeepers indicated in the surveys was acceptable. Furthermore, when winter losses are combined with losses at other times of the year, total annual losses can be higher. For example, a preliminary report from the Bee Informed Partnership indicated that beekeepers who responded reported total annual losses of more than 40 percent of colonies from April 2014 through March 2015. Whereas nongeneralizable data on short-term losses in honey bee colonies are available, the status of other managed bees and most of the wild, native bee species in the United States is less well-known. According to the White House Task Force’s strategy and research action plan, intensive public and private research in the United States and abroad over the past 8 years has shown that no single factor is responsible for the general problems in pollinator health, including the loss of honey bee colonies or declines in other bee populations. The task force stated that bee health problems are likely caused by a combination of stressors. Some of these stressors, in no particular order, include habitat loss, degradation, and fragmentation, including reduced availability of sites for nesting and breeding; poor nutrition, due in part to decreased availability of high-quality forage; pests (e.g., the mite Varroa destructor) and disease (e.g., viral, bacterial, and fungal diseases); pesticides and other environmental toxins; and migratory stress from long-distance transport. Continued losses of honey bees; other managed bees; and wild, native bees threaten agricultural production and the maintenance of natural plant communities. Commercial beekeepers are concerned that honey bee colony losses could reach an unsustainable level for the industry. According to a 2014 USDA report, the cost of honey bee almond pollination services is believed to have risen in connection with the increased cost of maintaining hives in the midst of industry-wide overwintering losses. Officials we interviewed from a commercial beekeeping organization said that, for beekeepers, meeting the growing demand for pollination services in agricultural production has become increasingly difficult, particularly as a result of bee colony losses. Although the number of managed honey bee colonies has been relatively consistent since 1996, ranging from about 2.4 to 2.7 million colonies, the level of effort by the beekeeping industry to maintain colony numbers has increased, according to the White House Pollinator Health Task Force’s strategy. For example, beekeepers face increasing production costs, which include sugar, protein, medications, and miticides (chemicals that kill the mites that can infest bee hives). Furthermore, when winter colony losses are high, beekeepers may compensate for these losses by splitting one colony into two, supplying the second colony with a purchased queen bee and supplemental food to build up colony strength. Using this method, the commercial honey beekeeping industry has generally been able to replenish colonies lost over the winter, but at a cost. These increased maintenance costs can result in increased rental fees for farmers renting the hives.
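The colony figures cited in this section can be cross-checked with simple arithmetic. The following is a minimal, illustrative sketch, not an official calculation; the colony counts and percentages are the report’s, while the variable names and the arithmetic itself are ours:

# Illustrative cross-check of figures cited above (not an official calculation).
total_colonies_range = (2_400_000, 2_700_000)  # U.S. colonies since 1996 (range)
almond_colonies = 1_600_000                    # colonies pollinating almonds each spring

# Implied share of all colonies working the almond bloom:
share_low = almond_colonies / total_colonies_range[1]
share_high = almond_colonies / total_colonies_range[0]
print(f"Implied share: {share_low:.0%} to {share_high:.0%}")  # ~59% to 67%

# Reported average winter loss (2006-2014 survey respondents) versus the
# high end of the loss rate respondents considered acceptable:
avg_winter_loss = 0.29
acceptable_ceiling = 0.19
print(f"Winter loss exceeds the acceptable ceiling by "
      f"{avg_winter_loss - acceptable_ceiling:.0%}")  # ~10 percentage points

The implied 59 to 67 percent share is broadly consistent with the 60 to 75 percent figure cited above, and the roughly 10-percentage-point gap between reported and acceptable winter losses illustrates the scale of the problem beekeepers described.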
Five USDA agencies within the scope of our review—NASS, ARS, NIFA, FSA, and NRCS—as well as EPA have specific roles and responsibilities with respect to addressing bee health issues. USDA has surveyed beekeepers in the United States since the late 1930s to determine the number of honey bee colonies and the amount of honey produced. The survey, now conducted by NASS, is called the Bee and Honey Inquiry. NASS maintains a list of beekeeping operations in the nation and has been surveying beekeepers in all states except Alaska since the 1970s to gather data on honey bee colonies, including the number of colonies producing honey, total pounds of honey produced, and total value of production by state for a production year. ARS, USDA’s largest research agency, conducts research within several of its laboratories that could protect bee health. NIFA, USDA’s primary agency providing research grants to universities, provides competitive grants to conduct research related to bee health and to disseminate the results through the Cooperative Extension System. The Current Research Information System (CRIS), which is managed by NIFA, contains information on ARS and NIFA research and outreach. CRIS provides documentation and reporting for agricultural, food science, human nutrition, and forestry research, education, and extension activities for USDA, including those related to bee health. FSA and NRCS oversee conservation programs that, among other things, help provide habitat for bees. FSA administers the Conservation Reserve Program (CRP), which implements long-term rental contracts with farmers to voluntarily remove certain lands from agricultural production and to plant species that will improve environmental health and quality, such as improving forage plantings for bees and other pollinators. The long-term goal of the program is to reestablish valuable land cover to help improve water quality, prevent soil erosion, and reduce loss of wildlife habitat. NRCS administers the Environmental Quality Incentives Program (EQIP), which implements short- to long-term contracts with farmers to voluntarily implement practices to conserve natural resources and deliver environmental benefits, such as creating wildlife habitat that may benefit bees. In addition, NRCS administers components of the Agricultural Conservation Easement Program, in which plantings may benefit bees or other pollinators. NRCS has primary responsibility for providing to landowners the technical assistance needed to plant the pollinator-friendly habitats. NRCS assists farmers through a network of staff at headquarters, state, and county offices. In addition to supporting overall pollinator habitat across the nation, FSA and NRCS are focusing CRP and EQIP pollination resources on five upper Midwest states (Michigan, Minnesota, North Dakota, South Dakota, and Wisconsin) that are home to a significant percentage of honey bee colonies during the summer months. Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), EPA is responsible for regulating pesticides, including those used on crops and other plants and those used by beekeepers to combat bee pests. As part of this responsibility, EPA reviews applications from pesticide manufacturers seeking to obtain a registration for new pesticides or new uses of existing pesticides. Under FIFRA, pesticide registrants are required to report to EPA any information related to known adverse effects to the environment caused by their registered pesticides.
In addition, the Food Quality Protection Act of 1996 amended FIFRA to require that EPA begin a review of the registrations of all existing pesticide active ingredients. As further amended in 2007 by the Pesticide Registration Improvement Renewal Act, FIFRA requires that all reviews be completed by October 2022. According to EPA’s website, the FIFRA requirement applies to about 1,140 pesticides. EPA has chosen to review the registration of all of these pesticides in about 740 “cases.” A case may cover multiple pesticide active ingredients that are closely related in chemical structure and toxicological profile. The Pesticide Registration Improvement Act of 2003 (PRIA) amended FIFRA to require that EPA issue annual reports containing a review of its progress in carrying out its responsibilities for reviewing new and registered pesticides. Other agencies, including some within USDA, also have programs related to bee health. For example, USDA’s Forest Service has conducted some research and monitoring and conserves habitat to protect bee populations. The U.S. Geological Survey (USGS) within the Department of the Interior (Interior) has monitored wild, native bee populations. Interior’s National Park Service and the National Science Foundation have also funded research on bee health, and Interior’s Bureau of Land Management is making changes to land-management programs by incorporating native, pollinator-friendly plants in its management practices. Five selected USDA agencies conduct monitoring, research and outreach, and conservation to protect bees, but limitations within those efforts hamper the agencies’ ability to protect bee health. In 2015, USDA agencies increased honey bee colony monitoring to better estimate honey bee colony losses nationwide, but as a co-chair of the White House Pollinator Health Task Force with EPA, the department has not worked with task force partners to coordinate a native bee monitoring plan. In addition, USDA has conducted and funded research and outreach, primarily by ARS and NIFA, on the health of different categories of bees, including honey bees and, to a lesser extent, other managed and wild, native bees, but CRIS, which tracks USDA-funded research and outreach, is not currently designed to enable tracking or searching of projects by bee category. Furthermore, USDA’s FSA and NRCS have increased funding and taken other actions to promote bee habitat, but neither agency has a method to count all of the acres that landowners have restored or enhanced to benefit bees and other pollinators, and limitations in their evaluation of those actions may hinder their conservation efforts. USDA agencies have taken some actions to increase monitoring of honey bees, other managed bees, and wild, native bees, but USDA, which co-chairs the White House Pollinator Health Task Force with EPA, has not worked with its partners on the task force to coordinate a native bee monitoring plan. In April 2015, NASS, which conducts USDA bee surveys, initiated colony loss surveys to provide quarterly estimates of honey bee colony losses in the United States. NASS officials told us that the results of these surveys will improve data on colony losses from prior USDA-funded surveys. According to the task force’s strategy, federal agencies plan to use data from these surveys to assess progress toward the strategy’s goal of reducing winter honey bee colony losses to no more than 15 percent by 2025.
USDA has conducted surveys of beekeepers in the United States to track the number of honey bee colonies in the country since the late 1930s, but those surveys have not gathered beekeepers’ observations or data about bee health problems. Before NASS’s new surveys, NIFA provided most of the funding for the Bee Informed Partnership to survey beekeepers about colony losses and honey bee health from 2006 through 2015. The surveys showed that, on average, about 29 percent of respondents’ honey bee colonies have been dying over the winter, but the results cannot be generalized beyond the survey respondents. The partnership has used a variety of methods to reach out to all beekeepers in the country and in recent years received responses from over 6,000 beekeepers. However, the partnership has not calculated or estimated response rates to the surveys and has not reported whether nonrespondents might differ from the respondents in terms of survey answers. Because of this, the results cannot represent beekeepers in general. In a letter to the Office of Management and Budget (OMB) commenting on the new NASS survey, the partnership stated that NASS is well-equipped to take over the honey bee colony loss surveys with its new quarterly and annual surveys. According to NASS officials, improvements will be possible in the new NASS surveys in part because NASS maintains a comprehensive list of beekeepers from which it can select a random sample. According to an agency document and official, the quarterly survey will capture data from beekeeping operations with five or more colonies, and operations with fewer than five colonies will receive one annual survey in December. NASS officials said that their estimates of U.S. colony losses during 2015 will be available in May 2016. NASS has also added questions to the annual Bee and Honey Survey on the costs associated with colony maintenance, which may include costs associated with colony losses. In addition, USDA’s Animal and Plant Health Inspection Service (APHIS) has coordinated a national survey of honey bee pests and diseases annually since 2009 with the University of Maryland and ARS. However, that survey does not provide estimates of colony losses in the United States. According to NASS officials, NASS does not conduct surveys to estimate populations or colony losses of other managed bees, such as bumble bees, alfalfa leafcutting bees, and orchard mason bees, because NASS does not consider them to be within the scope of its responsibilities for farm livestock commodities. USDA’s ARS and NIFA conduct and fund limited monitoring activities in agricultural settings to estimate populations and health issues for these other types of managed bees. However, the research action plan established as a priority engaging NASS in collecting data on the commercial sales of nonhoney bee pollinators to understand the economic value of alternative pollinators. To address this priority, NASS included in a new survey on the cost of pollination—which largely focuses on honey bees—questions on the cost to agricultural producers for products such as wildflowers and pollination by other managed bees and native bees. NASS began data collection for this new survey in December 2015. USDA agencies, including ARS and NIFA, have conducted and supported limited monitoring of wild, native bees, according to USDA documents and officials.
For example, one NIFA-funded project at Pennsylvania State University begun in 2010 seeks to establish baseline biodiversity and abundance data for native bees in and adjacent to Pennsylvania orchards, determine which species are pollinators, and quantify their relative significance and economic importance, according to the project summary in CRIS. In addition, in 1997, ARS’s laboratory in Logan, Utah, began monitoring wild, native bees in parks, forests, and other areas in the United States as part of its efforts to develop alternative pollinators for U.S. agriculture, according to ARS scientists. In one project, ARS has annually conducted surveys of bumble bee populations for 5 to 8 years at five sites in Nevada, Oregon, and Utah. The goal is to provide insight into natural population dynamics of native bees in native habitat and identify bumble bee population trends by species on the basis of 10 years of surveys. According to the project description, bumble bee declines have been documented over the last decade, but long-term studies of bumble bee community dynamics are lacking, and such monitoring will help determine whether a fluctuation in a bumble bee population is a natural cycle or something unusual. In its 2007 report on the status of pollinators, the National Research Council stated that wild, native bees are arguably the most important and least studied groups of pollinators. The report recommended establishing a baseline for long-term monitoring and a coordinated federal approach with a network of long-term pollinator-monitoring projects that use standardized protocols and joint data gathering and interpretation. The report also stated that pollinator monitoring programs in Europe have effectively documented declines in pollinator abundance, but there is no comparable U.S. monitoring program. Stakeholders from pesticide manufacturing, university research, and conservation/environmental groups we interviewed said that USDA should take additional actions to monitor wild, native bees because current monitoring is insufficient and will not provide trend data on these bee populations. Stakeholders from some groups suggested that USDA and other agencies, such as USGS, should coordinate federal monitoring efforts. A stakeholder from a university said that USDA should develop a coordinated assessment policy for native bees to provide information on their status because, without such a policy, agencies will not know which species are declining, endangered, or extinct. The 2014 presidential memorandum on pollinators called for the White House Task Force to assess the status of native bees and other pollinators. The subsequent White House Task Force strategy and research action plan state that native bees are affected by habitat loss and degradation, and that there is strong evidence, for some species, that such factors have led to population declines. For example, the research action plan states that collapses in bumble bee species have been statistically documented, but little is known about trends for wild, native bees, most of which are solitary, rather than social, bees. The research action plan also states that (1) the scope of native bee monitoring is limited by available funding, (2) assessments of native bees’ status rely on disparate historical collection data and limited contemporary surveys, and (3) a survey of bees in various ecosystems is needed to determine the status of native pollinators.
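To make concrete what multiyear surveys such as ARS’s bumble bee monitoring aim to produce, the sketch below shows one simple way annual site counts could be turned into a per-species trend estimate using an ordinary least-squares slope. It is a hypothetical illustration only: the species labels and counts are invented, and real analyses of survey data would need to account for factors such as detection probability and site-to-site variation.

# Hypothetical trend estimation from repeated annual survey counts.
def ols_slope(years, counts):
    # Ordinary least-squares slope of counts on years.
    n = len(years)
    mean_y = sum(years) / n
    mean_c = sum(counts) / n
    numerator = sum((y - mean_y) * (c - mean_c) for y, c in zip(years, counts))
    denominator = sum((y - mean_y) ** 2 for y in years)
    return numerator / denominator

years = list(range(2006, 2016))  # ten annual surveys
surveys = {  # invented counts for two hypothetical species at one site
    "Bombus species A (hypothetical)": [34, 31, 29, 24, 22, 20, 18, 17, 15, 14],
    "Bombus species B (hypothetical)": [12, 15, 11, 14, 13, 12, 16, 13, 14, 15],
}

for species, counts in surveys.items():
    slope = ols_slope(years, counts)
    trend = "declining" if slope < 0 else "stable or increasing"
    print(f"{species}: {slope:+.1f} individuals per year ({trend})")

A steady negative slope, as for the first hypothetical species, is the kind of signal that distinguishes a long-term decline from the year-to-year fluctuation shown by the second.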
The White House Task Force’s research action plan identified several priority actions, with corresponding lead and support agencies responsible for different aspects of the monitoring. For example, the research action plan identifies ARS, USGS, and the Fish and Wildlife Service as three of the lead agencies for the priority actions to develop baseline status data and to assess trends in pollinator populations. The research action plan also identifies NIFA, NASS, the National Science Foundation, the Forest Service, and the National Park Service as primary support agencies for these priority areas. Although the research action plan identifies which agencies have responsibility for monitoring pollinators, it does not identify the development of a mechanism, such as a monitoring plan, to coordinate the efforts of those agencies related to native bees. As of September 2015, USDA did not have plans to work with task force members to coordinate development of such a mechanism for wild, native bees. Some officials said that USDA has not coordinated with other task force agencies to develop a wild, native bee monitoring plan because they were developing the broader task force strategy. The research action plan also does not define and articulate the common outcome or identify specific roles and responsibilities for each lead or support agency. Key practices for agency collaboration that we identified in an October 2005 report call for agency staff to work together across agency lines to define and articulate the common federal outcome or purpose they are seeking to achieve that is consistent with their respective agency goals and mission. Another key practice we identified calls for collaborating agencies to work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. In addition, in a February 2014 report, we identified key practices for agency collaboration that call for establishing shared outcomes and goals that resonate with, and are agreed upon by, all participants, which is essential to achieving outcomes in interagency groups. Furthermore, although the research action plan mentions stakeholders and partnerships, it does not articulate how they will be included in addressing priority actions related to monitoring native bees. Another key practice, which we identified in a September 2012 report, calls for ensuring that the relevant stakeholders have been included in the collaborative effort. This collaboration can include other federal agencies, state and local entities, and private and nonprofit organizations. By developing a mechanism, such as a monitoring plan for wild, native bees, that would (1) establish roles and responsibilities of lead and support agencies and their shared outcomes and goals and (2) obtain input from relevant stakeholders, there is better assurance that a coordinated federal effort to monitor bee populations will be effective. One senior USDA official stated that coordinating with the other task force agencies to develop a wild, native bee monitoring plan would be very important for gathering data to show the status of wild, native bees in the future. Key USDA and USGS officials with bee-related management responsibilities agreed that developing such a monitoring plan would help them establish a consistent approach across their agencies.
The officials also said that USDA and other agencies should establish a team of federal scientists to coordinate the development of a federal monitoring plan for wild, native bees that would establish monitoring goals and standard methods and involve state and other stakeholders. Some USDA and USGS officials said that without a team to coordinate a monitoring plan, individual agency efforts may be ineffective in providing the needed information on trends in wild, native bees in the United States. USDA has conducted and funded research and outreach, primarily by ARS and NIFA, on the health of different categories of bees, including honey bees and, to a lesser extent, other managed and wild, native bees, but CRIS, which tracks USDA-funded research and outreach, does not currently facilitate tracking or searching of projects by bee category. ARS’s honey bee projects have focused on many health concerns. For example, the ARS laboratory in Baton Rouge, Louisiana, has focused for many years on breeding honey bees that are resistant to Varroa mites. Also, ARS’s laboratory in Beltsville, Maryland, has conducted research to develop management strategies for diagnosing and mitigating disease, reducing the impacts of pesticides and other environmental chemicals, and improving nutrition. ARS’s laboratory in Logan, Utah, is identifying how farmers may use different pollinators, including managed and wild, native bees. This research includes developing methods for mass production, use, and disease control for a selection of bees. ARS scientists have regularly disseminated the results of their research at national, regional, state, and local bee-related conferences and events. ARS officials have also conducted outreach at meetings to provide information to commodity growers, such as the Almond Board of California. One ARS scientist noted that he had attended 27 state and other types of beekeeper meetings over the past 5 years. Another ARS scientist told us that he spends about 25 percent of his time conducting outreach with beekeepers. In addition, ARS scientists have published dozens of articles summarizing their research results in scientific journals. From fiscal year 2008 through fiscal year 2015, ARS obligated $88.5 million for projects focused on bee health and $1.6 million for projects on the effect of pollination by different types of bees on crop or plant production. Of the $88.5 million obligated, our analysis determined that $72.6 million was for projects primarily focused on honey bee health, an additional $6.3 million was for projects with a combined focus on the health of honey bees and other bees, and $9.6 million was for projects focused only on other managed bees or wild, native bees. According to ARS officials, all ARS funding for research on wild, native bees has been for the purpose of developing new uses for managed bees in commercial agriculture. Unlike ARS, which itself conducts research, NIFA provides funds for research through grants. For fiscal years 2008 through 2014, NIFA’s competitive grants for research on bee health were largely focused on honey bees, with some efforts focused on managed and wild, native bees. For example, NIFA obligated funds for a 2012 grant to a team of scientists and outreach specialists at Michigan State University, the University of California-Davis, and other institutions that works with growers to develop best practices for pollinator habitat enhancement and farm management practices to bolster managed and wild, native bee populations.
The project is examining the performance, economics, and farmer perceptions of different pollination strategies in various fruit and vegetable crops, according to the project website. These strategies include complete reliance on honey bees, farm habitat manipulation to enhance suitability for native bees, and use of managed, native bees alone or in combination with honey bees. For fiscal years 2008 through 2014, NIFA obligated $29.9 million for competitive grant projects focused primarily or partially on bee health and $11.6 million for projects focused on pollination effectiveness. Of the $29.9 million, our analysis of individual grant project objectives and descriptions determined that NIFA provided $16.7 million to projects on honey bee health, $9.8 million to projects on the health of honey bees and other bees, and $3.4 million to projects on the health of wild, native bees. In addition to funding competitive grants, NIFA provides support for bee research at land-grant institutions through capacity grants to the states on the basis of statutory formulas. From fiscal year 2008 through fiscal year 2014, these institutions expended $10.7 million in NIFA grants for research related to bees. Furthermore, state institutions have used NIFA capacity grants to support bee-related extension and education activities through the Cooperative Extension System, such as teaching best management practices to beekeepers, according to an agency budget official. However, because NIFA and its partners do not track capacity grant funding related to extension activities by subject, we were not able to determine the amount of extension funding dedicated to bee-related activities. In addition, according to estimates by the Economic Research Service, overall research funding declined in inflation-adjusted dollars from 1980 to 2014, which may have resulted in a reduction in the number of cooperative extension bee specialists. According to NIFA officials, about 28 bee specialists are currently supported by the Cooperative Extension System in the United States and its territories. That number has declined from an estimated 40 extension bee specialists in 1986, largely due to funding reductions. In addition, according to NIFA officials, the reduction in extension funding may have reduced expertise in related areas, including Integrated Pest Management (IPM), which focuses on long-term prevention of pests or their damage through a combination of techniques, such as biological control, habitat manipulation, modification of cultural practices, and use of resistant varieties, paired with monitoring to reduce unnecessary pesticide applications. IPM extension agents routinely advise farmers on alternatives to pesticides and pesticide application methods that reduce risk to bees and other pollinators. USDA’s CRIS provides overall funding data and descriptions of bee-related research and outreach but does not facilitate tracking projects and funding by the categories of bees addressed by the White House Task Force’s strategy and research action plan. In addition, the research action plan identifies key research needed to fill knowledge gaps for honey bees; other managed bees; wild, native bees; and other pollinators. However, the three categories for bees and other pollinators used in CRIS to code USDA projects are “honey bees,” “bees, honey and other pollinators,” and “other pollinators,” so bee-related research projects that could help fill the identified knowledge gaps may not be easily identified in CRIS.
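To illustrate the mismatch between the two category schemes, the sketch below recodes projects carrying CRIS’s three existing codes (quoted above) into the four categories used in the research action plan. It is hypothetical: the sample project records and keyword rules are invented, and an actual recode would be defined by CRIS administrators rather than by text matching.

# Hypothetical recoding of CRIS's three pollinator codes into the four
# task-force-aligned categories. Sample records and rules are invented.
TASK_FORCE_CATEGORIES = [
    "honey bees", "other managed bees", "wild, native bees", "other pollinators",
]

def recode(cris_code, description):
    # Use the existing CRIS code and, where it is ambiguous, words from
    # the project description to pick a task-force-aligned category.
    text = description.lower()
    if cris_code == "honey bees":
        return "honey bees"
    if cris_code == "bees, honey and other pollinators":
        if any(word in text for word in ("bumble", "leafcutting", "mason")):
            return "other managed bees"
        if "native" in text or "wild" in text:
            return "wild, native bees"
        return "honey bees"
    return "other pollinators"

projects = [  # invented examples
    ("honey bees", "Breeding Varroa-resistant honey bee stock"),
    ("bees, honey and other pollinators", "Disease control for alfalfa leafcutting bees"),
    ("bees, honey and other pollinators", "Baseline surveys of wild, native bees in orchards"),
]
for code, description in projects:
    category = recode(code, description)
    assert category in TASK_FORCE_CATEGORIES
    print(category, "<-", description)

As the middle CRIS code shows, a single code can hide three different bee categories, which is why updating the codes themselves, rather than relying on searches of project descriptions, would make tracking more reliable.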
For example, NIFA guidance on reviewing certain competitive grant applications states that national program leaders must check CRIS to determine if the proposed work has already been funded by NIFA or ARS and to ensure that it is not unnecessarily repeating work not yet published. In addition, ARS guidance directs the agency’s scientists to search CRIS for potentially duplicative projects when preparing project plans. Because projects may have multiple objectives, identifying and tracking completed and ongoing bee-related research by category of bee would be time-consuming. Both the NIFA staff and the researchers would have to search up to three different CRIS categories and then review the descriptions and the multiple objectives for all projects with those codes. By updating the categories of bees in CRIS to reflect the categories of bees discussed in the White House Task Force’s strategy and research action plan, USDA could increase the accessibility and availability of information about USDA-funded research on bees. Senior USDA officials said that CRIS would be more useful within the department and to others seeking to identify bee-related research projects and project funding by topic if USDA revised it to indicate the categories of pollinators that are consistent with the research action plan. ARS and NIFA officials agreed that improvements to CRIS could help managers track research spending over time by the categories of bees identified in the research action plan. One NIFA official estimated that revisions to CRIS could be done cost-effectively using minimal staff time. FSA and NRCS have taken many actions to promote bee habitat conservation since 2008, but limitations in research, tracking of pollinator habitat, and evaluation of the agencies’ conservation efforts could hinder those efforts. The Farm Bill of 2008 authorized USDA to encourage the use of conservation practices that benefit native and managed pollinators and required that USDA review conservation practice standards to ensure the completeness and relevance of the standards to, among other things, native and managed pollinators. In August 2008, and again in May 2015, NRCS, in partnership with the Xerces Society and San Francisco State University, published guidance identifying several conservation programs, including CRP, EQIP, and NRCS’s Conservation Stewardship Program (CSP), that could be used to promote pollinators on working lands. This guidance identified 37 practices to create or enhance pollinator habitat by providing more diverse sources of pollen and nectar, and shelter and nesting sites, among other things. According to FSA and NRCS officials, CRP and EQIP are the largest USDA private land conservation programs benefiting pollinators. Participants voluntarily sign up or enroll in FSA or NRCS conservation programs and in specific practices within those programs. As of August 2015, FSA had over 132,000 acres enrolled in pollinator-specific CRP practices, with a remaining allocation of 67,000 acres that could be enrolled under these practices. In 2014, FSA announced an additional $8 million in incentives to enhance CRP cover crops to make them more pollinator-friendly. FSA is offering incentives to CRP participants in the five states that are home to most honey bee colonies during the summer—Michigan, Minnesota, North Dakota, South Dakota, and Wisconsin—to establish pollinator habitat.
According to an FSA official, because CRP participants began to implement habitat enhancements in fiscal year 2015, FSA does not yet have information on the number of acres of habitat established. Also, within CRP, the State Acres For Wildlife Enhancement (SAFE) initiative allows agricultural producers to voluntarily enroll acres in CRP contracts for 10 to 15 years. In exchange, producers receive annual CRP rental payments, incentives, and cost-share assistance to establish, improve, connect, or create higher-quality habitat. As of November 2015, the SAFE initiative was providing pollinator habitat in Michigan, Ohio, and Washington. For example, the goal of the Michigan Native Pollinators SAFE project is to enroll 2,500 acres of enhanced habitat over the next 5 years to benefit native pollinators. In addition, in fiscal year 2014, NRCS provided more than $3.1 million in technical and financial assistance to EQIP participants in the five states that are home to most honey bee colonies during the summer to implement conservation practices that would provide pollinator habitat. This funding led to over 220 contracts with participants to establish about 26,800 acres of pollinator habitat, according to NRCS data. NRCS made $4 million available in fiscal year 2015 through EQIP for honey bee habitat. NRCS also funds other conservation programs that can benefit bees and other pollinators. For example, the CSP provides financial and technical assistance for participants whose operations benefit pollinators. From 2012 through 2014, 17,500 acres were enrolled in one beneficial CSP practice intended to improve habitat for pollinators and other beneficial insects. Another CSP practice for grazing management may benefit pollinators, but the acreage that benefits pollinators is unknown, according to an NRCS official. In addition, NRCS offices in several states, including Montana and South Dakota, seek to benefit pollinators with upland habitat restoration funded by the Wetlands Reserve Program. NRCS and FSA have taken steps to provide information to field offices, agricultural producers, and others that is useful for pollinator habitat conservation programs. For example, in collaboration with the Xerces Society and academic partners, NRCS has revised and expanded lists of plants that benefit bees and technical guidance for conserving pollinator forage. The NRCS Conservation Innovation Grants program has supported several projects across the country designed to demonstrate the value of habitat for pollinators, as well as to expand and improve NRCS’s capacity to establish and monitor high-quality bee forage sites. The task force’s strategy notes that FSA is working collaboratively with NRCS to promote the use of more affordable, pollinator-friendly seed mixes on CRP land. Some NRCS Plant Materials Centers—which evaluate plants for conservation traits and make them available to commercial growers who provide plant materials to the public—have pollinator forage demonstration field trials under way to determine and demonstrate the effectiveness of forage planted for pollinators. In addition, FSA, NRCS, and Interior’s USGS and Fish and Wildlife Service have funded a website that provides information on plant-pollinator interactions to help agencies improve pollinator seed mixes for programs such as CRP and EQIP, according to a USGS official. USGS manages this website, known as the Pollinator Library, to provide information on the foraging habitat of pollinating insects with the goal of improving their habitat.
The Pollinator Library is intended to help users determine which flowers various insects, including native bees, prefer. The website includes a search feature so users can determine, for example, what types of pollinators have been found on different plant species, by state and land type (such as CRP land). Knowing which flowers pollinators prefer is useful to agencies creating seed mixes for CRP and EQIP habitat enhancement efforts. While USDA agencies have taken steps to improve bee habitat, according to USDA officials and documents, limitations related to (1) research on bee habitat and forage, (2) tracking acres of restored or enhanced pollinator habitat, and (3) evaluating NRCS and FSA conservation efforts could hinder conservation efforts. Research on Habitat and Forage As part of the task force’s strategy and research action plan, federal officials evaluated completed research and determined that additional research on bee forage and habitat is needed to support NRCS, FSA, and other entities’ conservation efforts. The task force’s research action plan notes that there is much more to learn about the relationships between plants and pollinators, including identifying habitat with the greatest potential for pollinator benefits; developing locally adapted plant mixes to provide resources for pollinators throughout the year; designing a means for properly collecting, processing, storing, and germinating sufficient seeds for restoration; and developing new concepts and techniques to understand how to establish a broad mix of plants required for restoration based on different factors (e.g., cost-effectiveness and site properties). In addition, the research action plan identifies priority research actions for federal agencies. For example, one priority action is developing a science-based plant selection decision support tool to assist land managers. According to the research action plan, this tool would help land managers use the most effective and affordable plant materials currently commercially available for pollinator habitat in wildland, agricultural, or urban areas. The strategy for carrying out this action in 2 to 3 years, according to the research action plan, is to identify existing science capacity to produce the decision-support tool. The research action plan identifies ARS, NRCS, and USGS as able to provide collaborative leadership for this action within the Plant Conservation Alliance (PCA). Another priority action is developing a system for monitoring the use of native plant materials. According to the research action plan, the strategy for this action is, within 2 years, to develop an interagency, online, searchable database to collect and analyze relevant data efficiently (e.g., species, plant material type, location, acreage, year, establishment, impacts on pollinators) to evaluate the use of native plant materials. The research action plan identifies ARS and NRCS as sharing collaborative leadership within the PCA for this action with the U.S. Forest Service and Interior’s USGS and Bureau of Land Management. Tracking Acres of Restored and Enhanced Pollinator Habitat In response to the June 2014 presidential memorandum on pollinators, the task force established an overarching goal on pollinator habitat acreage of restoring and enhancing 7 million acres of land for pollinators over the next 5 years through federal actions and public-private partnerships. Under the task force’s strategy, USDA agencies, including FSA and NRCS, are to contribute to this goal.
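For a rough sense of scale, the EQIP figures cited earlier imply modest per-contract acreages and per-acre costs, and the tracking problem discussed next can be seen in a simple tally. The sketch below is illustrative only: the EQIP totals are the report’s, while the enrollment records and practice names are invented.

# Illustrative arithmetic from the fiscal year 2014 EQIP figures cited above.
eqip_funding_fy2014 = 3_100_000  # technical and financial assistance, 5 states
eqip_contracts = 220
eqip_acres = 26_800
print(f"Average acres per contract: {eqip_acres / eqip_contracts:,.0f}")       # ~122
print(f"Average assistance per acre: ${eqip_funding_fy2014 / eqip_acres:,.0f}") # ~$116

# A tally over pollinator-specific practices misses acres enrolled for other
# purposes (erosion control, water quality) that also benefit pollinators --
# the tracking gap discussed below. Records here are invented.
enrollments = [
    {"practice": "pollinator habitat planting (hypothetical)", "acres": 12_400,
     "pollinator_specific": True},
    {"practice": "erosion-control cover (hypothetical)", "acres": 48_000,
     "pollinator_specific": False},
]
trackable = sum(e["acres"] for e in enrollments if e["pollinator_specific"])
print(f"Acres countable toward the 7-million-acre goal under current tracking: "
      f"{trackable:,}")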
FSA and NRCS are able to track acres of pollinator habitat restored and enhanced under pollinator-specific initiatives and practices, according to agency officials. However, they are unable to track acres on which landowners implement practices for other conservation purposes, such as erosion control, improved water quality, or wildlife habitat, that may also have an additional benefit for pollinators, according to agency officials. According to FSA and NRCS officials, developing a method for tracking most acres with conservation practices benefiting pollinators will be time-consuming and may require some form of estimation. For example, according to FSA officials, the agency may be able to estimate acres of pollinator habitat using information it has on the types of plants landowners have planted. Nevertheless, by developing an improved method, within available resources, to track conservation program acres that benefit pollinators, FSA and NRCS would be better able to measure their contribution to restoring and enhancing the acres called for by the task force strategy’s goal. Both agencies agreed that developing an improved method for tracking acres on which pollinator habitat has been restored or enhanced would provide valuable information. As of November 2015, the agencies had begun to discuss and consider methods they might use to track acres on which pollinator habitat has been restored or enhanced but had yet to develop an improved method. Evaluating FSA and NRCS Conservation Efforts USDA has funded two evaluations of the effectiveness of FSA and NRCS conservation efforts related to pollinator habitat. First, in 2013, FSA and NRCS began jointly supporting a USGS study to evaluate the effect of CRP and EQIP plantings on honey bee health and productivity in five Midwestern states—Michigan, Minnesota, North Dakota, South Dakota, and Wisconsin. According to a January 2015 USGS progress report, the monitoring will quantify the effect USDA conservation lands have on honey bee health and productivity. For example, USGS is comparing the health of honey bee colonies in areas dominated by row crops with the health of colonies located in areas with significant CRP and pasture acreage. The evaluation has begun to show which weeks or months may have a shortage of blooming forage. USGS plans to expand this evaluation in 2016 to additional sites in Michigan and Wisconsin and add a demonstration project to monitor the effect of CRP and EQIP plantings on orchards, according to a USGS official. Information generated from this USGS evaluation will be used to improve pollinator seed mixes for CRP and EQIP, according to FSA and NRCS officials. Second, in 2014, the Pollinator Partnership, under a cooperative agreement with NRCS, issued an independent evaluation of how NRCS field offices were promoting, implementing, monitoring, and documenting pollinator habitat efforts in conservation programs in several states. This evaluation concluded, among other things, that NRCS field offices were eager to support pollinators, but agency staff needed additional expertise to advise landowners how to implement effective conservation practices.
However, NRCS has not conducted an evaluation to show where there may be gaps in expertise and how they might be filled; for example, whether the gaps should be filled through additional formal training for staff or through the informal learning that occurs when field staff, using technical assistance funding, monitor the field work to determine which plants are thriving and attracting bees. According to NRCS officials, headquarters’ evaluations of pollinator habitat have been limited, in part, because the agency has been focused on implementing the plantings. The NRCS National Planning Procedures Handbook directs an evaluation of the effectiveness of the implemented plan to ensure it is achieving its objectives. The officials said that increased evaluation would be helpful because, while each state office has a biologist and other conservation experts, including partner biologists from nonprofit organizations, there are gaps in technical expertise on pollinator habitat available to some field offices. As a result, some field offices have less ability to effectively plan and monitor pollinator habitat. One university stakeholder suggested that NRCS ensure that each of the approximately 30 states with a significant need for pollinator habitat has a native bee expert. NRCS officials said an evaluation of field office efforts to restore or enhance bee habitat could help identify where expertise gaps occur. Another NRCS official said that the agency could survey its staff to gather their views on the need for additional training or expertise. In addition, one NRCS official said that on-site evaluation of the success of the pollinator habitat is important to understanding the effectiveness of the technical assistance. NRCS officials also said that additional evaluation is needed to determine if technical assistance funding is adequate to support conservation planning efforts for different pollinator habitats across the country. NRCS funding for technical assistance enables field staff to develop conservation plans for landowners and to assess the implementation of those plans. NRCS’s financial assistance funding to landowners helps pay to implement conservation plans. If technical assistance funding is too low, the effectiveness of conservation efforts may be compromised, according to NRCS officials. As total funding for NRCS conservation programs has increased, the percentage available for technical assistance has decreased relative to financial assistance. In 2014, technical assistance funding, measured relative to the financial assistance it supported in conservation planning and monitoring, was about half of what it was in 2002. Specifically, according to NRCS officials, for every dollar provided for financial assistance in 2002, about $1.22 went to technical assistance. However, in 2014, for every dollar provided for financial assistance, about 59 cents was provided for technical assistance. According to USDA officials, the reduced percentage of funding devoted to technical assistance has resulted in NRCS field office staff having less time to plan for and ensure the quality of conservation efforts, including pollinator habitat, because the staff must spend more time in the office managing contracts and ensuring that all financial assistance dollars are obligated.
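The proportional decline just described is easy to verify from the per-dollar figures cited; the following minimal check uses only the report’s numbers:

# Technical assistance (TA) per dollar of financial assistance (FA),
# per NRCS officials; the ratio calculation below is ours.
ta_per_fa_dollar_2002 = 1.22
ta_per_fa_dollar_2014 = 0.59
print(f"2014 TA share is {ta_per_fa_dollar_2014 / ta_per_fa_dollar_2002:.0%} "
      f"of the 2002 level")  # ~48%, i.e., roughly half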
By increasing evaluation of its habitat conservation efforts, including gaps in expertise and the technical assistance funding available to field offices, USDA could better ensure the effectiveness of its efforts to restore and enhance bee habitat plantings across the nation. EPA has taken steps to address pesticide threats to bees, but potential threats remain. Among other steps, in 2013, EPA revised the label requirements for certain pesticides and, in 2015, proposed revisions for certain additional pesticides that are acutely toxic to bees, in an effort to reduce bees’ exposure. Since at least 2009, EPA has encouraged beekeepers and others to report bee kill incidents potentially associated with pesticides, but agency officials and others point to challenges to accurate reporting and data collection on these incidents. EPA has also encouraged state and tribal governments to voluntarily develop plans to work with farmers and beekeepers to protect bees from pesticides. EPA has revised its guidance for assessing the risks new and existing pesticides pose to bees, but there are limitations to the approach, including a lack of data on pesticides’ risks to nonhoney bees and on risks that pesticide mixtures pose to bees. Changes to EPA’s risk assessment approach will likely extend its schedules for reviewing the registrations of some existing pesticides—including many that are known to be toxic to bees—as the agency gathers and reviews additional data on risks to bees. However, EPA has not revised the publicly available work schedules for pesticides currently under review. In August 2013, EPA directed the registrants of four pesticides in a class of chemicals known as neonicotinoids to submit an amendment to revise the labels of products containing those pesticides that were registered for outdoor use on plant foliage. Neonicotinoids are insecticides that affect the central nervous system of insects, causing paralysis and death. Pesticide labels contain directions for use and warnings designed to reduce exposure to the pesticide for people and nontarget organisms, including beneficial insects such as bees. It is unlawful to use any pesticide in a manner inconsistent with its labeling. In proposing the label changes, EPA cited the possible connection between acute exposure to particular pesticides and bee deaths. EPA called for the labels to have a pollinator protection box (also called a “bee advisory box”) and new language outlining the directions for the products’ use, in addition to any restrictive language that may already be on the product labels. The agency directed the registrants to submit revisions to their product labels with EPA’s prescribed language no later than September 30, 2013, and told the registrants that it anticipated that the new product labels would be in place in 2014. The new language for the pollinator protection box warns of the threat the pesticide poses to bees and other pollinators and instructs the user to follow the new directions for use. The directions for use restrict the use of the pesticide on crops and other plants at times when bees are foraging on those plants. More specifically, the directions generally prohibit foliar use, or use on leaves, until flowering is complete and all petals have fallen from the plants. However, the new directions for use allow for exceptions to the prohibition under certain conditions, which vary depending on whether managed bees are on-site to provide contract pollination services.
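The structure of these use directions can be summarized as a simple decision rule. The sketch below is a loose schematic of the label logic described above, not actual EPA label language; the parameter names are invented simplifications, and real labels spell out the qualifying conditions in detail.

# Schematic of the pollinator-protection use directions described above.
# Illustrative simplification only; not EPA label language.
def foliar_application_allowed(flowering_complete: bool,
                               managed_bees_on_site: bool,
                               contract_exception_met: bool,
                               general_exception_met: bool) -> bool:
    # Generally prohibited until flowering is complete and all petals have
    # fallen; exceptions differ depending on whether managed bees are
    # on-site for contract pollination services.
    if flowering_complete:
        return True
    if managed_bees_on_site:
        return contract_exception_met
    return general_exception_met

# During bloom, contracted bees present, no exception met: prohibited.
print(foliar_application_allowed(False, True, False, False))   # False
# After all petals have fallen: permitted.
print(foliar_application_allowed(True, False, False, False))   # True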
In November 2014, EPA staff told us the label changes to the four neonicotinoids had led to confusion for pesticide users and resentment among some stakeholder groups, but that the agency planned to address these concerns through additional label changes for those and other pesticides that are acutely toxic to bees. In particular, according to EPA officials, pesticide users found that the new label language, in some instances, contradicted other parts of the label or was poorly defined. In May 2015, EPA requested public comments on a proposal to make label changes restricting the use of some products containing acutely toxic pesticides on pollinator-attractive crops when managed bees are present for the purpose of providing pollination services, saying that "clearer and more consistent mandatory label restrictions could reduce the potential exposure to bees from pesticides categorized as acutely toxic to bees." The deadline for public comments on EPA's proposal, originally June 29, 2015, was subsequently extended to August 28, 2015. According to EPA officials, as of October 2015, the agency was in the process of reviewing more than 100,000 comments on the label proposal; in part because of the number of comments, the officials said they could not estimate when the agency would finalize the proposal.

Since at least 2009, EPA has encouraged beekeepers and others to voluntarily report bee kill incidents—that is, when bees in or near a hive are killed by a suspected exposure to a pesticide, according to agency officials. EPA records reports of bee kills that may have been associated with pesticide use in its Ecological Incident Information System (EIIS), a database of adverse pesticide incidents. When EPA receives reports of bee kill incidents, according to agency officials, it considers a range of evidence to evaluate the probability that a specific pesticide was the cause. The evidence could include information about pesticide use near the incident, the known toxicity of the pesticides used in the area, and physical or observational evidence associated with the affected bees. After considering the evidence, EPA categorizes the likelihood that a specific pesticide was associated with the bee kill as highly probable, probable, possible, unlikely, or unrelated. In total, the EIIS data include 306 unique bee kill incidents occurring from 1974 through 2014 and another 90 incidents with no associated year. Of this total of 396 incidents, EPA found sufficient evidence to categorize 201 as highly probable or probable. The 201 incidents were associated with 42 pesticides. (The EIIS data show that 3 bee kill incidents were highly probable or probable but name no specific pesticide.) According to agency officials, EPA encourages the public to report incidents to their state lead agency (typically the state's department of agriculture) so that such incidents can be properly investigated. Recognizing that some members of the public may not feel comfortable reporting to their state officials, EPA's website and the "bee advisory boxes" added to certain pesticide labels identify additional options for the public to voluntarily report bee kill incidents. These include reporting through [email protected], an e-mail address monitored by EPA's Office of Pesticide Programs, or reporting incidents to the National Pesticide Information Center. In addition, EPA enters into cooperative agreements with states. Through these agreements, EPA may delegate certain authority to states to cooperate in enforcing FIFRA.
One condition of the cooperative agreement is that states must report information on all known or suspected pesticide incidents involving pollinators to [email protected] and send a copy to the relevant EPA regional office. EPA stores data on incident reports from the public, the National Pesticide Information Center, and the states in its EIIS database.

Several factors may contribute to underreporting of bee kill incidents, according to EPA staff and others we interviewed. According to officials from EPA and beekeeping and environmental organizations, beekeepers may be reluctant to report bee kills to state agencies or to EPA for one or more of three reasons. First, beekeepers may want to avoid conflicts with farmers with whom they have an arrangement for providing pollination services or for obtaining access to forage for honey production, even if the farmer's pesticide application practices may have contributed to the incident. Second, beekeepers may want to avoid investigations that may suggest the beekeeper's hive management practices—specifically, the use of miticides or other pesticides to combat hive pests—contributed to the incident. Third, according to a senior EPA official in the Office of Pesticide Programs, some beekeepers believe that submitting reports in the past has not resulted in a positive response from regulatory authorities and, therefore, is not worth the effort. According to the senior EPA official, other challenges exist that may make bee kill incident reports inaccurate. For example, beekeepers may not be able to frequently monitor their colonies, so incidents may not be discovered for several days; the passage of time may hamper a conclusive investigation. Honey bees also forage over an extensive range, so it may be difficult to determine to which crops and pesticides they have been exposed. Finally, according to the EPA official, the states have increasingly limited budgets to support bee colony inspection programs and pesticide incident inspection programs in general and may not be able to fully investigate reported incidents.

In addition to the voluntary incident reports from beekeepers and others, FIFRA requires that pesticide registrants report factual information they are aware of concerning adverse effects associated with their products—including the death of nontarget organisms such as bees. The information reported by a registrant is known as a FIFRA 6(a)(2) Incident Report. However, according to EPA staff, FIFRA 6(a)(2) reports are not particularly useful in providing details on bee kills because FIFRA and its implementing regulations do not require registrants to identify bees as the species harmed by a pesticide. Instead, bees are recorded within a larger category of "other nontarget" organisms. In addition, registrants do not need to report individual incidents involving "other nontarget" organisms when they occur. Rather, registrants can "aggregate" incidents that occur over a 90-day period and report those aggregated data to EPA 60 days after the end of the 90-day period. While these FIFRA reporting requirements apply generally to pesticide registrants, as we noted earlier, EPA modified its requirements for the registrants of four neonicotinoid pesticides.
In its July 22, 2013, letter notifying the registrants of its plans to modify the pesticides' labels to be more protective of bees, EPA also instructed the registrants to report bee kill incidents within 10 days of learning of an incident and not to aggregate information on bee kills, regardless of the number of individual pollinators involved in any incident.

In response to a directive from the June 2014 presidential memorandum on pollinators, EPA has encouraged state and tribal environmental, agricultural, and wildlife agencies to voluntarily develop managed pollinator protection plans (protection plans) that focus on improved communication between farmers and beekeepers regarding the use of pesticides and the proximity of managed bees. EPA is working with two organizations to encourage states and tribes to implement the protection plans: (1) the State-FIFRA Issues, Research, and Evaluation Group (SFIREG) and (2) the Tribal Pesticide Program Council. In December 2014, SFIREG issued draft guidance for state lead agencies on the development and implementation of state protection plans. According to the guidance, the scope of the plans is limited to managed bees not providing contracted pollination services at the site of application. As such, the protection plans are intended to reduce pesticide exposure to bees that are adjacent to, or near, a pesticide treatment site, where bees can be exposed via drift or by flying to and foraging in the site of application. According to SFIREG's draft guidance, many of the strategies to mitigate the risk of pesticide exposure to managed pollinators are also expected to reduce the risk to native bees and other pollinators. The voluntary protection plans would supplement EPA's proposal, described above, to make label changes restricting the use of acutely toxic pesticides to protect managed bees that are under pollination contracts between farmers and beekeepers. According to the task force's strategy, one of the key elements of the state protection plans is the metrics that will be used to measure their effectiveness in reducing honey bee losses. Those metrics, according to the strategy, may differ across states and tribes. Because the development of the protection plans is voluntary, EPA will not approve or disapprove them, and measures of the plans' effectiveness will be state- or tribe-specific, according to agency officials. According to EPA officials, as of January 2016, seven states had protection plans in place (Arkansas, California, Colorado, Florida, Iowa, Mississippi, and North Dakota), while all but a few of the other states had protection plans in some stage of development. In addition, EPA provided funding for a November 2015 training program to address tribal pollinator protection plans. Stakeholders we interviewed who commented on this topic generally supported EPA's efforts to encourage pollinator protection plans. Stakeholders' views on protection plans are summarized in appendix II.

In June 2014, EPA issued guidance advising the agency's staff to consider requiring pesticide registrants to conduct additional studies on the risks that new or existing pesticides going through the registration or registration review processes may pose to bees and bee colonies. The 2014 guidance formalized interim guidance issued in 2011.
EPA summarized the need for the risk assessment guidance in a 2012 White Paper to the FIFRA Scientific Advisory Panel, which noted that a major limitation was the lack of a clear, comprehensive, and quantitative process for evaluating pesticide exposure and the subsequent risk to bees from different routes of exposure. The guidance may result in registrants conducting additional studies on the toxicity of new and existing pesticides to honey bees. It also allows for several methods of characterizing pesticide risk. However, EPA's 2014 risk assessment guidance relies largely on honey bees as a surrogate for other bee species. In addition, the guidance does not call for EPA to assess the risks that pesticide mixtures may pose to bees.

EPA's June 2014 guidance calls for agency staff to consider requiring pesticide applicants or registrants to conduct additional studies on the toxicity of their pesticides to honey bees. The guidance applies to EPA's review of new pesticide registration applications and its ongoing review of existing registrations. EPA has used, and continues to use, a three-tiered approach for assessing the risks that pesticides may pose to bees (and other organisms). That is, the agency may require additional studies—in Tiers II and III—from pesticide applicants or registrants, depending on the results of any Tier I studies that it required. Therefore, under the June 2014 guidance, EPA staff are to consider a range of studies that examine different life stages of honey bees (adult and larval), different types of toxicity (acute and chronic), and different types of exposure to pesticides (contact and oral). Studies may be conducted in laboratories on individual bees (Tier I), as "semi-field" tests of small colonies (Tier II), or as field tests of whole colonies (Tier III). EPA may also consider other lines of evidence, including the open scientific literature and incident reports.

Another aspect of assessing the risk of pesticides is deciding which chemicals within a pesticide product are to be studied. EPA's June 2014 guidance addresses this issue but leaves it to the discretion of agency staff. Specifically, the guidance states that toxicity data using the end-use product may be needed if data suggest that a typical end-use product is potentially more toxic than the active ingredient and bees may come directly into contact with the product. The guidance also calls for agency staff to consider the effects that systemic pesticides applied to seeds or in the soil may have on honey bees. Systemic pesticides applied to plants and soil can move through the plant to other plant tissues, potentially contributing to quantities of pesticide residues in pollen and nectar. EPA regulations identify three honey bee studies as required, or conditionally required, and EPA's 2014 guidance suggests additional bee toxicity studies that agency staff might consider requiring. EPA staff we interviewed acknowledged that additional steps are needed to establish study guidelines but said that the agency has the authority under FIFRA to require and review any studies that it deems necessary to determine whether a pesticide will have unreasonable adverse effects. In addition, as of October 2015, EPA had not yet issued guidelines for the new types of studies that registrants may be required to submit.
However, in July 2015, EPA announced on its website that it was considering issuing a proposal within 12 months that would update and codify the data requirements needed to characterize the potential risks of pesticides to bees and other pollinators. In the meantime, registrants may conduct three of the additional studies—acute adult oral toxicity, acute larval toxicity, and semi-field testing with whole colonies—using guidelines developed by the Organization for Economic Cooperation and Development (OECD). EPA officials told us that, as of October 2015, formal guidelines did not exist for chronic toxicity testing with adult bees or with bee larvae but said that EPA is contributing to international efforts to develop such guidelines, including draft guidelines on chronic toxicity with bee larvae. In addition, the task force's strategy stated that standardized guidelines may not be developed for field studies (Tier III) because "these studies are intended to address specific uncertainties identified in lower tier tests." Instead, according to agency officials, EPA will have to agree on specific Tier III protocols proposed by the pesticide applicant or registrant for particular pesticides.

EPA's June 2014 risk assessment guidance for honey bees allows the agency to use tiered studies in reviewing registration applications for new pesticides and new uses for existing pesticides. EPA's review of registration applications for four pesticides—cyantraniliprole, oxalic acid, sulfoxaflor, and tolfenpyrad—provides examples of how the agency's use of its 2011 interim and 2014 final guidance and its call for bee-related studies can vary. Because EPA's risk assessment approach under this guidance is a tiered one, agency staff use their discretion when requiring registrants to conduct toxicity studies. For example, EPA approved oxalic acid for a new use as a miticide to combat Varroa mites in bee hives without requiring additional Tier II or Tier III studies. According to EPA staff, the agency relied on existing data from Canada showing that the pesticide has low acute toxicity and is effective at killing Varroa mites without harming bee colonies. For the other three pesticides, which were registered before the 2014 final guidance was issued, EPA reviewed varying numbers and types of studies but did not require all of the types of studies described in the new risk assessment guidance. However, EPA decided, on the basis of the studies that were done, to place restrictions on the pesticides' use in order to reduce bees' exposure. For example, EPA did not require Tier III studies for sulfoxaflor but used the results of Tier I and Tier II studies as the basis for reducing the amount of the insecticide allowed to be applied per acre under the pesticide's 2013 registration. In addition, cyantraniliprole and tolfenpyrad are among the acutely toxic pesticides covered by EPA's May 2015 label change proposal. A finding from study results that a pesticide is toxic to bees (or other organisms) does not necessarily mean that EPA will disapprove an application for registration. Under FIFRA, throughout the tiered process, EPA considers whether mitigation measures (e.g., changes to application rates, the timing of applications, or the number of applications) are sufficient to reduce exposure to a level at which risk estimates are below levels of concern, while also taking the benefits of using the pesticide into consideration.
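The tiered process described above can be abstracted as an escalation loop in which results at one tier determine whether studies at the next tier are needed. The Python sketch below is our simplified illustration of that idea, not an implementation of EPA's actual decision criteria, which also weigh mitigation measures and a pesticide's benefits:

    # Simplified abstraction of EPA's three-tiered approach to assessing
    # pesticide risks to bees. The trigger function is a stand-in for the
    # agency's actual risk quotients and levels of concern.
    TIERS = ["Tier I (individual bees, laboratory)",
             "Tier II (small colonies, semi-field)",
             "Tier III (whole colonies, field)"]

    def studies_required(exceeds_level_of_concern):
        """Escalate through tiers while results at each tier raise concern."""
        required = []
        for tier in TIERS:
            required.append(tier)
            if not exceeds_level_of_concern(tier):
                break  # no concern at this tier; higher tiers not required
        return required

    # Example: Tier I raises concern but Tier II does not, so Tier III
    # studies are not required (compare EPA's handling of sulfoxaflor).
    print(studies_required(lambda tier: tier == TIERS[0]))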
While EPA’s June 2014 pesticide risk assessment approach provides for the inclusion of data on additional bee species where available, it relies primarily upon data from honey bees as a surrogate for all bee species. However, other bee species may be affected differently by pesticides. EPA acknowledges in its guidance that there are limitations to using honey bees as surrogates but maintains that honey bees can provide information relevant to other species, and that adequate, standardized tests are not yet available for other species. EPA is involved in international efforts to develop standardized tests for other bee species and has been directed by the task force’s strategy with researching risk assessment tools for nonhoney bee species. However, EPA does not have a schedule for expanding the risk assessment process to other bee species. Stakeholders we interviewed from farming, commercial beekeeping, university, and conservation/environmental groups said EPA should expand its risk assessment process to include testing the effects of pesticides on pollinators other than the honey bee, including other commercial, or managed, and wild, native bees. Several of these stakeholders specified that EPA should develop testing models and guidelines for other types of bees, such as solitary and bumble bees. EPA’s September 2012 White Paper attributed the agency’s focus on honey bees to two factors: (1) honey bees are considered the most important pollinator in North America from a commercial and ecological perspective and (2) standardized tests on the effects of chemicals are more developed for honey bees than for other managed bee species, such as the alfalfa leafcutting and orchard mason bees. However, the White Paper also noted that there are an estimated 4,000 species of wild, native bees in North America and more than 20,000 worldwide. These wild, native bees also provide important pollination services. Other managed and wild, native bee species may be exposed to pesticides through different routes, at different rates, or for different durations than honey bees, all of which may influence the effects of pesticides. The White Paper concluded that there was a clear need for a process to assess risks to species other than honey bees, owing to potential differences in sensitivity and exposure compared to honey bees. While noting the importance of assessing risks to diverse bee species, the White Paper also cited a 2012 European Food Safety Authority conclusion that published laboratory, semi-field, and open field test methods for other species (i.e., bumble bees, orchard mason bees, leafcutting bees, and alkali bees) needed further development. In its December 2012 review of EPA’s White Paper, the FIFRA Scientific Advisory Panel recommended that EPA require testing on at least one additional species to address the goal of protecting diversity. The FIFRA panel stated that alfalfa leafcutting bee and orchard mason bees are the easiest to include for Tier I testing, adding that these bees are commercially available in large numbers and would be fairly easy to use for higher-tiered tests. In addition, the panel noted that bumble bees are also available commercially, and considerable research is available on how to raise them, so they would be useful for Tier II tests, although with limitations. EPA’s June 2014 risk assessment guidance stated that, as the science evolves, methods and studies using other bee species may be considered and incorporated into risk assessments. 
The task force’s strategy stated that uncertainty is created by relying on honey bees as a surrogate and stated the agency was working with regulatory counterparts through the OECD to ensure the development of standardized testing methods to address this uncertainty. In that regard, the task force’s research action plan directs EPA to develop appropriate assessment tools for sublethal effects of pesticides, adjuvants, and combinations of pesticides on the fitness, development, and survival of managed and wild pollinators (i.e., honey bees and other bees). The task force’s strategy states that a metric for progress in meeting the strategy’s directives will be the extent to which standardized guidelines are developed and implemented for evaluating potential risks to bees other than honey bees. According to the strategy, these studies will be critical for determining the extent to which honey bees serve as reasonable surrogates for other species of bees. However, the strategy and the research action plan do not identify how or when EPA is to ensure that adequate test protocols are incorporated into the risk assessment process. According to EPA officials, it would not be reasonable for the strategy to dictate a timeline or for EPA to commit to one given the absence of appropriations to support the development of test guidelines. Instead, these officials said that EPA is working with the OECD and other international bodies to develop test guidelines for other species of bees. According to OECD documents, progress has been made in developing guidelines to assess the acute contact and oral toxicity of pesticides to individual bumble bees. The documents state that the results of validation testing for the guidelines (known as ring tests) are expected to be reported by late 2015 or early 2016. However, it is not clear when EPA could incorporate them into its risk assessment process, and guidelines for other bee species would take additional time to develop. Regardless, EPA has the authority under FIFRA to require pesticide registrants to submit data on the toxicity of pesticides on other bee species using methods that meet the agency’s approval. By developing a plan for obtaining data from pesticide registrants on the effects of pesticides on nonhoney bee species, including other managed or wild, native bees, into its risk assessment process, EPA could increase its confidence that it is reducing the risk of unreasonable harm to these important pollinators, consistent with the task force’s strategy and research action plan. EPA’s June 2014 risk assessment guidance calls for the agency to assess the risks that individual pesticides may pose to bees but not for the assessment of the risks from combinations of pesticide products or combinations of pesticide products with other chemicals. Farmers sometimes mix pesticide products for a single application to reduce the number of times they have to spray their fields. These combinations of pesticide products are known as tank mixtures. Beekeepers have raised concern that these mixtures of pesticide products may have synergistic effects on bees, meaning that the effect of the combination is greater than the sum of the effects of the individual pesticides. The Pollinator Stewardship Council reported on its website in 2014 that beekeepers attributed bee kill incidents to pesticides that acted in combination with each other to increase their collective toxicity. 
In addition, farmers may mix pesticide products with adjuvants, chemicals added to enhance the pesticides' effectiveness. University researchers have also reported that combining certain pesticide products with other products can synergistically increase the overall toxicity to bees. Stakeholders we interviewed from commercial bee groups, universities, and conservation/environmental groups suggested that EPA require companies to conduct toxicity studies on pesticide tank mixtures as part of its risk assessment process. According to agency officials, EPA has taken some steps to expand the scope of its risk assessment to include mixtures of pesticides, but challenges remain, as discussed below.

EPA registers an individual pesticide after assessing the risks the pesticide poses to human health or the environment when used according to its directions for use. EPA also assesses the risks posed by combinations of pesticides that the applicant intends to be used as a registered combination. Otherwise, EPA does not assess the risks of tank mixtures of pesticides or combinations of pesticides and other chemicals, such as adjuvants, that farmers or others may use. According to EPA officials, the use restrictions that apply to tank mixtures of pesticides are, instead, based on the most restrictive elements of the individual pesticides' labels. In EPA's September 2012 White Paper, the agency stated that "with respect to mixtures, while multiple stressors and the interactive effects of pesticides and/or other environmental stressors are important issues, they will not be examined at this time." However, the task force's strategy recognized the risks that pesticide mixtures may pose and called for EPA to develop appropriate tools to assess the sublethal effects of pesticides, adjuvants, and combinations of pesticides with other products on the fitness, development, and survival of managed and wild pollinators. Senior EPA officials told us in October 2015 that they agreed that tank mixtures of registered pesticides pose potential risks to bees. However, they said that there was no reliable process for assessing mixtures and that, given the number of possible permutations that may occur in tank mixing, it was difficult to imagine how EPA could reasonably commit to such an effort. EPA officials also said that the use of tank mixes may change over time and by location as farmers respond to different pest outbreaks, and that the agency does not know how it would identify commonly used mixtures. However, according to stakeholders we interviewed, sources for data on commonly used or recommended mixtures are available. These sources include the California Department of Pesticide Regulation—which has an extensive database on pesticide use—the pesticide industry, farmers, pesticide application companies, and extension agents. At the same time, EPA officials noted that the agency is working with the Fish and Wildlife Service and the National Marine Fisheries Service on assessing the risks of pesticides to threatened and endangered species such as salmon, including the risk posed by mixtures of pesticides. They said the agencies' effort could eventually be relevant to EPA's guidance for assessing pesticide risks to bees. EPA and the other agencies subsequently developed joint interim scientific approaches for assessing the risks of pesticides to threatened and endangered species.
With respect to pesticide mixtures, the agencies' document on interim approaches stated that risks associated with pesticide mixtures would largely be considered qualitatively rather than quantitatively. A related agency document states that long-term future work includes establishing a quantitative approach for assessing the risks of mixtures but provides no time frames for doing so. We acknowledge that EPA's work with other agencies on pesticide risks to threatened and endangered species may eventually contribute to its risk assessments for bees, but the effects of that work remain to be seen. By identifying the pesticide mixtures that farmers and pesticide applicators most commonly use on agricultural crops, EPA would have greater assurance that it could assess those mixtures to determine whether they pose greater risks than the sum of the risks posed by the individual pesticides. According to senior EPA officials, if the agency has information about certain combinations being used regularly, it could require that pesticide registrants provide testing data on those combinations. If an assessment of commonly used pesticide mixtures found synergistic effects on bees, FIFRA authorizes EPA to take regulatory actions to reduce risks, such as requiring label language warning of those effects.

Amendments to FIFRA require that EPA complete its reviews of all pesticide active ingredients registered as of October 1, 2007, by October 2022. Applying EPA's new risk assessment guidance to its review of registered pesticides will add time to the posted review schedules for some individual pesticides, and EPA has not revised these schedules. As discussed, EPA's revised risk assessment guidance for bees calls for the agency to consider requiring registrants to conduct additional studies on their pesticides' effects on bees. According to EPA documents and officials, the agency is now applying the new guidance to registered pesticides that are in the review process, as well as to new pesticides. Deciding what studies are needed, requesting the data from registrants, waiting for the studies to be conducted, and analyzing the study data will add time to EPA's review of some pesticides' risks to bees. The director of EPA's Pesticide Re-Evaluation Division and other senior officials told us in April 2015, and confirmed in October 2015, that the agency was in the process of deciding what additional bee studies, if any, would be needed for specific individual pesticides. They could not estimate how long it would take to make those decisions but said a large number of pesticides for which EPA had begun a registration review prior to issuing its risk assessment guidance in June 2014 could require data on bees. The number of pesticides affected by the new risk assessment guidance is, therefore, likely to be substantial, according to EPA officials. In its annual PRIA implementation report, EPA reported to Congress in March 2015 that, by September 30, 2014, it had begun the review process for 528 pesticide cases and prepared final work plans for 491 of those cases. The final work plans identify the studies the agency is requiring the registrant to conduct and show the agency's estimated schedule for completing a registration review. Of the 491 cases with final work plans, EPA had issued registration review decisions for 105 cases by the end of fiscal year 2014.
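The scheduling pressure these figures imply can be made explicit with back-of-the-envelope arithmetic. The Python sketch below uses only the counts reported above; because it ignores cases EPA had not yet opened, it understates the workload:

    # Pace check using figures EPA reported to Congress for the period
    # through September 30, 2014 (end of fiscal year 2014).
    cases_begun = 528        # registration review cases opened
    decisions_issued = 105   # registration review decisions issued

    # Fiscal years remaining through the October 2022 deadline
    # (fiscal years 2015 through 2022, inclusive).
    fiscal_years_left = 2022 - 2014

    backlog = cases_begun - decisions_issued
    required_pace = backlog / fiscal_years_left
    print(f"{backlog} open cases over {fiscal_years_left} years requires "
          f"about {required_pace:.0f} decisions per year")  # about 53 per year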
According to EPA officials, as of September 30, 2015, the agency had increased the number of reviews begun to 612 pesticide cases, had prepared final work plans for 580 pesticide cases, and had issued 155 interim and final registration review decisions. According to the EPA division director, if EPA determines through registration review that additional data are needed to make the necessary findings, the agency must obtain approval from the Office of Management and Budget (OMB) to request the data from registrants that use a particular active ingredient in their products. He added that, if EPA decides that registrants need to do additional studies on bees, it will need to obtain another approval from OMB for the new data. Once OMB approves the request, the required risk assessment studies on bees may take registrants from one to several years to conduct. The division director said that EPA was concerned that the number of pesticides needing new bee test data could overwhelm the supply of qualified testing laboratories, which could delay the start and completion of those studies. In its written comments on a draft of this report, EPA said that it had more recently learned that laboratories are building capacity to conduct these studies. However, the conduct of honey bee studies is confined to a limited window within the year, typically from April through August.

The final work plans for most of the pesticide cases for which EPA had begun registration review were developed and posted to the www.Regulations.gov website before EPA adopted its revised risk assessment guidance for bees in June 2014. According to EPA officials, those work plans may therefore not reflect the types of studies that are now called for by the new guidance or the estimated schedules for completing the registration reviews. Work plans that EPA posted after the June 2014 risk assessment guidance, on the other hand, may better reflect the types of studies that are called for by the new guidance. To examine the effect that EPA's revised risk assessment guidance has had on its review of individual pesticide registrations, we selected eight registered pesticides associated with bee kill incidents reported in EPA's EIIS database. The work plans for these pesticides (amitraz, carbaryl, chlorpyrifos, coumaphos, malathion, and three neonicotinoid pesticides: clothianidin, imidacloprid, and thiamethoxam) could provide information on how EPA's new risk assessment process will affect registration review, although we found the full effect is not yet clear. The director of the Pesticide Re-Evaluation Division explained that the work plans EPA has posted at www.Regulations.gov for amitraz, carbaryl, chlorpyrifos, coumaphos, and malathion are out of date because EPA has not yet decided what additional data on the effects of the pesticides on bees the agency will ask registrants to submit. However, EPA staff told us that the work plans for the three neonicotinoid pesticides—which predate the June 2014 risk assessment guidance—more closely reflect the guidance and call for additional studies on bees. EPA staff said that they were aware of the need for more bee studies for those pesticides as the agency developed its 2014 guidance. While the new guidance is likely to affect many pesticide reviews, EPA officials told us that the agency does not plan to revise the review schedules in work plans that have already been posted.
The officials said that doing so would place a significant burden on agency staff and detract from their ability to conduct registration reviews. Instead, EPA officials said that the agency would announce annually which pesticides it expected to have preliminary risk assessments available for public review that year. In keeping with that plan, the task force's May 2015 strategy included a list of 58 registration review preliminary risk assessments that EPA said would be open for public comments during 2015. Unlike the posted work plans for pesticides undergoing registration review, the announcement in the strategy did not estimate when the reviews of the 58 pesticides would be complete or identify what studies EPA had determined would be required. We understand that it may be challenging for agency staff to revise the review schedules in work plans that have already been posted. However, given that EPA is already working to determine what studies will be required, it may soon be able to identify the studies it would require of registrants. By disclosing in its annual PRIA implementation reports which registration reviews have potentially inaccurate schedules and when it expects those reviews to be completed, EPA could provide Congress and the public with accurate information about the schedules for completing the registration reviews, thereby increasing understanding of EPA's progress toward meeting the October 2022 deadline for completing all registration reviews.

As required by FIFRA, as amended by PRIA and subsequent legislation, EPA's PRIA implementation reports contain data on the number of cases opened and closed in a particular fiscal year and cumulatively since the start of registration review in 2007. EPA has reported on its website that it expects to open 70 or more new registration review dockets annually through fiscal year 2017. Although the reports do not estimate the number of reviews EPA expects to close each year as it moves toward the 2022 deadline, the agency wrote in its fiscal year 2014 PRIA implementation report that it continued to open dockets for new registration review cases at the pace that must be maintained in order to finish reviews in 2022. EPA has estimated that the average time it will take to complete a registration review is about 6 years, and the agency has completed an average of fewer than 20 reviews per year. However, the new risk assessment guidance for bees may increase the average time needed for reviews, raising questions about EPA's ability to complete its registration reviews by 2022. EPA officials said that they are planning to assign additional agency staff to work on these registration reviews.

USDA and EPA have taken numerous actions to protect the health of honey bees and other species of bees, thereby supporting agriculture and the environment. Even with these efforts, honey beekeepers continue to report rates of colony losses that they say are not economically sustainable. Although data on the size of nonhoney bee populations (other managed bees and wild, native bees) are lacking, there is concern that these bee species also need additional protection. Finding solutions to address the wide range of factors that may affect bee health, including pests, disease, reduced habitat and forage, and pesticide exposure, will be a complex undertaking that may take many years and require advances in science and changes in agricultural and land use practices.
Monitoring honey bees and other bee species is critical to understanding their population status and threats to their health. The task force's research action plan on bees and other pollinators identified monitoring of wild, native bees as a priority and directed agencies in USDA and the Department of the Interior to take leading and supporting roles. However, the research action plan did not establish a mechanism, such as a monitoring plan, that would define participating agencies' roles and responsibilities, set common outcomes and goals, and obtain input from states and other stakeholders on native bees. By working with other key agency stakeholders, USDA can help agencies understand their respective roles, focus on the same goals and outcomes, and better solicit input from external stakeholders.

The task force's strategy also includes a plan for extensive research on issues important to honey bees; other managed bees; wild, native bees; and other pollinators. USDA's ARS and NIFA have funded and continue to fund research on these three categories of bees. While the ability to identify research projects by bee category is key to tracking projects conducted to implement the task force's research action plan, USDA's CRIS database does not currently reflect these categories. This limitation hinders users' ability to search for or track completed and ongoing bee research. Updating the CRIS database to include the three bee categories would increase the accessibility and availability of information about USDA-funded research on all bees.

In addition, the task force's strategy established a governmentwide goal of restoring and enhancing 7 million acres of habitat for bees and other pollinators. USDA's NRCS and FSA are supporting efforts to improve habitat to help meet the strategy's goal. It is not yet clear, however, how the agencies will determine which acres count toward this goal because USDA cannot currently track all acres on which conservation practices have restored or enhanced bee habitat. Without an improved method, USDA cannot accurately measure its contribution to the strategy's goal. In addition, NRCS, which provides technical assistance to landowners implementing conservation practices, has conducted limited evaluation of the effectiveness of those efforts. NRCS's National Planning Procedures Handbook calls for the agency to evaluate its conservation practices, including the technical assistance provided to landowners. According to one evaluation, agency staff need additional expertise to effectively advise landowners on how to conserve pollinator habitat. However, NRCS has not evaluated which locations have gaps or identified methods for filling them. Such methods could include providing additional training or time to conduct technical assistance through which staff can learn which practices are working and which are not. By increasing the evaluation of its habitat conservation efforts to include identifying gaps in expertise and technical assistance, USDA could better ensure the effectiveness of its efforts to restore and enhance bee habitat plantings across the nation.

Moreover, EPA has expanded its assessment of pesticides for their risks to honey bees. EPA generally uses data on pesticides' risks to honey bees as a surrogate for risks to nonhoney bee species but stated that having data on those species would help meet the goal of protecting bee diversity.
The task force’s research action plan calls for EPA to develop tools for assessing risks to a variety of bee species, including nonhoney bee species, such as other managed or wild, native bees. EPA is collaborating with international counterparts to develop standardized guidelines for how to study the effects of pesticides on other bee species. FIFRA authorizes EPA to require pesticide registrants to submit data from tests on nonhoney bee species using methods that meet EPA’s approval. By developing a plan for obtaining data from pesticide registrants on pesticides’ effects on nonhoney bee species until the standardized guidelines are developed, EPA could increase its confidence that it is reducing the risk of unreasonable harm to these important pollinators. Furthermore, EPA does not assess the risks that mixtures of pesticides and other chemicals may pose to bees. Depending on the chemicals involved, a mixture may pose a greater risk to bees than the sum of the risks from exposure to individual pesticides. The task force’s research action plan generally called for research on the effects mixtures of pesticides can have on bees and, in particular, directed EPA to develop appropriate assessment tools for sublethal effects of pesticides, adjuvants, and combinations of pesticides with other products on the health of managed and wild pollinators. However, EPA does not have data on commonly used mixtures and does not know how it would identify them. By identifying the mixtures that farmers and pesticide applicators most commonly use on agricultural crops, EPA would have greater assurance that it could assess those mixtures to determine whether they pose greater risks than the sum of the risks posed by the individual pesticides and, if appropriate, take regulatory action. As directed by FIFRA, EPA began a review of all pesticide active ingredients registered as of October 1, 2007, in fiscal year 2007 and is required to complete it by October 2022. EPA’s review has been affected by the changes to its risk assessment process that call for pesticide registrants to submit additional bee-related data for some pesticides. As a result, the agency’s posted schedules for reviewing the registration of pesticides may be inaccurate because the schedules do not reflect requests for additional data. However, EPA has not posted revised schedules. Accurate information about the agency’s estimated schedule would help Congress and the public better understand EPA’s progress toward meeting the October 2022 deadline for completing all registration reviews. We are making four recommendations to the Secretary of Agriculture and three recommendations to the Administrator of EPA. To improve the effectiveness of federal efforts to monitor wild, native bee populations, we recommend that the Secretary of Agriculture, as a co- chair of the White House Pollinator Health Task Force, coordinate with other Task Force agencies that have monitoring responsibilities to develop a mechanism, such as a federal monitoring plan, that would (1) establish roles and responsibilities of lead and support agencies, (2) establish shared outcomes and goals, and (3) obtain input from relevant stakeholders, such as states. 
To increase the accessibility and availability of information about USDA-funded research and outreach on bees, we recommend that the Secretary of Agriculture update the categories of bees in the Current Research Information System to reflect the categories of bees identified in the White House Pollinator Health Task Force's research action plan.

To measure their contribution to the White House Pollinator Health Task Force strategy's goal of restoring and enhancing 7 million acres of pollinator habitat, we recommend that the Secretary of Agriculture direct the Administrators of FSA and NRCS to develop an improved method, within available resources, to track conservation program acres that contribute to the goal.

To better ensure the effectiveness of USDA's bee habitat conservation efforts, we recommend that the Secretary of Agriculture direct the Administrators of FSA and NRCS to, within available resources, increase evaluation of the effectiveness of their efforts to restore and enhance bee habitat plantings across the nation, including identifying gaps in expertise and technical assistance funding available to field offices.

To better ensure that EPA is reducing the risk of unreasonable harm to important pollinators, we recommend that the Administrator of EPA direct the Office of Pesticide Programs to develop a plan for obtaining data from pesticide registrants on the effects of pesticides on nonhoney bee species, including other managed or wild, native bees.

To help comply with the directive in the White House Pollinator Health Task Force's strategy, we recommend that the Administrator of EPA direct the Office of Pesticide Programs to identify the pesticide tank mixtures that farmers and pesticide applicators most commonly use on agricultural crops to help determine whether those mixtures pose greater risks than the sum of the risks posed by the individual pesticides.

To provide Congress and the public with accurate information about the schedules for completing the registration reviews for existing pesticides required under FIFRA, we recommend that the Administrator of EPA disclose in its PRIA implementation reports, or through another method of its choosing, which registration reviews have potentially inaccurate schedules and when it expects those reviews to be completed.

We provided a draft of this report to USDA and EPA for review and comment. USDA and EPA provided written comments on the draft, which are presented in appendixes IV and V, respectively. In its written comments, USDA said that it agreed, in large part, with the four recommendations relevant to the department in the draft report and that progress with regard to the recommendations would improve protection for pollinators, especially bees. In its written comments, EPA said that it agreed with the three recommendations relevant to the agency in the draft report and that it has actions under way to implement the three recommendations. USDA also described actions it has taken or could take to implement our first recommendation that the Secretary of Agriculture, as a co-chair of the White House Pollinator Health Task Force, coordinate with other task force agencies that have monitoring responsibilities to develop a mechanism, such as a federal monitoring plan, that would (1) establish roles and responsibilities of lead and support agencies, (2) establish shared outcomes and goals, and (3) obtain input from relevant stakeholders, such as states.
USDA noted that while it would be impossible to monitor all of the approximately 4,000 species of bees in North America, it would be informative for agencies to survey changes in the distributions of a common set of sentinel, or indicator, bee species. The agency also described some of the monitoring methods that it plans to use or that could be used by USDA, the Department of the Interior, and other collaborators. In doing so, USDA noted that identifying native bee species can be very difficult (even for those trained in biology and for museum curators) and that possible remedies will be explored, including the development of a universal field guide or apps that would facilitate bee identification efforts.

USDA also described steps that it plans to take to implement our second recommendation that the Secretary of Agriculture update the categories of bees in CRIS to reflect categories of bees identified in the White House Task Force's research action plan. USDA stated that the discrepancy between the governmentwide effort and current classifications needs to be reconciled to capture the efforts of research, education, and extension projects as they work to address threats to bee health. While USDA stated that the CRIS categories can be changed relatively quickly, it also stated that the efficacy of the changes varies, depending on whether they are made for historical project data or for future project reports. USDA described the additional staff time needed to analyze and recode projects manually in CRIS and noted that adding new classifications would affect current projects and would require analysis to determine whether the changes would affect trend reporting of the budget. USDA also stated that a strategy will be needed to increase awareness of the new classifications among project directors and other scientists who may choose to change to the more specific bee classifications for their projects. The agency then described the process by which changes are made to research classifications in CRIS, saying that if the CRIS Classification Board approves changes to CRIS when it meets in the spring of 2016, NIFA would address relevant changes at that time.

USDA generally agreed with our third recommendation that the Secretary of Agriculture direct the Administrators of FSA and NRCS to develop an improved method, within available resources, to track conservation program acres that contribute to the goal of restoring and enhancing habitat for pollinators. USDA said that since November 2015, FSA has had a method for estimating acres of pollinator habitat associated with Conservation Reserve Program practices. In addition, according to USDA, NRCS is exploring options to develop a method for tracking acres on which conservation practices are planned and applied to benefit pollinators.

USDA generally agreed with our fourth recommendation that the Secretary of Agriculture direct the Administrators of FSA and NRCS to, within available resources, increase evaluation of the effectiveness of their efforts to restore and enhance bee habitat plantings across the nation, including identifying gaps in expertise and technical assistance funding available to field offices. USDA said that it would expand and deepen its studies on the impact of conservation cover on honey bee and other pollinator health, diversity, and abundance as its budget allows.
EPA agreed with our first recommendation that the Office of Pesticide Programs develop a plan for obtaining data from pesticide registrants on the effects of pesticides on nonhoney bee species, including other managed or wild, native bees. In addition, EPA described actions that it is taking in collaboration with other parties to develop methods for testing the effects of pesticides on nonhoney bee species. We also noted many of these actions in the report.

EPA agreed with our second recommendation that the Office of Pesticide Programs identify the pesticide tank mixtures that farmers and pesticide applicators most commonly use on agricultural crops to help determine whether those mixtures pose greater risks than the sum of the risks posed by the individual pesticides. EPA noted that there is an opportunity to identify some commonly used tank mixtures. At the same time, EPA commented on our use of the term "unregistered mixtures." In our draft report, we intended the term "unregistered mixtures" to mean combinations of registered pesticides that EPA has not registered for use in combination. However, we agree with EPA that the term "unregistered mixtures" might cause confusion and revised the draft, replacing that term with the term "tank mixtures."

EPA agreed with our third recommendation that the agency provide Congress and the public with accurate information about the schedules for completing the registration reviews for existing pesticides required under FIFRA. However, rather than agreeing to disclose this information in its PRIA implementation reports, EPA committed to creating a public website containing this information by April 2016. We agree that a public website could be a suitable method for accomplishing the intent of our recommendation. USDA and EPA also provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

This report examines (1) the bee-related monitoring, research and information dissemination, and conservation efforts of selected U.S. Department of Agriculture (USDA) agencies and (2) the Environmental Protection Agency's (EPA) efforts to protect bees through its regulation of pesticides. To examine USDA's monitoring, research and outreach, and conservation efforts with respect to bees, we focused on the National Agricultural Statistics Service (NASS), which surveys honey beekeepers; the Agricultural Research Service (ARS) and the National Institute of Food and Agriculture (NIFA), which are the two largest USDA research agencies; and the Natural Resources Conservation Service (NRCS) and the Farm Service Agency (FSA), which oversee conservation programs.
To examine bee monitoring activities, we analyzed the methodology that NASS and the Bee Informed Partnership are using for their surveys of honey bee colony losses. We also reviewed the White House Task Force plans for wild, native bee monitoring by a variety of federal agencies to determine whether a means of federal coordination had been established. In addition, because agencies within USDA carry out bee monitoring work in conjunction with other agencies, we reviewed our prior body of work on interagency collaboration; from that work, we selected practices related to challenges that we or agency officials identified and used those practices to assess interagency collaboration at USDA concerning bee monitoring. We also reviewed ARS and NIFA documents related to monitoring projects and interviewed ARS and U.S. Geological Survey officials and university researchers participating in monitoring projects.

To examine bee-related research and outreach, we analyzed USDA project funding data for ARS for fiscal years 2008 through 2015 and for NIFA for fiscal years 2008 through 2014 to identify the types of bees addressed by the projects. We selected fiscal year 2008 as the starting point to reflect 2008 Farm Bill initiatives; data from fiscal years 2015 and 2014 were the most recent available for ARS and NIFA, respectively. We evaluated the reliability of these data by comparing agency-provided data with data on USDA's website for its Current Research Information System (CRIS) and by reviewing the agencies' management controls. We determined that the data are sufficiently reliable for the purposes of this report. We also reviewed how ARS and NIFA categorize research data in USDA's CRIS database and compared the CRIS categories with those used in the task force's strategy and research action plan. We interviewed ARS and NIFA officials in headquarters and in three bee laboratories regarding research and outreach projects being conducted and the usefulness of the CRIS bee categories.

To examine bee-related activities in two key USDA agencies with conservation programs, we collected data from NRCS and FSA on bee habitat acres established in 2014 and 2015 under two honey bee initiatives and on associated agency funding. We evaluated the reliability of these data by reviewing the agencies' management controls for the systems maintaining the data and determined that the data were sufficiently reliable for the purposes of this report. We also reviewed NRCS and FSA guidance and other documents on bee habitat, as well as evaluations of NRCS technical assistance efforts. In particular, we reviewed an evaluation by the Pollinator Partnership of NRCS's technical assistance efforts and examined the agency's response to conclusions about the level of bee habitat conservation expertise within the agency. We interviewed FSA and NRCS officials to discuss strengths and weaknesses of their pollinator habitat efforts, particularly related to evaluation and technical assistance.

To examine EPA's efforts to protect bees, we gathered information on its regulation of pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). In particular, we obtained documents from, and conducted interviews with, officials in EPA's Office of Pesticide Programs (OPP).
OPP carries out EPA's responsibilities for regulating the manufacture and use of all pesticides (including insecticides, herbicides, rodenticides, disinfectants, sanitizers, and more) in the United States. Specifically, we reviewed EPA's decisions in 2014 to modify the labels of pesticide products containing neonicotinoid active ingredients. We also reviewed EPA's 2015 proposal to modify the labels of pesticides the agency has determined to be acutely toxic to bees. We also gathered information about pesticides that have been associated with bee kill incidents from 1974 through 2014, as indicated by reports in EPA's Ecological Incident Information System (EIIS). To assess the reliability of the EIIS data, we discussed with EPA officials the methods by which the agency collects and assesses the EIIS data and determined that, while the data had limitations, they were sufficiently reliable for the purpose of identifying pesticides potentially associated with bee kills. Furthermore, we reviewed documents and interviewed agency officials regarding EPA's efforts to encourage states to develop voluntary "managed pollinator protection plans." In addition, we reviewed the agency's 2011 interim and 2014 final guidance for assessing the risks that pesticides pose to bees and examined how the agency has applied the new guidance to particular pesticides. We also reviewed an EPA "White Paper" on risk assessment the agency submitted to the FIFRA Scientific Advisory Panel for comment, as well as the panel's response. To learn more about how the agency has used its 2014 risk assessment guidance when reviewing the registration of existing pesticides, we selected 10 pesticides shown by EPA's EIIS database to be associated with bee kills. When EPA receives reports of bee kill incidents, according to agency officials, it considers the evidence provided and categorizes the likelihood that a specific pesticide was associated with the bee kill as highly probable, probable, possible, unlikely, or unrelated. We assigned to those certainties a score of 4, 3, 2, 1, or 0, respectively, and multiplied the number of incidents for each pesticide by the certainty score. Using the product of those calculations, we identified the 10 pesticides associated with the largest number of bee kill incidents, weighted by EPA's degree of certainty. (An illustrative sketch of this calculation appears at the end of this appendix.) The 10 pesticides, in alphabetical order, are amitraz, carbaryl, chlorpyrifos, clothianidin, coumaphos, imidacloprid, malathion, methyl parathion, parathion, and thiamethoxam. However, 2 of the pesticides, parathion and methyl parathion, have been canceled by their registrants and, therefore, are no longer subject to EPA's registration review process. For the remaining 8 pesticides, we reviewed EPA's final work plans and other documents related to the agency's registration review process and interviewed agency officials to determine what effect the new risk assessment guidance had on the registration review process. We reviewed data and interviewed agency officials about the status of EPA's pesticide registration and registration review programs. The data included the number of pesticide "cases" for which EPA had started the registration review process from the beginning of fiscal year 2007 through the end of fiscal year 2015, the number of cases with final work plans completed, and the number of case reviews that EPA has completed.
We selected these time frames because EPA began the registration review process required by FIFRA in fiscal year 2007, and the most recent data available from the agency were through the end of fiscal year 2015. To assess the reliability of the data on registration reviews provided directly to us by EPA's OPP, we compared them to data in EPA implementation reports to Congress required by FIFRA and found them sufficiently reliable for our reporting purposes. To address both objectives, we gathered stakeholders' views on what efforts, if any, USDA and EPA could take to protect bee health. Specifically, we interviewed stakeholders from the following types of organizations or entities: general farming, including conventional and organic farming; commodity farmers whose crops are pollinated by managed bees; commercial beekeepers; pesticide manufacturers; state governments; universities; and conservation/environmental protection. We developed a list of candidate stakeholders by asking for suggestions from federal officials and others knowledgeable about bee health and through our review of relevant literature. USDA and EPA officials reviewed our list of candidate stakeholders and made suggestions. We also obtained advice from a member of the National Academy of Sciences with extensive experience on bee and pollinator research about how to achieve a balanced list of stakeholders with varied expertise and knowledge. Appendix II presents a summary of stakeholders' views on USDA and EPA efforts to protect bees. We conducted 35 interviews with stakeholders. A total of 50 individuals participated in the interviews because, in some instances, more than one person represented a stakeholder organization. See appendix III for the names of the individuals we interviewed, their titles, affiliations, and types of stakeholder organizations. To ensure that we asked consistent questions of all the identified stakeholders, we developed an interview instrument that included questions about the stakeholders' expertise and experience regarding bees, their knowledge of relevant USDA and EPA activities to protect bee health, and their views on suggestions for efforts, if any, (1) USDA's ARS, NIFA, or NRCS should make with regard to bee-related research and information dissemination; (2) other USDA agencies should make to protect bee health; or (3) EPA should make to protect bee health. With the exception of the university research scientists, the stakeholders represented their organizations' views. After completing the interviews, we conducted a content analysis of the stakeholders' responses, whereby we organized their comments into relevant categories. Because we used a nonprobability sample of stakeholders, their views cannot be generalized to all such stakeholder organizations but can be illustrative. In addition, the views expressed by the stakeholders do not represent the views of GAO. Further, we did not assess the validity of the stakeholders' views on what efforts USDA and EPA should make to protect bee health. We conducted this performance audit from October 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
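To make the weighted ranking described above concrete, the following is a minimal sketch rather than code GAO used; the incident records in it are hypothetical. It applies the certainty scores of 4 through 0 to EIIS-style reports and ranks pesticides by their weighted totals:

```python
# Minimal sketch of the weighted ranking described in this appendix; this is
# an illustration, not code GAO used, and the incident counts are hypothetical.
from collections import defaultdict

# EPA's certainty categories mapped to the scores described in this appendix.
CERTAINTY_SCORE = {
    "highly probable": 4,
    "probable": 3,
    "possible": 2,
    "unlikely": 1,
    "unrelated": 0,
}

# Hypothetical EIIS-style records: (pesticide, certainty category, incidents).
incident_reports = [
    ("carbaryl", "highly probable", 12),
    ("carbaryl", "possible", 30),
    ("malathion", "probable", 20),
    ("imidacloprid", "highly probable", 9),
    ("coumaphos", "unlikely", 40),
]

# Multiply the number of incidents by the certainty score and sum per pesticide.
weighted_totals = defaultdict(int)
for pesticide, certainty, count in incident_reports:
    weighted_totals[pesticide] += count * CERTAINTY_SCORE[certainty]

# Rank pesticides by weighted total (GAO selected the top 10).
for pesticide, score in sorted(weighted_totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pesticide}: weighted incident score = {score}")
```

Under this scheme, the ranking reflects both the number of reported incidents and EPA's confidence in them, so many low-certainty incidents can outweigh a few high-certainty ones.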
This appendix presents stakeholders' views regarding suggested efforts the U.S. Department of Agriculture (USDA) and Environmental Protection Agency (EPA) should make to further protect bee health. Stakeholders provided these views in interviews. Specifically, we interviewed a nonprobability sample of 35 stakeholders from the following types of organizations or entities: general farming, including conventional and organic farming; commodity farmers whose crops are pollinated by managed bees; commercial beekeeping; pesticide manufacturing; state government; university research; and conservation/environmental protection. In our interviews, we asked stakeholders for their familiarity with agency efforts to protect bee health as well as for their views on suggestions for any efforts the agencies should make to further protect bee health. The information in table 1 provides a summary of stakeholders' views on commonly cited topics and indicates the types of stakeholder groups that expressed those views. In addition to the individual named above, Anne K. Johnson (Assistant Director), Kevin Bray, Ross Campbell, John Delicath, Ashley Hess, Meredith Lilley, Beverly Peterson, and Leigh White made key contributions to this report. Barbara El Osta, Karen Howard, Ying Long, Perry Lusk, Jr., Anne Rhodes-Kline, Dan Royer, Kiki Theodoropoulos, and Walter Vance also made important contributions to this report.
Honey bees and other managed and wild, native bees provide valuable pollination services to agriculture worth billions of dollars to farmers. Government and university researchers have documented declines in some populations of bee species, with an average of about 29 percent of honey bee colonies dying each winter since 2006. A June 2014 presidential memorandum on pollinators established the White House Pollinator Health Task Force, comprising more than a dozen federal agencies, including USDA and EPA. GAO was asked to review efforts to protect bee health. This report examines (1) selected USDA agencies' bee-related monitoring, research and outreach, and conservation efforts and (2) EPA's efforts to protect bees through its regulation of pesticides. GAO reviewed the White House Task Force's national strategy and research action plan, analyzed data on USDA research funding for fiscal years 2008 through 2015, reviewed EPA's guidance for assessing pesticides' risks to bees, and interviewed agency officials and stakeholders from various groups, including beekeepers and pesticide manufacturing companies. The U.S. Department of Agriculture (USDA) conducts monitoring, research and outreach, and conservation that help protect bees, but limitations in those efforts hamper the department's ability to protect bee health. For example, USDA has increased monitoring of honey bee colonies managed by beekeepers to better estimate losses nationwide but does not have a mechanism in place to coordinate the monitoring of wild, native bees that the White House Pollinator Health Task Force's May 2015 strategy directs USDA and other federal agencies to conduct. Wild, native bees, which also pollinate crops, are not managed by beekeepers and are not as well studied. USDA officials said they had not coordinated with other agencies to develop a plan for monitoring wild, native bees because they were focused on other priorities. Previous GAO work has identified key practices that can enhance collaboration among agencies, such as clearly defining roles and responsibilities. By developing a mechanism, such as a monitoring plan for wild, native bees that establishes agencies' roles and responsibilities, USDA would have better assurance that federal efforts to monitor bee populations will be coordinated and effective. Senior USDA officials agreed that increased collaboration would improve federal monitoring efforts. USDA also conducts and funds research and outreach on the health of different categories of bee species, including honey bees and, to a lesser extent, other managed bees and wild, native bees. Consistent with the task force strategy and the 2008 Farm Bill, USDA has increased its conservation efforts on private lands to restore and enhance habitat for bees but has conducted limited evaluations of the effectiveness of those efforts. For example, a USDA-contracted 2014 evaluation found that agency staff needed additional expertise on how to implement effective habitat conservation practices, but USDA has not defined those needs through additional evaluation. By evaluating gaps in expertise, USDA could better ensure the effectiveness of its efforts to restore and enhance bee habitat plantings across the nation. USDA officials said that increased evaluation would be helpful in identifying where gaps in expertise occur.
The Environmental Protection Agency (EPA) has taken steps to protect honey bees and other bees from risks posed by pesticides, including revising the label requirements for certain pesticides, encouraging beekeepers and others to report bee deaths potentially associated with pesticides, and urging state and tribal governments to voluntarily develop plans to work with farmers and beekeepers to protect bees. EPA also issued guidance in 2014 that expanded the agency's approach to assessing the risk that new and existing pesticides pose to bees. The task force strategy also calls for EPA to develop tools to assess the risks posed by mixtures of pesticide products. EPA officials agreed that such mixtures may pose risks to bees but said that EPA does not have data on commonly used mixtures and does not know how it would identify them. According to stakeholders GAO interviewed, sources for data on commonly used or recommended mixtures are available and could be collected from farmers, pesticide manufacturers, and others. By identifying the pesticide mixtures that farmers most commonly use on crops, EPA would have greater assurance that it could assess those mixtures to determine whether they pose greater risks than the sum of the risks posed by individual pesticides. GAO recommends, among other things, that USDA coordinate with other agencies to develop a plan to monitor wild, native bees and evaluate gaps in staff expertise in conservation practices, and that EPA identify the most common mixtures of pesticides used on crops. USDA and EPA generally agreed with the recommendations.
Recent statistics from CDC show that many high school students engage in sexual behavior that places them at risk for unintended pregnancy and STDs. In 2005, 46.8 percent of high school students reported that they had ever had sexual intercourse, with 14.3 percent of students reporting that they had had sexual intercourse with four or more persons. The likelihood of ever having sexual intercourse varied by grade, with the highest rate among 12th grade students (63.1 percent) and the lowest rate among 9th grade students (34.3 percent). CDC also has reported that the prevalence of certain STDs—including the rate of chlamydia infection, the most frequently reported STD in the United States—peaks in adolescence and young adulthood. According to CDC, in 2004 the chlamydia rates among adolescents 15 to 19 years old (1,579 cases per 100,000 adolescents) and young adults 20 to 24 years old (1,660 cases per 100,000) were each more than twice the rates among all other age groups. HHS's current strategic plan includes the objectives to reduce the incidence of STDs and unintended pregnancies and to promote family formation and healthy marriages. These two objectives support HHS's goals to reduce the major threats to the health and well-being of Americans and to improve the stability and healthy development of American children and youth, respectively. Abstinence-until-marriage education programs are one of several types of programs that support these objectives. The three main federal abstinence-until-marriage education programs—the State Program, the Community-Based Program, and the AFL Program—provide grants to support the recipients' own efforts to provide abstinence-until-marriage education at the local level. These programs must comply with the statutory definition of abstinence education (see table 1). The State Program, administered by ACF, provides funding to its grantees—states—for the provision of abstinence-until-marriage education to those most likely to have children outside of marriage. States that receive grants through the State Program have discretion in how they use their funding to provide abstinence-until-marriage education. Some require that organizations apply for funds and use them to administer abstinence-until-marriage education programs. Others may directly administer such programs. At their discretion, states may also provide mentoring, counseling, and adult supervision to adolescents to promote abstinence from sexual activity until marriage. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 established the State Program, and states were awarded grants beginning in fiscal year 1998. Funds are allotted to each state that submits the required annual application based on the ratio of the number of low-income children in the state to the total number of low-income children in all states. States are required to match every $4 they receive in federal money with $3 of nonfederal money and are required to report annually on the performance of the abstinence-until-marriage education programs that they support or administer. In fiscal year 2005, 47 states, the District of Columbia, and 3 insular areas were awarded funding.
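As a minimal formalization of the State Program allotment and matching rules just described, the following uses symbols that are ours rather than the statute's: F is the total State Program funding for a year, L_s is the number of low-income children in state s, A_s is the state's allotment, and M_s is its nonfederal match.

```latex
% Illustrative only; symbols are ours, not the statute's.
% Proportional allotment to state s:
\[
  A_s = F \times \frac{L_s}{\sum_{t} L_t}
\]
% Matching requirement ($3 nonfederal for every $4 federal), so the
% nonfederal match must satisfy:
\[
  M_s \geq \tfrac{3}{4}\, A_s
\]
% For example, a state receiving a $400,000 federal allotment must
% contribute at least $300,000 in nonfederal funds.
```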
The Community-Based Program, which is also administered by ACF, is focused on funding public and private entities that provide abstinence-until-marriage education for adolescents from 12 to 18 years old, with the purpose of creating an environment within communities that supports adolescent decisions to postpone sexual activity until marriage. The Community-Based Program provides grants for school-based programs, adult and peer mentoring, and parent education groups. The Community-Based Program first awarded grants in fiscal year 2001. Grantees of the Community-Based Program are selected through a competitive process and are evaluated according to several criteria, such as the extent to which they have demonstrated that a need exists for abstinence-until-marriage education for a targeted population or in a specific geographic location. Grantees are required to report to ACF, on a semiannual basis, on the performance of their programs. For fiscal year 2005, 63 grants were awarded to organizations and other entities. The AFL Program supports programs that provide abstinence-until-marriage education. The primary purpose of these programs is to find effective means of reaching preadolescents and adolescents before they become sexually active in order to encourage them to abstain from sexual activity and other risky behaviors. Under the AFL Program, OPA awards competitive grants to public or private nonprofit organizations or agencies, including community-based and faith-based organizations, to facilitate abstinence-until-marriage education in a variety of settings, including schools and community centers. Established in 1981, the AFL Program began awarding grants in fiscal year 1982. AFL Program grantees include school districts, youth development groups, and medical centers. Grant applicants are evaluated based on several criteria, such as the extent to which they provide a clear statement of mission, goals, measurable objectives, and a reasonable method for achieving their objectives. Grantees are required to conduct evaluations of certain aspects of their programs and report annually on their performance. As of August 2006, OPA funded 58 abstinence-until-marriage education programs, and most of these were focused on reaching young adolescents from the ages of 9 to 14. Funding provided by HHS for abstinence-until-marriage education programs has increased steadily since 2001 (see table 2). For the three main programs combined—the State Program, the Community-Based Program, and the AFL Program—the amount of agency funding increased from about $73 million in fiscal year 2001 to about $158 million in fiscal year 2005. Nearly all of this increase was for the Community-Based Program; funding under this program increased by about $84 million from fiscal years 2001 through 2005. In fiscal year 2005, agency funding for the Community-Based Program constituted the largest share of the total funding (about 66 percent) for the three main programs combined. Within each of the three main abstinence-until-marriage education programs, the amount of individual grants varied. In fiscal year 2005, the State Program's annual grants ranged from $57,057 to $4,777,916 and the median annual grant amount was $569,675. That same year, the Community-Based Program's annual grants ranged from $213,276 to $800,000 and the median grant amount was $642,250. In fiscal year 2006, the AFL Program's annual grants ranged from $95,676 to $300,000 and the median grant amount was $225,000.
Five organizational units located within HHS—ACF, OPA, CDC, ASPE, and NIH—have responsibilities related to abstinence-until-marriage education. ACF and OPA administer the three main federal abstinence-until-marriage education programs. CDC supports abstinence-until-marriage education at the national, state, and local levels. CDC, ASPE, and NIH are sponsoring research on the effectiveness of abstinence-until-marriage programs. ACF is responsible for federal programs that promote the economic and social well-being of families, children, individuals, and communities. ACF administers and provides oversight of both the State Program and the Community-Based Program by, among other things, awarding grants, providing training and technical assistance to grantees, and requiring annual performance reporting from grantees. ACF has been responsible for the State Program since June 2004 and the Community-Based Program since October 2005. HRSA previously administered these programs. OPA has responsibility for advising the Secretary of HHS on a wide range of reproductive health topics, including adolescent pregnancy and family planning. The office is also responsible for administering programs that provide services for pregnant and parenting teens and prevention programs, such as abstinence-until-marriage education programs. OPA administers and provides oversight of the AFL Program by awarding grants, providing training and technical assistance to grantees, and requiring annual performance reporting from grantees. CDC is primarily responsible for the prevention and control of infectious and chronic diseases, including STDs. CDC provides funding to state and local education agencies in their efforts to support comprehensive school health education and HIV/STD prevention education programs, and CDC officials told us that some of these are focused on abstinence. CDC also provides funding to several state education agencies to implement various abstinence projects, such as collaboration-building among agencies to increase the impact of their efforts to encourage abstinence. Further, CDC develops tools to assist state and local education agencies with their health education programs. CDC provides funding to several national organizations to build the capacity of abstinence-until-marriage education providers. Organizations’ activities include, but are not limited to, the development and distribution of educational materials. CDC is also sponsoring research on the effectiveness of an abstinence-until-marriage education program. ASPE advises the Secretary of HHS in several areas, including policy development in health, human services, data, and science. ASPE is responsible for the development of policy analyses and it conducts research and evaluation studies in several areas, including the health of children and adolescents. ASPE is currently sponsoring research on the effectiveness of abstinence-until-marriage education programs. NIH is the primary federal agency that conducts and supports medical and behavioral research among various populations, including children and adolescents. NIH is currently sponsoring research on the effectiveness of abstinence-until-marriage education programs. Efforts by HHS and states to assess the scientific accuracy of materials used in abstinence-until-marriage education programs have been limited. 
ACF—which awards grants to two programs that account for the largest portion of federal spending on abstinence-until-marriage education—does not review its grantees' educational materials for scientific accuracy and does not require grantees of either program to review their own materials for scientific accuracy. In addition, not all states funded through the State Program have chosen to review their program materials for scientific accuracy. In contrast to ACF, OPA has reviewed the scientific accuracy of grantees' proposed educational materials and corrected inaccuracies in these materials. There have been limited efforts to review the scientific accuracy of educational materials used in ACF's State and Community-Based Programs—the two programs that account for the largest portion of federal spending on abstinence education. ACF does not review materials for scientific accuracy either in reviewing grant applications or in overseeing grantees' performance. Prior to fiscal year 2006, State Program and Community-Based Program applicants were not required to submit copies of their proposed educational materials with their applications. While ACF required grantees of the Community-Based Program—but not the State Program—to submit their educational materials with their fiscal year 2006 applications, ACF officials told us that grantee applications and materials are only reviewed to ensure that they address all aspects of the scope of the Community-Based Program, such as the A-H definition of abstinence education. Further, documents provided to us by ACF indicate that the agency does not review grantees' educational materials for scientific accuracy as a routine part of its oversight activities. In addition, ACF does not require its grantees to review their own materials for scientific accuracy. Similarly, when HRSA was responsible for the State and Community-Based Programs, the agency did not review materials used by grantees for scientific accuracy or require grantees to review their own materials. Not all grantees of the State Program have chosen to review the scientific accuracy of their educational materials. Officials from 5 of the 10 states in our review reported that their states have chosen to conduct such reviews. Officials in these states identified a variety of reasons why their states reviewed abstinence-until-marriage educational materials to ensure their accuracy, including program requirements, state education laws and guidelines, and past lawsuits. For example, Michigan's Revised School Code states that materials and instruction in the sex education curricula, including information on abstinence, "shall not be medically inaccurate," and Ohio's fiscal year 2007 abstinence-until-marriage education program guidance states that abstinence-until-marriage educational materials "should be medically accurate in all assertions." The five states we contacted that review abstinence-until-marriage educational materials for scientific accuracy have used a variety of approaches in their reviews. Some states contracted with medical professionals—such as nurses, gynecologists, and pediatricians—to serve as medical advisors who review program materials and use their expertise to determine what is and is not scientifically accurate. Some states have created checklists or worksheets to guide their staff in conducting the review and to document findings of inaccuracy or verification of a statement.
All five states use medical professionals in conducting these reviews. One of the states requires that all statistics or scientific statements cited in a program's materials be sourced to CDC or a peer-reviewed medical journal. Officials from this state told us that if statements in these materials cannot be attributed to these sources, the statements are required to be removed until citations are provided and materials are approved. Officials from this state told us they have also supplemented their review of program materials with on-site classroom observations to assess the scientific accuracy of the information presented to students. Officials from two of the five states reported that they have found inaccuracies as a result of their reviews. For example, one state official stated that because information is constantly evolving, state officials have had to correct out-of-date scientific information. In addition, this official cited an instance where materials incorrectly suggested that HIV can pass through condoms because the latex used in condoms is porous. In addition, this official provided documentation that the state has had to correct a statement indicating that when a person is infected with the human papillomavirus, the virus is "present for life" because, in almost all cases, this is not true. State officials who have identified inaccuracies told us that they informed their grantees of inaccuracies so that they could make corrections in their individual programs. One state official added that she contacted the authors of the materials to report an inaccuracy. Some of the educational materials that states have reviewed are materials that are commonly used in the Community-Based Program. Officials from four of the five states that review materials for scientific accuracy told us that they have each reviewed at least one of the five curricula most commonly used in the Community-Based Program because programs in their state were using them: Choosing the Best, WAIT Training, Sex Can Wait, A.C. Green's Game Plan Abstinence Program, and Worth the Wait. Based on ACF documents, we found that there were 58 different curricula used by grantees of the Community-Based Program in fiscal year 2005. However, more than half of the grantees of the Community-Based Program reported using at least one of these five curricula. While there has been limited review of materials used in the State and Community-Based Programs, grantees of these programs have received some technical assistance designed to improve the scientific accuracy of their materials. For example, ACF officials reported that the agency provided a conference for grantees of the Community-Based Program in February 2006 that included a presentation focused on medical accuracy, including a discussion of state legislative proposals that would require medical accuracy in abstinence-until-marriage education, and how to identify reliable data. In addition, in 2002, HRSA awarded a contract to the National Abstinence Clearinghouse requiring, among other things, that the contractor develop and implement a program to provide medically accurate information and training to grantees of the State and Community-Based Programs. (See app. I for a description of HRSA's process for awarding this contract.)
The portion of the contract that focused on providing medically accurate information to grantees was subcontracted to the Medical Institute for Sexual Health (Medical Institute), which has conducted presentations at regional educational conferences to provide grantees with medical and scientific information, such as updated information on condoms and STD transmission. The Medical Institute has also provided consultative services to grantees by responding to medical and scientific questions. In contrast to ACF, OPA reviews for scientific accuracy the educational materials used by AFL Program grantees. Specifically, OPA reviews its grantees' proposed educational materials for scientific accuracy before they are used. Agency officials stated that they began to review these materials while litigation concerning the AFL Program was ongoing. OPA continued to review these materials as part of a 1993 settlement to this lawsuit. The settlement agreement expired in 1998, though the agency has continued to review grantees' proposed educational materials for accuracy as a matter of policy. OPA officials told us that grant applicants submit summaries of materials they propose to use, though the materials are not reviewed for scientific accuracy until after grantees have been selected. OPA officials said that after grants are awarded, a medical education specialist (in consultation with several part-time medical experts) reviews the grantees' printed materials and other educational media, such as videos. OPA officials explained that the medical education specialist must approve all materials before they are used. On many occasions, OPA grantees have proposed using—and therefore OPA has reviewed—materials commonly used in the Community-Based Program. For example, an OPA official told us that the agency had reviewed three of the Community-Based Program's commonly used curricula—Choosing the Best, Sex Can Wait, and A.C. Green's Game Plan Abstinence Program—and is also currently reviewing another curriculum commonly used by Community-Based Program grantees, WAIT Training. OPA officials stated that the medical education specialist has occasionally found and addressed inaccuracies in grantees' proposed educational materials. OPA officials stated that these inaccuracies are often the result of information being out of date because, for example, medical and statistical information on STDs changes frequently. OPA has addressed these inaccuracies by either not approving the materials in which they appeared or correcting the materials through discussions with the grantees and, in some cases, the authors of the materials. In fiscal year 2005, OPA disapproved of a grantee using a specific pamphlet about STDs because the pamphlet contained statements about STD prevention and HIV transmission that were considered incomplete or inaccurate. For example, the pamphlet stated that there was no cure for hepatitis B, but the medical education specialist required the grantee to add that there was a preventive vaccine for hepatitis B. In addition, OPA required that a grantee correct several statements in a true/false quiz—including statements about STDs and condom use—in order for the quiz to be approved for use.
For example, the medical education specialist changed a sentence from "The only 100% effective way of avoiding STDs or unwanted pregnancies is to not have sexual intercourse." to "The only 100% effective way of avoiding STDs or unwanted pregnancies is to not have sexual intercourse and engage in other risky behaviors." While OPA and some states have reviewed their grantees' abstinence-until-marriage education materials for scientific accuracy, these types of reviews have the potential to affect abstinence-until-marriage education providers more broadly. Such efforts may create an incentive for authors of abstinence-until-marriage education materials to ensure they are accurate. Thus, some authors of abstinence-until-marriage education materials have recently updated materials in their curricula following reports that questioned their accuracy. For example, one of the curricula most widely used by grantees of the Community-Based Program—WAIT Training—has recently been updated, and the updated information is available on its Web site. A representative from WAIT Training stated that the company recently revised its curriculum, in part, in response to a congressional review that found inaccuracies in its abstinence-until-marriage education materials. HHS, states, and researchers have made a variety of efforts to assess the effectiveness of abstinence-until-marriage education programs; however, a number of factors limit the conclusions that can be drawn about the effectiveness of these programs. ACF and OPA have required their grantees to report on various outcomes used to measure the effectiveness of grantees' abstinence-until-marriage education programs, though the reporting requirements for each of the three abstinence-until-marriage programs differ. In addition, to assess the effectiveness of the State and Community-Based Programs, ACF has analyzed national data on adolescent birth rates and the proportion of adolescents who report having had sexual intercourse. Other organizational units within HHS—ASPE, CDC, and NIH—are funding studies designed to assess the effectiveness of abstinence-until-marriage education programs in delaying sexual initiation, reducing pregnancy and STD rates, and reducing the frequency of sexual activity. Despite these efforts, several factors limit the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Most of the efforts to evaluate the effectiveness of abstinence-until-marriage education programs that we describe in our review have not met certain minimum criteria that experts have concluded are necessary in order for assessments of program effectiveness to be scientifically valid, in part because study designs that meet these criteria can be expensive and time-consuming to carry out. In addition, the results of some efforts that meet the criteria of a scientifically valid assessment have varied, and two key studies that meet these criteria have not yet been completed. Efforts of HHS, states, and researchers to assess the effectiveness of abstinence-until-marriage education programs have included ACF and OPA requiring grantees to report data on outcomes of their abstinence-until-marriage education programs; ACF analyzing national data on adolescent behavior and birth rates; and other HHS agencies, states, and researchers funding or conducting studies to assess the effectiveness of abstinence-until-marriage education programs.
ACF has made efforts to assess the effectiveness of abstinence-until-marriage education programs funded by the State Program and the Community-Based Program. One of ACF's efforts has been to require grantees of both programs to report data on outcomes, though the two programs have different requirements for the outcomes grantees must report. For the State Program, as of fiscal year 2006, grantees must report annually on four measures of the prevalence of adolescent sexual behavior in their states, such as the rate of pregnancy among adolescents aged 15 to 17 years, and compare these data to program targets over 5 years. To report on these four measures, states may choose the data sources they will use. States must also develop and report on two additional performance measures that are related to the goals of their programs. (See table 3 for a list of ACF's fiscal year 2006 reporting requirements for the State Program.) As of fiscal year 2006, ACF requires Community-Based Program grantees to develop and report on outcome measures designed to demonstrate the extent to which grantees' community-based abstinence education programs are accomplishing their program goals. ACF requires grantees of the Community-Based Program to contract with third-party evaluators, who are responsible for both helping grantees develop the outcome measures and monitoring grantee performance against the measures, but because this is a new requirement established for fiscal year 2006 grantees, ACF has not yet received the results of these evaluations. In addition to outcome reporting, ACF requires grantees of the Community-Based Program to report on program "outputs," which measure the quantity of program activities and other deliverables, such as the number of participants who are served by the abstinence-until-marriage education programs. According to ACF officials, the agency requires grantees of both the State Program and the Community-Based Program to report on program outcomes in order to monitor grantees' performance, target training and technical assistance, and help grantees improve service delivery. (See table 3 for a list of ACF's fiscal year 2006 reporting requirements for the Community-Based Program.) ACF's fiscal year 2006 reporting requirements for grantees of the State Program are the same as HRSA's when it administered the State Program. In contrast, ACF's fiscal year 2006 reporting requirements for the Community-Based Program differ from HRSA's reporting requirements for the program. For example, for Community-Based Program grants awarded in fiscal year 2001, HRSA required grantees to report on the effectiveness of their programs, as measured by program participation as well as behavioral and biological outcomes. These performance measures were modified for fiscal year 2002, in part, HHS officials explained, because of concerns expressed by members of the abstinence-education community that the original performance measures did not accurately reflect the efforts of the grantees of the Community-Based Program. For grants awarded from fiscal years 2002 through 2004, HRSA required grantees of the Community-Based Program to report on a combination of program outputs, such as the proportion of adolescents who completed an abstinence-until-marriage education program, and measures of adolescent intentions, such as the proportion of adolescents who committed to abstaining from sexual activity until marriage.
For grants awarded in fiscal year 2005, when ACF assumed responsibility for the Community-Based Program from HRSA, grantees were not required to report on any specific performance measures. OPA has also made efforts to assess the effectiveness of the AFL Program. Specifically, OPA requires grantees of the AFL Program to develop and report on outcome measures that are used to help demonstrate the extent to which grantees' programs are having an effect on program participants. According to OPA officials, the agency recommends that grantees report on outcome measures, such as participants' knowledge of the benefits of abstinence, their reported intentions to abstain from sexual activity, reported beliefs in their ability to remain abstinent, and reported parental involvement in their lives. To collect data on these outcome measures and any others, OPA requires all grantees funded in fiscal year 2004 and beyond to administer, at a minimum, a standardized questionnaire—developed by OPA—to their program participants, both when participants begin an abstinence-until-marriage education program and after the program's completion. The standardized questionnaire includes questions intended to obtain information on participants' reported involvement in extracurricular activities, behaviors linked to health risks, attitudes and intentions about abstinence, and opinions about the consequences of premarital sexual activity. Like ACF, OPA requires its grantees to contract with independent evaluators, such as colleges or universities, which are responsible for evaluating the effectiveness of grantees' individual abstinence-until-marriage education programs. In addition to evaluating the extent to which grantees are meeting their goals, OPA officials stated that the independent evaluators may also provide input to grantees of the AFL Program on other aspects of the program to improve their service delivery. Unlike ACF, OPA requires that the third-party evaluations incorporate specific methodological characteristics, such as control groups or comparison groups and sufficient sample sizes. In addition, OPA requires that the evaluations for grantees funded in fiscal year 2004 and beyond account for baseline and follow-up data obtained from the standardized questionnaires. OPA's requirement that grantees use a standardized set of questionnaires, with data from these questionnaires used in evaluations, differs from OPA's previous requirements. Previously, grantees of the AFL Program were not required to use a standardized method for collecting data that could be used to assess the effectiveness of their programs; instead, grantees chose their own data collection instruments. As a result, an OPA official explained, the collected data varied from one project to another. OPA officials said that the agency developed the standardized questionnaire to ensure uniformity in the data collected and allow the agency to more effectively aggregate the data reported in evaluations of individual abstinence-until-marriage education programs. OPA officials told us that they plan to aggregate information from certain questions in the standardized set of questionnaires in order to report on certain performance measures as part of the agency's annual performance reports. The measures include the extent of parental involvement in adolescents' lives and the extent to which adolescents understand the benefits of abstinence.
An agency official stated that the agency expects to begin receiving data from grantees that are using these questionnaires in January 2007. OPA did not previously have long-term measures of the performance of the AFL Program. Its current measures were developed in collaboration with the Office of Management and Budget (OMB) in response to an OMB review in 2004 that found that the AFL Program did not have any annual performance measures for measuring progress toward long-term goals. In addition to requiring their grantees to report on outcomes used to assess program effectiveness, both ACF and OPA have provided technical assistance and training to their grantees in order to support grantees' own program evaluation efforts. For example, in November 2005 the two agencies sponsored an evaluation conference for abstinence-until-marriage grantees that included presentations about evaluations and their methodology. Similarly, ACF's Office of Planning, Research, and Evaluation sponsors annual evaluation conferences, and an ACF official told us that a recent conference placed "a significant emphasis" on the evaluation of abstinence-until-marriage education programs. In addition, HHS officials told us that ACF, along with ASPE, is funding a multiyear project that is designed to identify gaps in abstinence education evaluation and technical assistance needs, develop materials on abstinence education evaluation, deliver technical assistance and capacity-building activities related to program evaluation, and develop research reports related to abstinence education. OPA officials also told us that they attempt to help ensure grantees' progress and effectiveness by offering various technical assistance workshops and conferences. For example, in May 2006 OPA provided a 2-day training conference to its grantees on the importance of program evaluations and administering evaluation instruments. In addition, OPA officials stated that the agency contracts with evaluation consultants, who review grantees' evaluation tools and activities. OPA officials explained that these consultants provide in-depth technical assistance to grantees on how to improve grantees' evaluations. Requiring outcome reporting from state and community-based grantees is not ACF's only effort to assess the effectiveness of its two programs. ACF also analyzes trends in adolescent behavior, as reflected in national data on birth rates among teens and the proportion of surveyed high school students reporting that they have had sexual intercourse. ACF uses these national data as a measure of the overall effectiveness of its State and Community-Based Programs, comparing the national data to program targets. In its annual performance reports, the agency summarizes the progress being made toward lowering the rate of births to unmarried teenage girls and the proportion of students (grades 9-12) who report having ever had sexual intercourse. ACF's use of national data to assess the effectiveness of the State and Community-Based Programs represents a change from how HRSA assessed the overall effectiveness of these programs. Whereas ACF compares national data on adolescent behavior to program targets, HRSA aggregated data from its state and community-based grantees. HRSA's state grantees were allowed to select the data sources used to gauge their progress against certain performance measures.
For example, in its annual performance reports on the State Program, HRSA reported information on the percentage of its state grantees meeting target rates for reducing the proportion of adolescents who have engaged in sexual intercourse, the incidence of youths aged 15 to 19 who have contracted selected STDs, and the rate of births among youths aged 15 to 17. To determine their progress in meeting their target rates, some state grantees, for example, reported national data from the Youth Risk Behavior Surveillance System, while other grantees reported state-collected data. After ACF assumed responsibility for the State and Community-Based Programs from HRSA, ACF began using national data on adolescent behavior as a measure of the programs' effectiveness. According to ACF officials, the agency changed how it assessed its programs out of concern over the quality of the data state grantees were using in their performance reporting and because the agency wanted to use parallel measures of effectiveness for both programs. For example, according to state performance reports for fiscal year 2001 that we reviewed, two reports did not include adolescent pregnancy rates that year because the states did not collect data on abortions among this population. In addition, ACF officials told us that they decided not to use national data on STDs as a measure of program effectiveness because the goal of reducing STD rates is not as central to the State and Community-Based Programs as reducing sexual activity and birth rates among teens. However, one official stated that reducing STDs is an important "by-product" of the programs. Some states have made additional efforts to assess the effectiveness of abstinence-until-marriage education programs, although ACF does not require them to do so. Specifically, we found that 6 of the 10 states in our review that receive funding through ACF's State Program have made efforts to conduct evaluations of selected abstinence-until-marriage programs in their state. All 6 of the states worked with third-party evaluators, such as university researchers or private research firms, to perform the evaluations, which in general measure self-reported changes in program participants' behavior and attitudes related to sex and abstinence as indicators of program effectiveness. To obtain this information, the third-party evaluators have typically relied on surveys administered to program participants at the start of a program, its conclusion, and during a follow-up period anywhere from 3 months to almost 3 years after the conclusion. The third-party evaluations for 4 of the 6 states in our review had been completed as of February 2006, and the results of these studies have varied. Among the 4 states that have completed third-party evaluations, 3 states require the abstinence programs in their state to measure reported changes in participants' behavior as an indicator of program effectiveness—both at the start of the program and after its completion. The 3 states require their programs to track participants' reported incidence of sexual intercourse. In addition, 2 states require their programs to track biological outcomes, such as pregnancies, births, or STDs. In addition, 6 of the 10 states in our review require their programs to track participants' attitudes about abstinence and sex, such as the number of participants who make pledges to remain abstinent. Some states also provide technical assistance to the abstinence-until-marriage programs they support in their state.
This assistance is designed to help programs evaluate and improve their effectiveness. Officials from 5 of the 10 states in our review either told us or provided documentation that they provide technical assistance on evaluations to abstinence programs in their state. One state official said that the abstinence-until-marriage programs supported by the state were found to be ill-prepared to conduct evaluations themselves, and that she now requires these programs to dedicate a portion of their grants to contract with a third-party or state evaluator to assist them in program-level evaluations. Officials from another state told us that they contract with a private organization of public health professionals in order to provide evaluation consultation and technical assistance for the abstinence-until-marriage programs the state supports. In addition to ACF and OPA, other organizational units within HHS have made efforts to assess the effectiveness of abstinence-until-marriage education programs. ASPE is currently sponsoring a study of the Community-Based Program and a study of the State Program. For the former program, ASPE has contracted with Abt Associates to help design the study, and an ASPE official told us that once the agency selects an appropriate design, it will competitively award a contract to conduct the study. For the latter program, ASPE has contracted with Mathematica Policy Research, Inc. (Mathematica), which is in the process of examining the impact of five programs funded through the State Program on participants' attitudes and behaviors related to abstinence and sex. As of August 2006, Mathematica has published two reports on findings from its study—an interim report documenting the experiences of schools and communities that receive abstinence-until-marriage education funding, and a report on the first-year impacts of selected state abstinence-until-marriage education programs. Mathematica's final report, which has not yet been completed, will examine the impact of the State Program on behavioral outcomes, including abstinence, sexual activity, risk of STDs, risk of pregnancy, and drug and alcohol use. An ASPE official told us that the agency expects a final report to be published in 2007. Like ASPE, CDC has made its own effort to assess the effectiveness of abstinence-until-marriage education. CDC is sponsoring a study to evaluate the effectiveness of two middle school curricula—one that complies with abstinence education program requirements and one that teaches a combination of abstinence and contraceptive information and skills. In CDC's study, five middle schools chosen at random will receive a program consisting of abstinence-until-marriage education exclusively; five schools will receive comprehensive sex education, which also includes information on contraception; and five schools will be assigned to a control group. The study will examine the relative effectiveness of the programs on behavioral outcomes such as reported sexual risk behaviors and changes in attitudes related to abstinence and sex. CDC plans to recruit approximately 1,500 seventh grade students into its study and will follow them over a 2-year period. The agency expects to complete the study in 2009. NIH has funded studies comparing the effectiveness of education programs that focus only on abstinence with the effectiveness of sex education programs that teach both abstinence and information about contraception.
As of August 2006, NIH is funding five studies, which in general are comparing the effects of these two types of programs on the sexual behavior and related attitudes among groups of either middle school or high school students. For example, in one NIH study, researchers are using groups of seventh and eighth grade adolescents to assess the impact of a variety of programs on, among other issues, adolescents' reported sexual activities, knowledge, and beliefs. For this study, researchers are comparing these outcomes among students who received abstinence-until-marriage education; students who received a combination of abstinence and contraceptive education; and students who participated in a general health class, who serve as a comparison group. NIH expects both this study and its other four studies to be completed in 2006. In addition to the efforts of researchers working on behalf of HHS and states, other researchers—such as those affiliated with universities and various advocacy groups—have made efforts to study the effectiveness of abstinence-until-marriage education programs. This work includes studies of the outcomes of individual programs and reviews of other studies on the effectiveness of individual abstinence-until-marriage education programs. In general, research studies on the effectiveness of individual abstinence-until-marriage education programs have examined the extent to which they changed participants' demonstrated knowledge, declared intentions, and reported behavior related to sexual activity and abstinence. For example, some studies examined the impact of abstinence-until-marriage education programs on participants' knowledge of concepts taught in the programs, as well as participants' declared attitudes about abstinence and teen sex. Some studies examined the impact of these programs on such outcomes as participants' declared commitment to abstain from sex until marriage, participants' understanding of the potential consequences of having intercourse, and participants' reported ability to resist pressures to engage in sexual activity. Some of the studies we reviewed examined the impact of abstinence-until-marriage programs on participants' sexual behavior, as measured, for example, by the proportion of participants who reported having had sexual intercourse and the frequency of sexual intercourse reported by participants. In general, the efforts to study and build a body of research on the effectiveness of most abstinence education programs have been under way for only a few years, in part because grants under the two programs that account for the largest portion of federal spending on abstinence education—the State Program and the Community-Based Program—were not awarded until 1998 and 2001, respectively. Most of the efforts of HHS, states, and other researchers to evaluate the effectiveness of abstinence-until-marriage education programs included in our review have not met certain minimum criteria that experts have concluded are necessary in order for assessments of program effectiveness to be scientifically valid. For example, most of the efforts included in our review did not include experimental or quasi-experimental designs, nor did they measure behavioral or biological outcomes. In addition, the results of some assessment efforts that meet the criteria of a scientifically valid assessment have varied, and two key studies that meet these criteria have not yet been completed.
In an effort to better assess the merits of the studies that have been conducted on the effectiveness of sexual health programs—including abstinence-until-marriage education programs—scientific experts have developed criteria that can be used to gauge the scientific rigor of these evaluations. For example, in 2001, the National Campaign to Prevent Teen Pregnancy—an organization focused on reducing teen pregnancy—published a report by a panel of scientific experts that assessed the evidence reported on abstinence-until-marriage education programs in peer-reviewed journals and other literature. The panel developed criteria that an evaluation of a program’s effectiveness must meet in order for the program’s results to be considered scientifically valid. In addition, in 2004, former U.S. Surgeon General David Satcher convened a panel of experts to discuss, among other things, best practices for evaluating the effectiveness of sexual health education programs—including abstinence-until-marriage education programs. This panel published a report in 2006 that describes similar scientific criteria that assessments of program effectiveness need to meet in order for their results to be scientifically valid. Further, experts we interviewed agreed that these criteria are important for ensuring that the results of a study support valid conclusions. In general, these panels, as well as the experts we interviewed, agreed that scientifically valid studies of a program’s effectiveness should include the following characteristics. First, studies should use an experimental design that randomly assigns individuals or schools to either an intervention group or control group, or a quasi-experimental design that uses nonrandomly assigned but well-matched comparison groups. According to the panel of scientific experts convened by the National Campaign to Prevent Teen Pregnancy, experimental designs or quasi-experimental designs with well-matched comparison groups have at least three important strengths that are typically not found in other studies, such as those that use aggregated data: they evaluate specific programs with known characteristics, they can clearly distinguish between participants who did and did not receive an intervention, and they control for other factors that may affect study outcomes. Therefore, experimental and quasi-experimental study designs have a greater ability to assess the causal impact of specific programs than other types of studies. According to scientific experts, studies that include experimental or quasi-experimental designs should also collect follow-up data for a minimum number of months after subjects receive an intervention. Experts reported that follow-up periods are important in order to identify the effects of a program that are not immediately apparent or to determine whether these effects diminish over time. In addition, experts have reported that studies should have a sample size of at least 100 individuals for study results to be considered scientifically valid. Second, studies should assess or measure changes in biological outcomes or reported behaviors instead of attitudes or intentions. According to scientific experts, biological outcomes—such as pregnancy rates, birth rates, and STD rates—and reported behaviors—such as reported initiation and frequency of sexual activity—are better measures of the effectiveness of abstinence-until-marriage programs, because adolescent attitudes and intentions may or may not be indicative of actual behavior.
For example, adolescents may report that they intend to abstain from sexual intercourse but may not actually do so. Many of the efforts by HHS, states, and other researchers that we identified in our review lack at least one of the characteristics of a scientifically valid study of program effectiveness. That is, most of the efforts to assess the effectiveness of these programs have not used experimental or quasi-experimental designs with sufficient follow-up periods and sample sizes to make their conclusions scientifically valid. For example, ACF—and before it, HRSA—used, according to ACF officials, grantee reporting on outcomes in order to monitor grantees’ performance, target training and technical assistance, and help grantees improve service delivery. However, because the outcomes reported by grantees have not been produced through experimentally or quasi-experimentally designed studies, such information cannot be causally attributed to any particular abstinence-until-marriage education program. While ACF requires its fiscal year 2006 grantees of the Community-Based Program to contract with third-party evaluators to select and monitor outcomes for their programs, ACF is not specifically requiring these grantees to use experimental or quasi-experimental designs. Therefore, it is not clear whether these evaluations will include such designs. Similarly, ACF’s use of national data on adolescent behavior and birth rates to assess its State and Community-Based Programs is of limited value because these data do not distinguish between those who participated in abstinence-until-marriage education programs and those who did not. Consequently, these national data sets, which represent state-reported vital statistics and a nationwide survey of high school students, cannot be used to causally link declines in birth rates and adolescent sexual activity to the effects of specific abstinence-until-marriage education programs. Similarly, the efforts we identified by states and researchers to assess the effectiveness of abstinence-until-marriage education programs often did not include experimental or quasi-experimental designs. None of the state evaluations we reviewed that have been completed included randomly assigned control groups. For instance, one state evaluation that we reviewed only included students who volunteered to participate in the study. This evaluation report stated that the absence of a randomly assigned control group in the evaluation did not allow the evaluators to determine whether observed changes in participants’ reported sexual behavior—as indicated through surveys administered at the beginning and end of a program—could be attributed to the abstinence-until-marriage education program. Similarly, some of the journal articles that we reviewed described studies to assess the effectiveness of abstinence-until-marriage programs that did not include experimental or quasi-experimental designs needed to support scientifically valid conclusions about the programs’ effectiveness. In these studies, researchers administered questionnaires to study participants before and after they completed an abstinence-until-marriage education program and assessed the extent to which the responses of participants changed. These studies did not compare the responses of study participants with a group that did not participate in an abstinence-until-marriage education program.
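To illustrate the design distinction the experts describe, the sketch below shows, in deliberately simplified form, how an experimental design randomly assigns units (here, schools) to an intervention group or a control group before a program begins. The school names, group sizes, and use of Python are our own illustrative assumptions and do not come from any study discussed in this report.

```python
import random

# Hypothetical roster of participating schools; names are illustrative only.
schools = [f"School {i}" for i in range(1, 11)]

def randomize(units, seed=42):
    """Randomly split units into intervention and control groups.

    Because chance alone determines each unit's group, preexisting
    differences tend to balance out across groups on average, which is
    what allows later differences in outcomes to be attributed to the
    program rather than to the groups themselves.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = units[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

intervention, control = randomize(schools)
print("Intervention group:", intervention)
print("Control group:     ", control)
```

A pre- and post-program survey of volunteers, by contrast, has no comparison group at all, which is why such designs cannot support causal conclusions.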
In addition, some of the studies used insufficient follow-up periods, thereby limiting the conclusions that can be drawn about the effectiveness of the abstinence-until-marriage education programs being studied. For example, two journal articles that we reviewed described studies that measured the effectiveness of abstinence-until-marriage programs in delaying the initiation of sexual activity from 1 to 2 months after completion of the program. Scientific experts consider this follow-up period too short to support valid conclusions about the programs’ effects. According to scientific experts, HHS, states, and other researchers face a number of challenges in designing experimental or quasi-experimental studies of program effectiveness. According to these experts, experimental or quasi-experimental studies can be expensive and time-consuming to carry out, and many grantees of abstinence-until-marriage education programs have insufficient time and funding to support these types of studies. Moreover, it can be difficult for researchers assessing abstinence-until-marriage education programs to convince school districts to participate in randomized intervention and control groups, in part because of sensitivities to surveying attitudes, intentions, and behaviors related to abstinence and sex. For example, in a third-party evaluation of its program, one grantee of the State Program originally planned to administer follow-up surveys 1 year after participants finished their abstinence education program, but the evaluators decided not to conduct this follow-up because of confidentiality concerns and the difficulty of locating students. In addition, the contractors hired to design ASPE’s study of the effectiveness of the Community-Based Program have reported difficulties finding school districts that are willing to participate in randomly assigned intervention and control groups receiving either abstinence-until-marriage education or comprehensive sex education. An ASPE official told us that although a “randomized approach” is the best design for assessing the effectiveness of a program, the approach is also the most difficult to conduct. Another factor that limits the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs is that most efforts in our review to study the effectiveness of these programs did not measure changes in behavioral or biological outcomes among participants. Instead, most of the efforts we identified in our review used reported intentions and attitudes in order to assess the effectiveness of abstinence-until-marriage programs. For example, neither ACF’s community-based grantees nor OPA’s AFL grantees are required to report on behavioral or biological outcomes, such as rates of intercourse or pregnancy. Similarly, the journal articles we reviewed were more likely to use reported attitudes and intentions—such as study participants’ reported attitudes about premarital sexual activity or their reported intentions to remain abstinent until marriage—rather than their reported behaviors or biological outcomes to assess the effectiveness of abstinence-until-marriage programs. For example, in one journal article we reviewed, participants were asked to rate the likelihood that they would have sexual intercourse as unmarried teenagers; another journal article described a study in which participants rated the likelihood that they would have sexual intercourse in the next year, before finishing high school, and before marriage.
Experts, as well as state and HHS officials, have reported that it can be difficult to obtain scientifically valid information on biological outcomes and sexual behaviors. Specifically, experts have reported that when measuring an abstinence-until-marriage education program’s effect on biological outcomes—such as reducing pregnancy or birth rates—it is necessary to have large sample sizes in order to determine whether a small change in biological outcomes is the result of the abstinence-until-marriage education program. In addition, state and federal officials told us that they have experienced difficulties obtaining information on sexual behaviors because of the sensitive nature of the information they were trying to collect. For example, one state official told us that her state’s effort to evaluate abstinence-until-marriage education programs was only able to measure changes in participants’ reported attitudes, instead of behaviors, because the evaluators needed to obtain consent from the parents of the program participants in order to ask them about their sexual behavior. The state official explained that the requirement to obtain consent from parents raised issues of self-selection, and therefore state officials ultimately decided to halt the study and report only on the attitudes that they had measured. In another example, ACF’s fiscal year 2006 budget justification reports that ACF has had some difficulty in obtaining reliable data from state grantees, in part because questions about teenage sexual behavior are sensitive. OPA officials also acknowledged that many communities will not allow grantees to ask program participants questions about their sexual behavior because the communities believe such questions are too intrusive. One OPA official said that such restrictions affect the agency’s ability to measure behavioral outcomes, explaining that OPA cannot measure what it cannot ask about. Among the assessment efforts we identified are some studies that meet the criteria of a scientifically valid effectiveness study. However, results of these studies have varied, and this limits the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Some researchers have reported that abstinence-until-marriage education programs have resulted in adolescents reporting having less frequent sexual intercourse or fewer sexual partners. For example, in one study of middle school students, participants in an abstinence-until-marriage education program who had sexual intercourse during the follow-up period were 50 percent less likely to report having two or more sexual partners when compared with their nonparticipant peers. In contrast, other studies have reported that abstinence-until-marriage education programs did not affect the reported frequency of sexual intercourse or number of sexual partners. For example, one study of middle school students found that participants of an abstinence-until-marriage program were not less likely than nonparticipants at the 1-year follow-up to report less frequent sexual intercourse or fewer sexual partners. In addition to these varied findings, one study found that an abstinence-until-marriage program was effective in delaying the initiation of sexual intercourse in the short term but not the long term.
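The experts’ point about sample size can be made concrete with a standard power calculation for comparing two proportions. The sketch below is our illustration only; the outcome rates are hypothetical, and the normal-approximation formula (two-sided 5 percent significance, 80 percent power) is a common textbook approach rather than one drawn from any study in this report.

```python
import math

def sample_size_two_proportions(p1, p2):
    """Approximate per-group sample size to detect a change from p1 to p2.

    Uses the standard normal-approximation formula for comparing two
    independent proportions. The z-values for a two-sided 5 percent
    significance level and 80 percent power are hard-coded to keep the
    sketch dependency-free.
    """
    z_alpha = 1.96   # two-sided 5 percent significance
    z_beta = 0.8416  # 80 percent power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative rates only: detecting a drop in a rare biological outcome
# from 8 percent to 6 percent requires about 2,554 subjects per group,
# while detecting a drop from 40 percent to 30 percent requires about 356.
print(sample_size_two_proportions(0.08, 0.06))
print(sample_size_two_proportions(0.40, 0.30))
```

The comparison shows why small changes in rare biological outcomes, such as pregnancy or birth rates, demand far larger samples than changes in more common reported behaviors.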
Experts with whom we spoke emphasized that there are still too few scientifically valid studies completed to date that can be used to determine conclusively which, if any, abstinence-until-marriage programs are effective. Additionally, among the assessment efforts we identified are some studies that experts anticipate will meet the criteria of a scientifically valid effectiveness study but are not yet completed. One of these key studies is the final Mathematica report, contracted by ASPE, on the State Program. The final report was originally slated for publication in 2005, but an ASPE official stated that it has been delayed until 2007 so that researchers can extend the follow-up period to improve their response rate and the reliability of the information they collect. Another key study is CDC’s research on middle school programs, which is not expected to be completed until 2009. Experts and federal officials we interviewed stated that they expect the results of these two federally funded studies to add substantively to the body of research on the effectiveness of abstinence-until-marriage education programs. One expert with whom we spoke said that she expects the final Mathematica report on participants’ behaviors to provide the groundwork for the field. Another expert we interviewed stated that the CDC study was very well-designed and she expects the results to contribute to the development of effective abstinence-until-marriage education curricula. There have been various efforts—by HHS, states, and others—to assess the scientific accuracy of educational materials used in abstinence-until-marriage education programs and the effectiveness of these programs. However, efforts to evaluate both the accuracy and effectiveness of abstinence-until-marriage education programs have been, in various ways, limited. ACF, which administers the two programs that account for the largest portion of federal spending on abstinence-until-marriage education, does not review or require its grantees to review program materials for scientific accuracy. In addition, not all grantees of the State Program have chosen to review their materials. Because of these limitations, ACF cannot be assured that the materials used in its State and Community-Based Programs are accurate. Moreover, OPA, which reviews all grantees’ proposed abstinence-until-marriage educational materials, and states that review such materials have found inaccuracies in some of the materials used by abstinence-until-marriage programs. Similarly, most of the efforts described in our review to assess the effectiveness of abstinence-until-marriage programs have not met minimum scientific criteria needed to draw valid conclusions about their effectiveness. Specifically, most efforts by agencies, states, and other researchers have not included experimental or quasi-experimental designs that can establish whether changes in behaviors or biological outcomes can be causally linked to specific abstinence-until-marriage education programs. While these types of studies are time-consuming and expensive, experts said that they are the only definitive way to draw valid conclusions about the effectiveness of these programs. In addition, among the assessment efforts we identified are some studies funded by HHS that experts anticipate will meet the criteria of a scientifically valid effectiveness study but are not yet completed.
When completed, these HHS-funded studies may add substantively to the body of research on the effectiveness of abstinence-until-marriage education programs. To address concerns about the scientific accuracy of materials used in abstinence-until-marriage education programs, we recommend that the Secretary of HHS develop procedures to help assure the accuracy of such materials used in the State and Community-Based Programs. To help provide such assurances, the Secretary could consider alternatives such as (1) extending the approach currently used by OPA to review the scientific accuracy of the factual statements included in abstinence-until-marriage education materials to those used by grantees of ACF’s Community-Based Program and requiring grantees of ACF’s State Program to conduct such reviews or (2) requiring grantees of both programs to sign written assurances in their grant applications that the materials they propose using are accurate. HHS provided written comments on a draft of this report. (See app. III.) In its written comments, HHS stated that it will consider requiring grantees of both ACF programs to sign written assurances in grant applications that the materials they use are accurate. Regarding accuracy, HHS’s written comments also noted that all applicants for federal assistance attest on the application form--Standard Form 424--that all data in their applications are “true and correct,” and that in the view of HHS, this applies to information presented in curricula funded by federal grants. However, as we stated in the draft report, grantees of the State Program are not required to submit curricula as a part of their applications; therefore, the attestation in Standard Form 424 would not apply to curricula used by those grantees. In addition, as stated in the draft report, some states have reviewed materials used in abstinence-until-marriage education programs, but these reviews occurred after they received funding from ACF. Further, while grantees of the Community-Based Program were required to submit copies of their curricula and a Standard Form 424 in fiscal year 2006 as part of their applications, none of the materials specifically require an assurance of scientific accuracy. Further, OPA and states have found inaccuracies in educational materials used by abstinence-until-marriage programs. HHS’s written comments also stated that ACF requires that curricula conform to HHS’s standards grounded in scientific literature. HHS’s comments refer to the curriculum standards for this program that detail what types of information must be included in abstinence-until-marriage curricula, and the comments stated that the curricula must provide supporting references for this information. Further, HHS’s comments stated that ACF staff review the curricula to ensure compliance with these standards. The draft report stated this. However, a requirement that curricula include certain types of information does not necessarily ensure the accuracy of the scientific facts included in the abstinence-until-marriage materials. For example, while education materials may include information on failure rates associated with contraceptives or STD infections, this information may be outdated or otherwise inaccurate or incomplete. HHS’s written comments also stated that if it finds inaccurate statements during the review process or at any time during the grant period, ACF works with grantees to take corrective action. To ensure completeness, we have added this statement to the report.
Further, HHS stated that two inaccuracies cited in the draft report had been corrected before our work began. We believe HHS is referring to inaccuracies identified by OPA during its review of materials for scientific accuracy, and this reinforces the need for reviews of materials used by ACF’s grantees. As HHS noted in its written comments, we did not define the term “scientific accuracy.” HHS stated that it disagreed with certain findings of the report because it was difficult to precisely determine the criteria we employed in making the recommendation as to scientific accuracy. As we stated in the scope and methodology section of the draft report, the objective of our work was to focus on efforts by HHS and states to review the accuracy of scientific facts included in abstinence-until-marriage education materials. Performing an independent assessment of the criteria used by these entities to determine the scientific accuracy of education materials or the quality of the reviews was beyond the scope of the work. Regarding effectiveness, HHS’s written comments also described a number of actions it is taking to determine program effectiveness and improve the quality of programs and research. Specifically, HHS’s comments described (1) studies undertaken or funded by ASPE, CDC, and NIH; (2) technical assistance provided by OPA and ACF; (3) grantee evaluation requirements; and (4) ACF and OPA requirements for the amount of grant funds to be spent on evaluations. All of this information was included in our draft report. HHS’s comments also described a new effort funded by ACF and ASPE that is designed to build capacity for quality research in the field of abstinence education. We added information on this effort to the report. HHS’s written comments also described evaluations that resulted from an Abstinence Education Evaluation Conference sponsored by ACF and OPA. While this conference was described in the draft report, we added more detail regarding the content of the conference. HHS’s written comments also described OPA’s efforts to assess the effectiveness of the AFL Program. We had included this information in the draft report. HHS’s written comments stated that it may be too soon to draw conclusions about the effectiveness of ACF’s and OPA’s programs, in part because key studies have not been completed. We agree and discussed this in the draft report. As we noted in the draft report, key studies funded by HHS that experts anticipate will meet the criteria of a scientifically valid effectiveness study are not yet completed, but when completed these HHS-funded studies may add substantively to the body of research on the effectiveness of abstinence-until-marriage education programs. In addition, the comments stated that having an inadequate number of scientifically valid and conclusive evaluation studies is not unique to abstinence-until-marriage education programs, and a recent ASPE review of comprehensive sex education programs found mixed results on their effectiveness. However, the scope of our report was focused on abstinence-until-marriage education programs, and we did not review comprehensive sex education programs or make any comparisons between the two types of programs. HHS also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date.
We will then send copies of this report to the Secretary of HHS and to other interested parties. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-3407 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The Health Resources and Services Administration (HRSA) awarded a contract to the National Abstinence Clearinghouse (NAC) in 2002 to provide assistance with its Community-Based Abstinence Education Program (Community-Based Program) and Abstinence Education Program (State Program). NAC is a nonprofit educational organization whose mission is to promote the appreciation for and practice of sexual abstinence until marriage through the distribution of age-appropriate, factual, and medically referenced materials. The purpose of the contract was (1) to develop national criteria for the review of abstinence-until-marriage educational materials and to create a directory of approved materials; (2) to provide medical accuracy training to grantees; and (3) to provide technical support to grantees, such as assistance with program evaluation. We are reporting on the steps that HRSA took to award the contract to NAC in response to concerns that have been raised by a congressional requester. In general, these concerns centered on the extent to which the selection process was competitive and whether HRSA identified the potential for an organizational conflict of interest. HRSA awarded the contract to address three concerns it had with the Community-Based Program during 2001, the first year of its implementation. First, HRSA officials needed guidance to determine whether abstinence-until-marriage education materials conformed to the definitional requirements of the Social Security Act. Second, many grantees lacked the medical background and training to ensure that they would provide medically accurate, science-based information in their programs. Third, grantees also lacked experience with the technical management of federal grants, including how to conduct evaluations of their programs. HRSA used full and open competition procedures to award the contract to NAC. In doing so, HRSA (1) publicly solicited proposals from potential contractors; (2) conducted technical evaluations of both the original proposals and the revised proposals for those considered to be in the competitive range; and (3) determined that NAC’s proposal represented the best overall value to the government. This process, which took place from May 2002 through September 2002, resulted in HRSA awarding NAC the contract with a potential value of nearly $2.7 million. HRSA issued a notice on May 20, 2002, on the FedBizOpps Web site, the government point of entry for notifying potential contractors of federal contract opportunities, indicating its intent to publicly request proposals from prospective contractors in June 2002. On June 20, 2002, HRSA posted the solicitation on the FedBizOpps Web site indicating that the abstinence contract would be awarded using full and open competition procedures, that is, all responsible prospective contractors would be provided the opportunity to compete.
The solicitation, which was a Request for Proposals (RFP), described the contract objectives, which included (1) the development of national criteria for the review of abstinence-until-marriage educational materials and the development of a directory of approved materials; (2) the provision of medical accuracy training to grantees; and (3) the provision of technical support to grantees, such as assistance with program evaluation. The RFP stated that HRSA intended to award a cost-reimbursement contract with fixed fee for a 1-year base period and 2 option years. This was a best-value procurement; that is, HRSA reserved the right in the RFP to select for award the proposal that HRSA determined offered the best value to the government, even if it did not offer the lowest cost. Further, the RFP stated that the technical evaluation of the prospective contractors’ proposals would receive paramount consideration in the selection of the contractor. According to the RFP, this evaluation would include an assessment of the prospective contractor’s technical approach, the organizational experience and expertise of the prospective contractor, the plans for personnel and management of the work, and the prospective contractor’s statement and understanding of the project purpose. Other factors, such as the estimated cost, past performance under other contracts for similar services, and the subcontracting plan, would also be considered in the selection process. Five prospective contractors submitted proposals to HRSA by July 31, 2002, when proposals were due. HRSA established a review committee to conduct the technical evaluation of the five proposals. This committee included three voting members and a nonvoting chairperson. The Director of HRSA’s Community-Based and State Programs and two analysts from other programs within the Department of Health and Human Services (HHS) served as the voting members, and the chairperson of the review committee was a project officer of HRSA’s Community-Based Program. The committee members conducted the technical evaluation of the proposals, according to the criteria in the RFP, as described above. Three proposals with the highest technical scores were determined to be in the competitive range, with NAC’s proposal receiving the highest technical score. HRSA requested in writing that the competitive range offerors address certain technical and cost issues and submit revised proposals to HRSA by September 17, 2002. For example, HRSA requested that one of the prospective contractors other than NAC clearly describe its proposed management of day-to-day tasks of the contract and provide justification for several labor and travel expenditures. HRSA did not have oral discussions with the competitive range offerors. HRSA’s review committee evaluated the revised proposals and again gave NAC’s revised proposal the highest technical score. Although NAC’s estimated cost was not the lowest among the proposals in the competitive range, HRSA determined that NAC had proposed a realistic cost estimate for the contract. Accordingly, and in light of the NAC proposal’s high technical rating and the RFP’s evaluation criteria giving paramount consideration to the technical evaluation, HRSA determined that NAC’s proposal represented the best value to the government. HRSA awarded a contract to NAC on September 27, 2002. The contract had a 1-year base period of performance with an estimated value of $854,681, and included 2 option years for a total potential value of $2,673,784.
According to an HRSA official, this cost-reimbursement contract did not include a fee. All of the prospective contractors were made aware that a debriefing to explain the selection decision and contract award would be provided at their request. One prospective contractor requested and received a debriefing from HRSA. No protests were filed with the agency challenging the award of the contract to NAC. There were no bid protests filed with GAO. HRSA officials told us that they did not identify any actual or potential organizational conflicts of interest during the acquisition process. As defined in the Federal Acquisition Regulation (FAR), an organizational conflict of interest arises where, because of other activities or relationships, a person is unable or potentially unable to provide impartial assistance or advice to the government; or the person’s objectivity in performing the contract work is or might be otherwise impaired; or a person has an unfair competitive advantage. An organizational conflict of interest may result when factors create an actual or potential conflict of interest during performance of a contract, or when the nature of the work to be performed under one contract creates an actual or potential conflict of interest involving a future acquisition. Under the FAR, contracting officers are required to analyze planned acquisitions to identify and evaluate potential organizational conflicts of interest as early in the acquisition process as possible, and to take steps to avoid, neutralize, or mitigate significant potential conflicts of interest before a contract is awarded. According to HRSA’s contracting officer, HRSA did not identify any actual or potential organizational conflicts of interest. In reaching this conclusion, the contracting officer told us that he reviewed the statement of work, including the background and objectives of the proposed contract, the stated purpose of the contract, the criteria established to evaluate the proposals, the past performance of the competitors, and NAC’s proposal. HRSA’s contracting officer also told us that he did not formally document his assessment of organizational conflict of interest. To identify research studies that examine the effectiveness of abstinence-until-marriage education programs among adolescents and young adults, we searched two reference database systems, PubMed and ProQuest. We used the following keywords to search for research studies that were published from January 1, 1998, through May 22, 2006: “virginity,” “abstinence education,” “abstinence and curriculum,” “abstinence only,” “teen pregnancy and prevention,” and “abstinence until marriage.” We reviewed the research article titles that were generated from the PubMed and ProQuest searches and identified articles that appeared to focus on the evaluation of the effectiveness of abstinence-until-marriage education programs. In cases where we could not determine, based on the title, whether a study appeared to focus on an abstinence-until-marriage education program evaluation, we reviewed a summary of the article to obtain more information about the research study. We also examined previous summaries of the literature to identify additional research studies. We then selected research studies for inclusion in our literature review if they met three criteria. First, the study evaluated a group-based, abstinence-until-marriage education program.
We did not select studies that evaluated one-on-one interactions, such as education programs focused exclusively on parent-child interactions, or that evaluated media campaigns. We reviewed the description of each education program and curriculum, as described in the study, to determine whether an abstinence-until-marriage education program was being evaluated. Education programs that were described as including detailed contraceptive information in their curricula, for example, were not classified as abstinence-until-marriage programs. Second, the study targeted adolescents and young adults in the United States, for example, by indicating that participants in the evaluation were high school or middle school students. Third, the study was a quantitative rather than a qualitative evaluation of an abstinence-until-marriage education program. We selected 13 research studies for inclusion in our literature review. We reviewed the selected research studies to obtain detailed information about their methodologies and outcome variables. For example, we determined whether each study used an experimental or quasi-experimental design and whether the outcome measures included attitudes, behavioral intentions, behaviors such as initiation of sexual intercourse, or a combination of these. In addition to the contact named above, Kristi Peterson, Assistant Director; Kelly DeMots; Pam Dooley; Krister Friday; Julian Klazkin; and Amy Shefrin made key contributions to this report.
Reducing the incidence of sexually transmitted diseases and unintended pregnancies is one objective of the Department of Health and Human Services (HHS). HHS provides funding to states and organizations that provide abstinence-until-marriage education as one approach to address this objective. GAO was asked to describe the oversight of federally funded abstinence-until-marriage education programs. GAO is reporting on (1) efforts by HHS and states to assess the scientific accuracy of materials used in these programs and (2) efforts by HHS, states, and researchers to assess the effectiveness of these programs. GAO reviewed documents and interviewed HHS officials in the Administration for Children and Families (ACF) and the Office of Population Affairs (OPA) that award grants for these programs. Efforts by HHS and states to assess the scientific accuracy of materials used in abstinence-until-marriage education programs have been limited. This is because HHS's ACF--which awards grants to two programs that account for the largest portion of federal spending on abstinence-until-marriage education--does not review its grantees' education materials for scientific accuracy and does not require grantees of either program to review their own materials for scientific accuracy. In contrast, OPA does review the scientific accuracy of grantees' proposed educational materials. In addition, not all states that receive funding from ACF have chosen to review their program materials for scientific accuracy. In particular, 5 of the 10 states that GAO contacted conduct such reviews. Officials from these states reported using a variety of approaches in their reviews. While the extent to which federally funded abstinence-until-marriage education materials are inaccurate is not known, in the course of their reviews OPA and some states reported that they have found inaccuracies in abstinence-until-marriage education materials. For example, one state official described an instance in which abstinence-until-marriage materials incorrectly suggested that HIV can pass through condoms because the latex used in condoms is porous. HHS, states, and researchers have made a variety of efforts to assess the effectiveness of abstinence-until-marriage education programs; however, a number of factors limit the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. ACF and OPA have required their grantees to report on various outcomes that the agencies use to measure the effectiveness of grantees' abstinence-until-marriage education programs. In addition, 6 of the 10 states in GAO's review have worked with third-party evaluators to assess the effectiveness of abstinence-until-marriage education programs in their states. Several factors, however, limit the conclusions that can be drawn about the effectiveness of abstinence-until-marriage education programs. Most of the efforts to evaluate the effectiveness of abstinence-until-marriage education programs included in GAO's review have not met certain minimum scientific criteria--such as random assignment of participants and sufficient follow-up periods and sample sizes--that experts have concluded are necessary in order for assessments of program effectiveness to be scientifically valid, in part because such designs can be expensive and time-consuming to carry out. 
In addition, the results of efforts that meet the criteria of a scientifically valid assessment have varied, and two key studies funded by HHS that meet these criteria have not yet been completed. When completed, these HHS-funded studies may add substantively to the body of research on the effectiveness of abstinence-until-marriage education programs.
DOD defines category I items as those that are highly explosive, extremely lethal, portable, and a potential threat if they were to be used by unauthorized individuals or groups. Category I missiles and rockets are nonnuclear and handheld. The missiles are the Stinger, Dragon, and Javelin; the rockets are the light antitank weapon (LAW) and the AT4. The Stinger can destroy aircraft in flight, and the Dragon and Javelin missiles and the LAW and AT4 rockets can pierce armor. Category II munitions and explosives are hand or rifle grenades, antitank or antipersonnel mines, C-4 explosives, TNT, and dynamite. See appendix I for pictures of the category I missiles and rockets. In September 1994, we reported that many serious discrepancies in the quantities, locations, and serial numbers of handheld category I missiles indicated inadequate management oversight for these lethal weapons. Further, we reported that the services did not know how many handheld missiles they had in their possession because they did not have systems to track by serial numbers the missiles produced, fired, destroyed, sold, and transferred. At that time, we could not determine the extent to which any missiles were missing from inventory. We also stated that security measures were not uniformly applied at all locations where missiles were stored. Our report contained several recommendations to the Secretary of Defense to correct these problems. In addition, the Army Inspector General conducted two follow-up studies and found similar problems. DOD has taken actions to correct the deficiencies cited in our September 1994 report. In that report, we recommended that DOD conduct independent worldwide inventories of category I missiles to establish a new baseline number. DOD established the new baseline number as of December 31, 1994, as shown in table 1. The Army, the Navy, and the Marine Corps are the primary purchasers of category I missiles; consequently, our review and the prior report focused on their inventories. Our prior report also recommended that DOD establish procedures to track, document, and report additions to and deletions from the new inventory baseline. Since that time, the Army has begun modifying its automated system—the Standard Army Ammunition System—to report changes to the inventories of Stinger, Dragon, and Javelin missiles by serial number. The modification to the system is designed to provide item managers at all Army commands with 24- to 72-hour notification of changes to the inventory. In the interim, the Army has implemented manual reporting procedures to track handheld missiles on a monthly basis. This temporary system has a 30- to 45-day time lag in reporting changes to the missile inventory. The Navy and the Marine Corps have also implemented automated systems to track category I missiles. The Navy’s automated system is intended to provide information within 24 to 48 hours on where a given missile is located, and the Marine Corps’ system is intended to provide such information within 24 hours. In addition, our prior report recommended that DOD establish procedures to include a random sampling of missile containers during inventories to ensure that they contain missiles. The services have since established procedures to verify the presence of missiles inside their containers during maintenance checks. Finally, our report recommended that DOD reemphasize security procedures and reexamine the current security policy. 
In response, the services reemphasized physical security regulations for all category I munitions. Although the services established a baseline inventory count of category I missiles as of December 31, 1994, updates to the baseline continue to be made as additional missiles are located or errors are discovered. Discrepancies existed at some sites between records of the number of category I missiles in their inventories and our physical count, but we were able to reconcile the discrepancies manually. Even though missile containers are being opened and serial numbers are being verified, random checks are not being performed because the services stated that they would be too costly. Also, DOD has not fully complied with physical security regulations at all of its sites. Army officials stated that, because of prior reporting weaknesses involving the handheld missile inventory, they cannot fully assure that the category I missile baseline is completely accurate. The baseline has had to be updated several times since its establishment because additional missiles were located. In February 1996, the Army discovered it had not counted 3,949 missiles during the initial inventory, which increased its baseline by almost 7 percent. Some of the missiles had been in transit and were not counted by either the shipping or receiving parties. Other missiles were being used by the Signal Communications Electronics Command in Fort Monmouth, New Jersey, for test purposes but were not included in the initial baseline inventory. A Stinger missile had been at a storage facility in Kuwait since September 1992. Pakistanis discovered the missile during post-Desert Storm cleanup operations, and Kuwait did not return it to the United States until April 1996. However, the Army had previously reported that 6,373 Stinger missiles were shipped to and subsequently sent back from the Persian Gulf. Thus, the Army did not realize that this missile had been missing from inventory until after it was discovered. Also, errors in the initial inventory count have affected the baseline. For example, two missiles on the Army item manager’s contractor database actually belonged to another country through the Foreign Military Sales program. These missiles, which were included in the baseline number, were at the contractor’s facility for repair. At the time of our visit, one of the missiles was still at the facility, and the other had been fixed and returned. The item manager stated that the contractor was not reporting to her the number of missiles received, completed, and returned. However, as a result of our finding, the contract has been modified to provide the item manager a monthly report of the missiles received at the contractor’s facility and the missiles transferred from the contractor’s facility to a DOD facility. In our September 1994 report, we noted that records of the number of category I missiles in some sites’ inventories did not match our physical count. This problem still exists at the Army and Marine Corps sites we visited, but we were able to reconcile the discrepancies manually. At a Navy storage site, we found no discrepancies between the item manager’s records and our physical count. At the Army military storage location we visited, we found discrepancies between the item manager’s records and the missiles we counted at the storage facility. All of the missiles that were on the item manager’s records, but not at the storage location, had been issued to units for training.
We used the Army’s monthly interim reports to reconcile the discrepancies. We verified that these missiles had in fact been expended during training exercises. The item manager still had the missiles on the records because of the lag time in receiving the interim reports. We also found five discrepancies with our missile count at a Marine Corps site that we visited. All of the discrepancies involved the serial numbers. One missile was not on the item manager’s records because the wrong serial number was keyed into the system. Two missiles were upgraded and their serial numbers changed; the new serial numbers, however, were not yet changed on the database that we used to conduct our reconciliation. Two of the six digits in one missile’s serial number were apparently transposed on the container. Finally, one missile’s correct serial number was in both the depot’s and item manager’s systems, but the wrong number was apparently stenciled on the container. We also found discrepancies at two contractor facilities where both the Stinger and Dragon were being upgraded or modified. Most of the discrepancies were due to the lag between the time we received the database and the time we performed our physical count. Many missiles on the item manager’s records had already been sent to the DOD storage sites by the time we conducted our inventory count. We verified that the DOD storage sites had received the missiles. However, we found four additional missiles at one of the contractor facilities that were not on the item manager’s records. The item manager had recorded that one of the missiles, still at the contractor’s facility, was made non-lethal (demilitarized). Eight additional missiles were also listed as being at that contractor’s facility, but six were actually at another location, and two belonged to other countries, as stated previously, under the Foreign Military Sales program. Finally, we noted a practice during this review, in addition to those that have been previously mentioned, that complicates serial number tracking: giving new serial numbers to missiles that have been upgraded. Stinger missiles that are undergoing a technical upgrade will be given new serial numbers once the upgrade has been completed. According to a Production Assurance and Test Division official, U.S. Army Missile Command, the justification for changing the serial numbers was that the missiles would, in effect, become new missiles, since they would be broken down into major component parts and reassembled with different components. Both the old and new serial numbers would then be cross-referenced. However, a Quality Assurance official, U.S. Army Missile Command, stated that he had opposed changing the serial numbers because it would be harder to track the life cycle of the missiles and that cross-referencing old and new serial numbers would create additional bookkeeping and the potential for transposition and other errors. Instead of changing the serial numbers, the upgraded missiles could be distinguished by adding a suffix to the serial number. Even though the services have established procedures to verify the presence of missiles inside their containers, a representative sample is not always being selected, according to the services, because it would be too costly. For example, an Army official said that during maintenance checks only the missiles that are easy to access in a storage facility are selected to be opened. 
This methodology does not provide complete assurance that missiles are not being stolen because it may not deter insider theft. Moreover, opening a representative sample of missile containers helps to obtain assurance that all reported missiles do exist, are held by the services, and are owned by DOD. This check improves the accuracy of the missile inventory reports for item managers as well as DOD’s financial statements required by the CFO Act. We opened 108 missile containers to verify the presence of the correct missile in each container. Figures 1 and 2 show opened Stinger and Dragon missile containers. All containers had a missile, but the serial number on one container did not match the one on the missile. Neither the item manager nor the site officials could determine the reason for the mismatch. In another instance, a contractor official discovered that a missile going through an upgrade did not have the same serial number as its container. The correct container was at the storage depot, and the missile inside belonged in the container located at the contractor’s facility. Also, according to an Army policy notice, the sample size and the results of missile container checks are to be reported to the item managers. However, we found that Army item managers were not receiving this information. As a result of our finding, the Chief of Staff, Army Materiel Command, issued a memorandum reemphasizing the reporting requirement. Some of the sites we visited were not in full compliance with service or DOD security regulations. Personnel at one Army location were not inspecting all vehicles leaving the storage area. The Army Inspector General’s 1996 report also noted that not all sites were fully enforcing physical security regulations. The Army Inspector General included the National Guard in its follow-up review of handheld missiles. In its report, the Inspector General noted that National Guard sites were storing category I Dragon missiles in violation of DOD and Army physical security policies. Both of these policies permit the National Guard to use the missiles for training purposes only and store them temporarily at Guard installations. However, the Inspector General found that some sites had the Dragon missile in storage for many years. As a result of the Inspector General’s report, the Army National Guard was directed to return the Dragon missiles to the storage sites. Since that time, all missiles have either been returned or used for training. The National Guard requested approval to permanently store Dragon missiles at selected sites. The Army denied this request because some storage sites were not in compliance with its physical security regulations. For example, armed guards were not used to prevent unauthorized access to the storage structures when intrusion detection systems were inoperable. However, if a site can meet physical security regulations, the Army stated that it would reconsider a request to store Dragon missiles at selected sites on a temporary basis only. Contractors are required to follow DOD Manual 5100.76, Physical Security of Sensitive Conventional Arms, Ammunition, and Explosives, for their security guidelines. These regulations are not as stringent as the Army’s physical security regulations. For example, Army regulations require that storage sites be secured with two locks and keys and that no one person have possession of both keys at the same time. DOD regulations permit one lock and key, which allows single individuals access to storage sites.
We noted the following conditions, among others, at one of the contractor facilities we visited: the entrance to the storage area was not locked; no guard was available to check vehicles entering or exiting the storage area; there was no clear zone outside the security fence (this area was cleared, however, after our visit); and one employee had keys to operate the locks to the storage site, security fence gate, and gate to a perimeter road that led to the main road. This employee also had the code for calling in to security to deactivate the intrusion detection system. We observed this employee leave the storage site in a truck, proceed to unlock the perimeter gate, and exit. We believe that allowing one person such access leaves the missiles more vulnerable to theft. After we brought this concern to the attention of the Commander, Army Materiel Command, a memorandum was issued requiring that contracts for activities involving category I munitions include the security requirements of Army Regulation 190-11 and the Army Materiel Command supplement, which require, among other things, that storage sites be secured with two locks and keys. The services have different procedures and requirements for maintaining oversight of AT4 and LAW rockets. The Army and the Navy manage AT4 and LAW rockets by production lot and quantity. The Marine Corps maintains oversight and visibility of AT4 rockets (it does not have any LAW rockets) by serial numbers. Although we found no missing rockets in our physical count, three AT4 rockets that were sent to the Persian Gulf for Operation Desert Storm are missing from the Marine Corps’ inventory. The investigations of these three missing rockets were closed, but the Naval Criminal Investigative Service reached no conclusions on whether the rockets were missing, lost, or stolen. The Marine Corps adjusted its physical inventory to reflect the decrease of the three AT4 rockets. However, the serial numbers will remain within its accounting and reporting system should these rockets be recovered. The Army manages AT4 and LAW rockets by production lot and quantity. However, the Army item manager’s oversight of the AT4 rocket extends only to the quantities that are issued to the various major commands. Each major command then redistributes AT4 rockets to the installations within that command, and oversight for installation inventories is maintained by the major command. The item manager, therefore, does not know the quantities of AT4 rockets at the installation level. The Army is developing a system, called Unique Item Tracking, for all of its category I munitions, including the AT4. This system is intended to provide weekly reports showing the serial number of each munition by location. The purpose of the system is to identify the last accountable location of a weapon in the event that it is lost or stolen and recovered by law enforcement or other organizations. However, the system will not include the LAW rocket, since it is being phased out of the inventory, and most LAWs do not have serial numbers. The Navy also manages AT4 and LAW rockets by production lot and quantity. The Navy item manager does not oversee the rockets by serial number because it is not a requirement. This situation could be problematic if a rocket is missing because the Navy does not have a system in place to identify the missing rocket by serial number. However, some storage locations report AT4 rockets by serial numbers in addition to production lot and quantity.
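Serial-number management of the kind the Marine Corps uses for AT4 rockets, and that the Army’s planned Unique Item Tracking system envisions, makes reconciliation between item managers’ records and physical counts straightforward. The sketch below is a minimal illustration with invented serial numbers and locations; it is not a depiction of any actual service system.

```python
# Hypothetical records; serial numbers and locations are invented for
# illustration and do not correspond to any actual inventory.
item_manager_records = {
    "SN-0001": "Depot A",
    "SN-0002": "Depot A",
    "SN-0003": "Contractor facility",
}
physical_count = {
    "SN-0001": "Depot A",
    "SN-0003": "Depot B",   # location does not match the records
    "SN-0004": "Depot A",   # on hand but not on the records
}

def reconcile(records, count):
    """Compare item-manager records with a physical count by serial number."""
    recorded = set(records)
    counted = set(count)
    return {
        "recorded_but_not_found": sorted(recorded - counted),
        "found_but_not_recorded": sorted(counted - recorded),
        "location_mismatch": sorted(
            sn for sn in recorded & counted if records[sn] != count[sn]
        ),
    }

for issue, serials in reconcile(item_manager_records, physical_count).items():
    print(issue, serials)
```

A check of this kind flags exactly the classes of discrepancy we encountered during our physical counts: items recorded but not on hand, items on hand but unrecorded, and location mismatches. It also shows why changing serial numbers during upgrades complicates tracking, since the key used to match the two sets of records would no longer be stable.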
We conducted a physical count of AT4 and LAW rockets at Army, Navy, and Marine Corps storage sites and were able to match the physical count with the item managers' records. We also opened 89 containers to verify the presence and correct serial number of each rocket. We did not note any violations of physical security regulations at the sites we visited.

Another issue related to accountability over sensitive defense material concerns the financial management system. In accordance with the CFO Act of 1990, each agency is to establish an integrated financial management system. Establishing an integrated, general ledger controller system, which ties together DOD's accounting systems with its logistics and other key management systems, is critical if DOD is to effectively ensure oversight and control over its sensitive materials. For example, an integrated accounting and logistics system will automatically update both sets of records when missiles or other sensitive inventory items are purchased and received. In addition, carrying out rudimentary controls, such as periodically reconciling DOD's accounting and logistics records, will help DOD oversee and identify any unaccounted-for in-transit items. Audit reports have repeatedly pointed out, however, that DOD's existing accounting and related systems, including its logistics systems, are not integrated and lack a general ledger. As part of DOD's efforts to reform its financial operations, the DOD Chief Financial Officer has stated that DOD will develop property accountability systems that will meet the federal government's system requirements. If properly designed and implemented as part of a DOD-wide integrated financial management systems structure called for under the CFO Act, these systems will be integral to ensuring effective accountability over DOD's sensitive inventories of missiles, rockets, and other sensitive material.

We did not find any documentation that terrorists or other extremists had stolen category I handheld missiles or rockets or category II grenades, mines, and explosives from DOD arsenals. Intelligence and DOD officials said that it is more likely that terrorists would seek handheld surface-to-air missiles or other munitions from sources other than DOD arsenals. International terrorist groups receive financial aid and other forms of assistance from several nations. The Secretary of State has determined that these countries have repeatedly provided support for acts of international terrorism by supplying, training, supporting, or providing safe haven to known terrorists. Intelligence officials told us that there are a variety of places around the world for terrorists to obtain weapons. For example, several countries besides the United States, including Bulgaria, China, Egypt, France, Japan, the Czech Republic, Pakistan, Poland, Romania, Sweden, and the United Kingdom, produce handheld surface-to-air missiles. Terrorists tend to favor small conventional weapons—handguns, rifles, grenades, machine guns, or explosives—because they can be easily transported and hidden from view. C-4 plastic explosives can be purchased from several countries. In addition, law enforcement officials told us that extremist groups have made their own C-4. Terrorists have used plastic explosives. For example, less than one pound of Semtex, similar to C-4, was used to bring down Pan Am Flight 103 over Lockerbie, Scotland, in 1988.
There have been thefts of category II munitions and explosives by uniformed and DOD civilian employees that involved quantities of items such as grenades, C-4 explosives, and TNT. We previously reported that military inventories remain more vulnerable to employee theft than to outside intrusion. Table 2 shows the types and quantities of category II items reported missing, lost, or stolen from 1993 to 1996. Some of the weapons were recovered. According to a law enforcement official, DOD could not determine whether any of the unrecovered stolen DOD weapons were in the hands of terrorists or other extremists.

We recognize that DOD has made significant strides in gaining visibility and accountability over its handheld missile inventory. DOD has implemented several recommendations from our prior work and has already taken action to correct some of the problems we cite in this report. We believe, however, that DOD can take some additional actions to further improve physical security and ensure accurate reporting of its inventory of missiles and rockets. Therefore, we recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force and the Commandant of the Marine Corps to:
develop a cost-effective procedure for periodically revalidating the category I inventory baseline by, for example, matching item managers' records with site records annually at a representative sample of storage sites;
develop a cost-effective procedure for opening containers of missiles and rockets, for example, by selecting a representative sample of pallets, rather than individual missiles and rockets, to inspect;
manage category I rockets by serial number so that the item managers will have total visibility over the numbers and locations of rockets;
establish procedures for ensuring that serial numbers are not changed during upgrades and modifications of category I missiles and rockets; and
continue to emphasize compliance with physical security requirements.

In commenting on a draft of this report, DOD concurred with all of our recommendations (see app. II). DOD noted that it had already begun taking action to address several of the recommendations. For example, the services have developed or are developing procedures for revalidating the category I baseline. DOD also plans to issue guidance to manage category I rockets by serial numbers, develop procedures to ensure that serial numbers are not changed during upgrades and modifications of category I missiles and rockets, and continue to emphasize compliance with physical security requirements. DOD concurred with our recommendation to develop a cost-effective procedure to open containers of missiles and rockets. DOD's response also cited various existing regulations, which require that samples selected for inspection be representative of the entire lot under evaluation. We discussed the comments with an official from the Office of the Secretary of Defense and pointed out that during our review we found that this was not always being done. For example, an Army official told us that some inspectors only select and inspect the missiles that are easy to access in a storage facility. Office of the Secretary of Defense officials agreed to issue guidance reinforcing the need to follow these procedures.
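To illustrate the sampling point above: drawing a simple random sample gives every container, accessible or not, the same chance of being inspected. The following Python sketch is purely illustrative and is not drawn from DOD or GAO procedures; the container identifiers and the 5 percent sampling fraction are assumptions.

    import random

    def sample_containers(container_ids, fraction=0.05, seed=None):
        """Draw a simple random sample of containers to open.

        Every container has an equal chance of selection, so the sample
        is representative of the whole lot rather than limited to the
        containers that are easiest to access.
        """
        rng = random.Random(seed)
        sample_size = max(1, round(len(container_ids) * fraction))
        return rng.sample(container_ids, sample_size)

    # Hypothetical lot of 2,000 containers; open about 5 percent of them.
    lot = [f"CONT-{i:05d}" for i in range(2000)]
    print(sample_containers(lot, fraction=0.05, seed=42)[:5])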
We met with officials from the Office of the Secretary of Defense, the Army, the Navy, the Marine Corps, and the National Guard regarding the oversight and physical security of category I missiles and rockets and the physical security of category II weapons. We discussed the actions taken to correct problems cited in our 1994 report. We also met with officials from the intelligence and law enforcement agencies to discuss the vulnerability of category I missiles and rockets to theft by terrorists and other extremists and to obtain information on category I and category II weapons that are missing, lost, or stolen. We excluded the Air Force because of the limited number of missiles and rockets in its possession and because that service was not included in our prior report. Based on initial discussions on the scope of our work, the Army Inspector General added the National Guard to its follow-up review of handheld missiles. Because the Inspector General went to the same sites that we planned to visit, we did not visit any National Guard sites.

To determine whether changes made to the oversight of category I missiles have improved the services' visibility over these missiles, we physically counted about 15,000 Stinger, Dragon, and Javelin missiles by serial number at selected Army, Navy, and Marine Corps storage sites and two contractor facilities. We selected sites that had a comparatively high incidence of problems found during our first review. We opened 108 missile containers to ensure that a missile was in each container. To inventory the missiles, we loaded the item managers' automated database onto a notebook computer. On site, as we physically inventoried the missiles, we entered the serial number of each missile at that location into the computer, and this information was automatically compared against the item managers' database. Missiles that were not in the database or not at the storage location were reconciled with site and item manager information (a simplified sketch of this matching logic follows at the end of this section). We also counted 6,637 AT4 and LAW rockets at randomly selected Army, Navy, and Marine Corps storage sites. At these locations, we opened 89 containers (which held different quantities of rockets depending on the type) and physically verified the presence of 403 AT4s and 261 LAWs. We used the same procedures as for the missiles to inventory the rockets at the Marine Corps storage site. At the Navy and Army rocket storage sites, an automated database of serial numbers was not available from the item managers. At these two locations, we matched the inventory count against the item manager's or major command's records. We tested the reliability of the systems' data by physically counting the missiles and rockets and matching the count to the item managers' records; however, we did not test whether the information was provided to the item managers within 24 to 48 hours. We conducted our review from September 1996 to July 1997 in accordance with generally accepted government auditing standards.
Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested congressional committees. Copies will also be made available to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III: Sandra D. Epps and Tracy W. Banks.
Pursuant to a congressional request, GAO reviewed the actions taken by the Department of Defense (DOD) to correct weaknesses cited in GAO's September 1994 report on the military services' most sensitive category I missiles and to determine if problems still remained. GAO also reviewed DOD's oversight of category I rockets and the vulnerability of category I missiles and rockets and category II grenades, mines, and explosives to theft from U.S. military arsenals by terrorists or extremists. GAO noted that: (1) DOD has taken actions to improve the oversight of category I handheld missiles; (2) it conducted a worldwide inventory of handheld missiles; established a new baseline inventory count as of December 31, 1994; and implemented procedures to track changes to the baseline; (3) DOD also established procedures to check containers to ensure that each had a missile and to verify serial numbers; (4) DOD reemphasized physical security procedures to be followed at its facilities; (5) despite DOD's progress toward better oversight of handheld missiles, some weaknesses remain; (6) adjustments continue to be made to the baseline as additional missiles are located and errors are discovered; (7) discrepancies still exist between records of the number of missiles and GAO's physical count; (8) the missiles may be vulnerable to insider theft because DOD is not always selecting a representative sample of containers to be opened during maintenance checks; (9) some facilities are not fully complying with DOD physical security requirements; (10) although GAO was able to match the physical count of AT4 and light antitank weapon (LAW) rockets at each site visited with the item manager's records, GAO also found oversight weaknesses with the category I rockets; (11) the Marine Corps reported three AT4 rockets missing from shipments returning from the Persian Gulf after Operation Desert Storm; (12) the Naval Criminal Investigative Service reached no conclusions on whether the rockets were missing, lost, or stolen, and the investigations were closed; (13) the services have different procedures and requirements for maintaining oversight of the rockets; (14) DOD's accounting and related systems, including its logistics systems, are not integrated; (15) in accordance with the Chief Financial Officer's (CFO) Act of 1990, each agency is to establish an integrated financial management system; (16) establishing an integrated, general ledger controller system, which ties together DOD's accounting systems with its logistics and other key management systems, is critical if DOD is to effectively ensure oversight and control over its sensitive material; (17) GAO did not find any documentation that terrorists or other extremists had stolen any category I handheld missiles or rockets or category II munitions or explosives from DOD arsenals; (18) some weapons continue to be vulnerable to insider theft, as quantities of various category II items have been stolen by uniformed personnel or DOD civilian employees; and (19) DOD and intelligence sources did not have any indication that the stolen items were intended for terrorists.
AGOA, signed into law on May 18, 2000, was designed to promote free markets, stimulate economic growth in SSA, and facilitate SSA's integration into the global economy. According to the Office of the U.S. Trade Representative (USTR), AGOA provides duty-free access to U.S. markets for more than 6,000 dutiable items in the U.S. import tariff schedules. All 48 countries in SSA are potentially eligible for AGOA, but some have not met the eligibility criteria, and the program currently has only 40 beneficiaries. See figure 1. Most U.S. imports of textiles and apparel from SSA countries come from no more than 10 countries (Ethiopia, Kenya, Lesotho, Madagascar, Mauritius, Nigeria, South Africa, Swaziland, Tanzania, and Zambia). Together, these countries account for 97 percent of U.S. textile and apparel imports from SSA.

A key feature of AGOA is its provisions for duty-free preferences for specific textile and apparel goods subject to rules of origin limitations. Eligibility for textile and apparel benefits is based on conditions more selective than the general AGOA conditions and is available only to select AGOA countries. AGOA provides duty-free and quota-free treatment for eligible apparel articles made in qualifying SSA countries through 2015. Qualifying articles include:
Apparel made of U.S. yarns and fabrics;
Apparel made of SSA (regional) yarns and fabrics until 2015, subject to a cap;
Apparel made in designated SSA lesser-developed countries (LDC) of third-country yarns and fabrics originating anywhere in the world, until 2012, subject to a cap—commonly referred to as the "third-country fabric provision";
Apparel made of yarns and fabrics not produced in commercial quantities in the United States; and
Textile or textile articles originating entirely in one or more lesser-developed beneficiary SSA countries; certain cashmere and merino wool sweaters; and eligible hand-loomed, handmade, or folklore articles, and ethnic printed fabrics.

Industrialization in many developed countries was initiated in the textiles and apparel sectors, and some developing countries have relied on these sectors to significantly increase and diversify exports, with positive effects on incomes, employment, and poverty levels. Proponents of AGOA anticipated that by providing generous preferences for imports of textiles and apparel from AGOA-eligible countries, AGOA beneficiaries would be able to leverage these advantages to replicate this industrialization process. After AGOA was implemented, there was an initial surge of U.S. textile and apparel imports from beneficiary countries. U.S. imports of these products from SSA increased from $776 million in 2000 to about $1.8 billion in 2004. However, after 2004, when quotas under the Multi-Fiber Arrangement (MFA) were removed, U.S. imports of these products from SSA declined by about one-third, to $1.2 billion in 2008. See figure 2.

Although AGOA provides some of the most generous preferences under any U.S. trade program, as figure 3 shows, in 2008, SSA countries accounted for 1.3 percent of total U.S. textile and apparel imports. In contrast, China accounted for 35 percent of U.S. imports of textiles and apparel, while Bangladesh and Cambodia accounted for 3.8 and 2.6 percent, respectively. In 2008, U.S. textile and apparel imports from China were 28 times the value of those from SSA countries. In that same year, U.S. textile and apparel imports from Bangladesh were 3 times those from all SSA countries combined. Moreover, U.S.
imports of textile and apparel products from SSA are predominantly apparel. As illustrated in table 1, apparel constitutes 98 percent of this type of U.S. import from AGOA beneficiaries, while yarn, fabrics, and made-ups represent less than 2 percent of all U.S. textile and apparel imports from SSA. By contrast, these input products make up a larger part—23 percent—of U.S. textile and apparel imports from all countries. The modest share of U.S. imports of textile and apparel inputs, including yarn and fabrics, from SSA countries reflects not only limited production of these inputs in the region, but also the absence of an integrated apparel and textile sector with the potential to serve as an engine of economic development.

Several studies and experts have pointed out that current trends in U.S. textile and apparel markets are less conducive to African sourcing because low-cost Asian producers (China, India, and Bangladesh) with relatively modern production facilities have developed a competitive advantage, challenging SSA textile and apparel producers in the United States and elsewhere. Also, the U.S. market has experienced significant consolidation in the retail sector, resulting in lean retailing methods—the combination of low inventories and frequent restocking. Lean retailing requires retailers to closely track their sales using electronic data to facilitate fast communication with suppliers. From the standpoint of suppliers, the method demands great flexibility, as they must be able to adjust output and ship and deliver products quickly. Suppliers that can compete only on selling cost and not on timeliness are at a disadvantage. As a result, lean retailing does not favor African suppliers, whose less advanced production technology limits their flexibility to meet changing demands.

Duty-free access for textile and apparel imports from SSA countries under AGOA reduces the competitive edge of low-cost Asian producers. However, duty-free access alone may not overcome the advantages Asian producers enjoy due to long-standing, established trade channels. Africa's lack of resources to significantly improve its trade infrastructure—power, water, production facilities, etc.—adds to the disadvantages of sourcing from SSA countries. Furthermore, underdeveloped production facilities, including aged existing plants and equipment, increase the cost of production while reducing quality and variety. SSA's challenging business climate, primarily corruption and political instability, adds to the difficulty of attracting new and increased investment. Uncertainty about AGOA's duration and preference erosion (a weakening in the effectiveness of preferences due to falling prices in the world market caused by general trade liberalization) also limit the attractiveness of beneficiary countries for foreign and domestic investors.

The ITC study on the competitiveness of apparel and textile inputs identified products that have the potential to be produced competitively in SSA countries, such as cotton yarn, cotton knit fabric, cotton denim fabric, and woven cotton shirting. Cotton is widely cultivated in the region and is the primary fiber currently used in the production of yarn and fabric in SSA countries. These products can either be directly exported or used in downstream production of apparel for export.
Other items cited for potential competitive production in the region were niche products that supply narrow markets, such as organic cotton products, woven wool fabric, high-tech and industrial fabrics, and local print fabrics. As the global demand for organic and environmentally friendly goods increases, organic cotton products, for example, might be competitive in the global marketplace. Woven wool fabrics and high-tech and industrial fabrics, which South Africa currently supplies to the United States and Europe, also have the potential to be more competitive in these markets. Africa has a long tradition in mostly hand-loomed local print fabrics. Although these fabrics are mainly produced for local markets, they may have potential export markets as home furnishings. Another promising dimension is production for Africa's own local and regional markets. Many African countries produce fabrics that reportedly are not of sufficient quality for U.S. and European markets. However, local and regional marketing of such items may be profitable and may encourage backward and forward supply chain linkages in the long run. As a whole, the competitive production of textile and apparel inputs in SSA countries varies, as each beneficiary has its own factors that contribute to or inhibit production. The ITC report notes that one of the biggest challenges affecting production of textile and apparel inputs in SSA countries is the lack of regional demand for these products.

Based on our review of the ITC study and other related research, and in consultation with trade and industry experts, we identified four issue areas where possible changes to AGOA or other U.S. trade preference programs could be made to improve the competitiveness of the textile and apparel inputs sector in SSA beneficiary countries. The four issue areas include (1) extending the duration of AGOA provisions and making AGOA permanent, (2) expanding AGOA LDC benefits to all beneficiaries and duty-free eligibility for other textile products, (3) creating non-punitive and voluntary incentives, and (4) preserving existing benefits under AGOA and modifying other preference programs and trade agreements. The panel of experts GAO convened discussed and ranked nine specific options for congressional consideration across these areas. Panelists ranked each option on a 7-point priority scale that ran from "extremely low priority" to "extremely high priority." Among the specific options, the panel ranked extending the duration of the third-country fabric provision for LDCs beyond 2012 and extending the duration of AGOA beyond 2015 as an extremely high and a very high priority, respectively, for congressional consideration. Experts explained that these steps are essential to attract investment in the textile and apparel inputs sector because companies need certainty that AGOA benefits will persist, as investments in the industry are usually long term. A more detailed discussion of the issue areas and accompanying options follows.

The options considered under this issue area were to:
Extend the duration of the third-country fabric provision for LDCs beyond 2012 to provide potential investors with greater long-term certainty about the program's benefits.
Extend the duration of AGOA beyond 2015 to provide potential investors with greater long-term certainty about the program's benefits.
Make AGOA benefits permanent to provide potential investors with greater long-term certainty about the program's benefits.
The options to extend the duration of AGOA and its third-country fabric provision for LDCs stem primarily from the desire to enhance predictability for investors, who are risk-averse and reluctant to make long-term commitments in SSA with AGOA and its third-country fabric provision set to expire in 2015 and 2012, respectively. According to the ITC report, textile and apparel firms in SSA have difficulty securing much-needed capital to cover operating expenses and finance costly infrastructure improvements. Without adequate investment, SSA countries are unable to capitalize on AGOA benefits. Exacerbating the situation, much foreign direct investment fled SSA after the 2004 removal of global textile and apparel import quotas. Panelists expressed the opinion that extending AGOA would encourage investment in Africa. According to previous GAO analysis, a surge in trade is typical upon implementation and renewal of trade preference programs. One panelist highlighted the idea that extending AGOA would enhance predictability for investment in the "missing middle" of the supply chain, enabling raw materials produced in Africa, such as cotton, to be used in local fabric and other inputs production. However, some panelists noted that there are currently other trade policy measures being developed with non-SSA regions and countries that compete with SSA. They argued that priority must be given to extending AGOA relative to other trade programs because of SSA's competitive disadvantage and to prevent preference erosion. The options to extend AGOA and its special provisions thus garnered considerable support from the panel of experts and emerged as very high and extremely high priorities, respectively.

The option to make AGOA permanent arose in response to concern from investors and trade experts that a limited extension is inadequate to ensure long-term sustainability. Although this option would make AGOA program benefits permanent, each country would have to maintain its eligibility. There are currently many free trade agreements that offer duty-free benefits on a continuing basis, a significant change from when AGOA was first implemented. As a result, one panelist emphasized that AGOA needs more predictability for beneficiaries to compete with countries and regions with which the United States has free trade agreements. However, other expert testimony and previous GAO analysis raised concerns about the trade-off between investment predictability and the ability to leverage trade liberalization in developing countries, a cornerstone of broader global trade policy. One panelist observed that permanence could potentially provide a disincentive to implementing other internal changes in countries' economies that might allow them to become more competitive. Furthermore, AGOA permanence would not be sufficient to overcome the fact that the overall structure of the global textile and apparel trade has shifted, consolidating benefits enjoyed by Southeast Asian producers. As a result of these critiques, the option to make AGOA permanent received slightly less support than the options extending its duration and was assigned a generally high priority by panelists.

The options considered under this issue area were to:
Expand the third-country fabric provision to South Africa to improve regional integration in the textile and apparel sector.
Expand AGOA LDC benefits to all AGOA beneficiaries to improve regional integration in the textile and apparel sector.
Options to expand the scope of AGOA benefits to other SSA countries, especially to South Africa, are intended to encourage regional integration by fostering trade between African countries and to broaden the use of AGOA. One African country official noted that the regional benefits of AGOA cannot be measured solely by U.S. import numbers. Countries gain many benefits through increased regional sourcing and integration. Despite the fact that the textile sector is one of the most regionally integrated, South Africa is not included in the rules of origin provision that allows use of third-country fabric in qualifying duty-free exports. Industry sources identified in the ITC report suggest that broadening third-country fabric benefits to South Africa would "lead to greater economies of scale and expansion in the apparel industry" by supporting backward and forward integration and development in the textile sector. A South African embassy representative called for such an extension, explaining that despite South Africa's non-LDC classification, some economic sectors are characterized by low levels of development. According to the ITC report, "industry sources stress that duty-free eligibility in the U.S. and EU markets for South African textiles could make a substantial contribution to the industry's competitiveness and that the downward trend in the industry might be reversed if rules of origin were amended to allow greater access to third-country fabrics for South African apparel exporters." As another example of an attempt to boost industry competitiveness, the ITC report cites South Africa's creation of "industry clusters consisting of firms from the textile, clothing, retail, and other sectors that work cooperatively" to offer world-class manufacturing. ITC officials, however, stated that this approach would be difficult to pursue elsewhere in SSA due to inadequate infrastructure. Some panelists recommended modifying this option to simply extend AGOA LDC benefits to all SSA countries, but the primary focus remained on South Africa. Both the option to expand third-country fabric provisions to South Africa and the option to expand LDC benefits to all AGOA beneficiaries received support, emerging as generally high priorities.

The option considered under this issue area was to:
Create a voluntary "duty credit" program for U.S. importers of apparel from AGOA beneficiaries that is manufactured using fabric from the region.

The option to create non-punitive incentives to encourage use of regional inputs offers a way to stimulate voluntary regional investment. The non-punitive focus is a direct response to the negative results of the previously implemented "abundant supply" provision, which penalized the insufficient use of domestic fabric by disallowing duty-free eligibility. The ITC report cited one industry source as suggesting an "earned import allowance program," similar to those in place for Haiti and the Dominican Republic, as a possible approach to creating non-punitive incentives to encourage use of regional inputs. Such an incentive program would allow apparel producers to earn the right to use third-country fabric, provided they use specified volumes of regional fabric. Some panelists, however, pointed out that this program is intended to facilitate the exchange of U.S. content in a specific bilateral relationship, which is not entirely relevant to AGOA. The earned import allowance program was thus rejected in favor of the option of a simple "duty credit" approach.
A simplified duty credit program would create a non-punitive incentive for use of African regional fabric. For example, a U.S. firm that imports jeans made with African-origin denim would earn the right to import jeans from Bangladesh duty free. A ratio could then be set to account for cost differences, such as a specified square meter equivalent of African-origin jeans earning a credit for a specified square meter equivalent of Bangladeshi-origin jeans (a worked illustration follows below). Panelists who supported this option focused especially on its voluntary and non-punitive nature. However, one panelist expressed concern that the duty credit program is an indirect approach to stimulating African textiles that would be ineffective. Ultimately, the duty credit program received extensive support, emerging as a very high priority.
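The duty credit arithmetic described above can be made concrete with a minimal sketch. The ratio, cap, and quantities below are illustrative assumptions, not parameters from any existing or proposed program.

    def earned_duty_free_credit(african_origin_sme, credit_ratio=1.5, cap_sme=None):
        """Convert square meter equivalents (SMEs) of African-origin apparel
        imported into a duty-free credit for third-country apparel.

        credit_ratio adjusts for cost differences between sources; cap_sme
        optionally limits the total credit, mirroring AGOA-style caps.
        """
        credit = african_origin_sme * credit_ratio
        if cap_sme is not None:
            credit = min(credit, cap_sme)
        return credit

    # A 1.5:1 ratio: importing 10,000 SMEs of African-origin jeans earns
    # duty-free credit for up to 15,000 SMEs of, say, Bangladeshi-origin jeans.
    print(earned_duty_free_credit(10_000))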
The options considered under this issue area were to:
Refrain from extending trade preferences provided under AGOA to LDCs outside SSA to preserve benefits for textile and apparel production in AGOA beneficiary countries.
Modify rules of origin provisions under other U.S. trade preference programs or free trade agreements to provide duty-free access for products that use AGOA textile and apparel inputs.
Simplify AGOA rules of origin to allow duty-free access for certain partially assembled apparel products with components originating outside the region.

The option to refrain from extending AGOA-like preferences to LDCs outside SSA responds to the duty-free, quota-free market access for all LDCs under consideration by Doha Round negotiators. In effect, this provision would undercut the exclusive benefits currently enjoyed by AGOA beneficiaries. A private-sector representative on the panel said many companies believe that duty-free, quota-free access will go into effect if the Doha Round is successfully concluded, and the belief is already affecting their decisions on where to invest. On the other hand, other experts on the panel pointed out that duty-free, quota-free access, as currently under consideration in Doha Round negotiations, would cover only 97 percent of tariff lines. The provision to exclude 3 percent of tariff lines could be used to protect trade preferences for countries that are less competitive in key sectors, such as textile and apparel production. A panelist representing international textile and apparel producers in Africa said that some of the major manufacturers doing business in AGOA countries have stated that they would move production to Bangladesh or Cambodia if these countries are granted duty-free access for textiles and apparel. Given the potentially critical impact of extending AGOA benefits to LDCs outside Africa, panelists gave the option to refrain from extending preferences to non-SSA LDCs a very high level of priority for congressional consideration.

Options to modify rules of origin provisions under AGOA and other U.S. trade preference programs or free trade agreements are intended to benefit SSA textile and apparel input production by providing opportunities to combine production with U.S. trade partners in other regions. Panelists suggested an option for simplifying rules of origin provisions in AGOA to grant duty-free access for partially assembled garments that are jointly produced in other countries that are U.S. free trade partners or benefit from U.S. trade preferences. For example, a consultant doing business with companies producing in AGOA countries explained that one of his clients had expressed interest in assembling high-value shirts in one AGOA beneficiary with collars and cuffs produced in a non-AGOA country, but was unable to because of rules of origin restrictions. According to ITC officials and the Harmonized Tariff Schedule, such partial assembly of garments is currently allowed under AGOA, but confusion persists among SSA manufacturers and outside experts. Furthermore, one expert expressed concern that placing an increased emphasis on partially assembled garments might relegate SSA manufacturers to lower value-added production. The ITC study also refers to African government and industry sources' recommendations for changing the rules of origin under non-AGOA trade preference programs and free trade agreements to allow apparel made with SSA fabric to qualify for duty-free treatment in the United States. Similarly, one panelist representing companies that produce in Africa noted that the U.S.-Morocco Free Trade Agreement could theoretically be modified to provide duty-free benefits for textile and apparel items produced with inputs from AGOA countries. However, other panelists indicated such changes to existing free trade agreements would have minimal impact on the competitiveness of AGOA producers. They noted that, due to the small scale of textile and apparel production in the region and the distance between AGOA beneficiaries and countries that have a free trade agreement with the United States, such arrangements would probably not be viable. Panelists ranked these options to modify rules of origin provisions as generally high priorities for congressional consideration.

As part of our review, we consulted with experts on measures the U.S. government could take, beyond changes to AGOA, to help increase investment in and improve the competitiveness of textile and apparel inputs production in SSA. Based on our review of the ITC study and other related research, and in consultation with trade and industry experts, we identified five issue areas in which the U.S. government could take action to improve the competitiveness of the textile and apparel inputs sector in AGOA beneficiary countries. These issue areas include (1) infrastructure development, (2) trade capacity building (TCB) assistance, (3) U.S. government international finance entities, (4) SSA regional integration, and (5) unfair trade practices of AGOA competitors. Under these issue areas, the panel discussed and ranked 25 specific options for congressional consideration. Panelists ranked each option on a 7-point priority scale that ran from "extremely low priority" to "extremely high priority." The panel ranked funding regional trade hubs to provide TCB assistance to the industry and aligning TCB with AGOA as extremely high priorities for Congress to consider. Many of the experts we consulted stressed the need to improve the competitiveness of the textile and apparel inputs industry in the region. Furthermore, they consider it necessary to address problems that affect the competitiveness of the industry to maximize the benefits of AGOA. A more detailed description of the issue areas and accompanying options is presented below.

The options considered under this issue area were to:
Realign U.S.
trade policy and programs to support infrastructure and energy development in Africa to ensure trade preferences and assistance result in projects that will improve the competitiveness of industries in the region.
Increase collaboration with African governments and international donors to improve infrastructure and energy.
Reauthorize the Millennium Challenge Corporation (MCC) and adjust the legislation to allow more private-sector involvement, create regional compacts, and extend the duration of compacts.
Create incentives for private-sector investment and provision of services in infrastructure and energy by leveraging resources in a manner that creates better business opportunities.
Encourage programmatic coordination among U.S. government entities involved in development assistance and trade programs to develop infrastructure and energy projects that reduce the cost of doing business.
Incorporate metrics to measure reduction in the cost of doing business for infrastructure investment.
Support renewable energy technology transfer to SSA countries that might have a natural disposition for such production to address energy supply shortages.

Options to support infrastructure development are intended to lead to a reduction in the cost of doing business in SSA countries. There is general agreement among the experts we consulted, the ITC study, and other literature we reviewed that inadequate infrastructure is one of the main obstacles to doing business in Africa and one of the factors that most affects the competitiveness of production of textile and apparel inputs in SSA. Production of textile and apparel inputs is particularly affected by the lack of reliable power supplies, the lack of abundant clean water, and poor transportation infrastructure. The ITC study reports that many SSA countries have among the highest cost rates and the most unreliable supply of electrical power in the world. According to the study, disruptions in electricity supply reduce productivity and add to cost. For example, a disruption in power supplies can ruin an entire production run in yarn and fabric mills and increase cost due to the use of back-up generators. ITC also reported that the lack of an abundant supply of clean water in many SSA countries affects the production of textile and apparel inputs. Dyeing fabrics requires the use of clean water, which is contaminated in the process. Wastewater treatment capabilities are thus necessary to meet environmental compliance standards required in the international market. The ITC study and other reports indicate that poor transportation infrastructure is a major constraint to trade in SSA countries. Textile and apparel production is particularly affected in the region because poor transportation infrastructure inhibits the ability of producers to meet tight delivery schedules demanded by retailers. Delays in regional and international trade are caused by poor roads, railways, and ports.

Recognizing the challenges that poor infrastructure places on trade and the textile and apparel sector in SSA countries, the panel discussed how U.S. assistance for infrastructure is supplied to the region and provided options that could improve its implementation. A panelist said that there is limited coordination among MCC, which provides a significant level of assistance for infrastructure, other U.S. assistance, and U.S. trade policy. Currently, 10 SSA countries have compacts with MCC. Several panelists agreed that infrastructure development in Africa must be strategically planned to benefit exports and regional industries, and coordination between U.S.
trade and development agencies is necessary to achieve that goal. For example, one panelist said that it is not only a matter of building a port, but of making sure that the port functions well and is positioned to serve key industries. In the same context, a panelist representing textile and apparel producers in Africa indicated that for infrastructure development to have an impact in making the textile, apparel, and other industries more competitive, infrastructure projects must result in a reduction in the cost of doing business in SSA. Several panelists mentioned the need to establish metrics to measure the benefits of U.S.-sponsored infrastructure projects. Panelists also discussed the need to take a regional approach to infrastructure development in Africa; they indicated that the bilateral approach that MCC takes in developing compacts limits the impact infrastructure projects can have. For example, one panelist said that the best way Kenya can reduce the cost of electricity is to invest in Ethiopia because Ethiopia has the greatest hydroelectric power potential in East Africa. Thus, Kenyan investment in a hydroelectric project in neighboring Ethiopia would benefit Kenyan consumers of electricity down the line because they would have greater access to cheaper electricity. A panelist from the private sector said that there need to be incentives created for the private sector to invest in infrastructure and that companies must see infrastructure projects in Africa as a business opportunity. There is private-sector interest in developing energy facilities in Ethiopia; however, there is a lack of investment. Based on this discussion, panelists ranked the options on reauthorizing MCC and encouraging programmatic coordination for infrastructure development as extremely high priorities for congressional consideration.

The options considered under this issue area were to:
Reauthorize the Africa Global Competitiveness Initiative to provide funding for U.S. Agency for International Development (USAID) trade hubs to provide TCB assistance to SSA.
Provide resources to USAID trade hubs designated for TCB assistance to address the competitive disadvantages the textile and apparel inputs sector faces by implementing business solutions and increased marketing.
Align U.S. TCB and development assistance with AGOA to ensure that it addresses the competitive challenges of industries such as the textile and apparel inputs industry.
Increase and promote organic production and fair labor and trade practices to improve SSA countries' potential to attract international retailers that emphasize these practices.
Intensify U.S. assistance to the SSA cotton industry to improve production and further integrate cotton production with the textile and apparel industry.

Options on TCB emphasize the need for a stronger connection between trade preferences and development assistance to address competitive disadvantages in the textile and apparel inputs industry and improve business opportunities in SSA countries. TCB is considered by many experts we consulted to be a key component of improving the competitiveness of the textile and apparel inputs industry in SSA. However, there is a lack of funding directed at the textile and apparel inputs industry. While AGOA authorizing legislation refers to TCB, as we previously reported, funding for this type of assistance is not provided under the act.
USAID delivers TCB assistance in Africa through four regional trade hubs, which are funded by the Africa Global Competitiveness Initiative, scheduled to expire in 2010. While several panelists expressed support for reauthorizing this initiative to provide funding for the regional trade hubs, one government official noted that congressional reauthorization does not mean that funding will be provided. Funding would need to be provided separately through the appropriations process. A contractor who manages the West and Southern African trade hubs explained that there is no funding earmarked for assistance to the textile and apparel inputs industry, which makes it very difficult to implement targeted technical assistance projects. Rather, TCB assistance for the textile and apparel inputs sector comes out of limited discretionary assistance funding. The contractor estimated that less than $1 million was spent by the two trade hubs on providing assistance to the textile and apparel industry in 2008. Nevertheless, according to the ITC study, textile and apparel industry representatives said that TCB provided by USAID trade hubs has advanced regional and international market opportunities. TCB assistance provided to the textile and apparel industry includes projects such as business-to-business events to foster trade linkages between textile and apparel producers throughout Africa and a cross-section of the apparel industry doing business in the region. According to a USAID fact sheet, in 2007, such an event resulted in an estimated $8 million in new trade deals. Industry sources indicated that greater TCB assistance for the textile and apparel inputs sector is needed.

Almost all members of the panel agreed that sufficient funding should be provided for TCB projects that increase the competitiveness of the textile and apparel industry by improving the ability to do business in the region. One panelist representing the private sector said that the reason AGOA has had limited results in the textile and apparel inputs sector is that there has not been a "supply response"—the textile supply industry did not respond to the trade opportunity AGOA created because of the industry's limited capacity. Several panelists agreed that to maximize the benefits of AGOA, problems that affect the competitiveness of the industry must be addressed, such as low labor productivity, inability to meet industry quality standards and volume requirements, and transport efficiency problems. A panelist representing textile and apparel producers in Africa indicated that better coordination is needed between U.S. government trade policy and trade capacity assistance, allowing TCB to complement trade preferences and improve competitiveness. One panelist said that to achieve an integrated chain of production in the textile and apparel industry, TCB must be provided to other sectors in the supply chain. For example, to create an integrated chain of production in the textile and apparel industry, more assistance should be given to the African cotton industry. Also, for SSA countries to compete in the global market, assistance should be given to promote organic production and fair labor and trade practices, which may attract global retailers that emphasize these practices. The panel assigned the options regarding funding for regional trade hubs to support the textile and apparel industry and aligning TCB with AGOA as extremely high priorities for congressional consideration.
The options considered under this issue area were to:
Review and adjust the Overseas Private Investment Corporation's (OPIC) mandate to allow greater flexibility to support U.S. investment in textile and apparel inputs production in SSA countries.
Increase Export-Import (Ex-Im) Bank lending and guarantees to finance investment in the SSA textile and apparel sector.
Institute tax-related incentives for U.S. firms making a positive impact in AGOA countries to encourage companies to do business in these countries.
Increase support for institutions to provide access to finance for investment, supplier credit, and day-to-day operations.
Increase flexibility of OPIC, the Ex-Im Bank, and the U.S. Trade and Development Agency (TDA) to address local content and economic effects restrictions for AGOA countries.

Options to improve support of U.S. government international finance entities for textile and apparel production in SSA are aimed at attracting investment that could help make the industry more globally competitive. This would be particularly important for textile production, which is a capital-intensive industry. The ITC report notes several interrelated factors that affect industries' ability to competitively supply textile and apparel inputs: the cost and availability of capital (finance); the age of plants and equipment; and the cost and quality of the labor pool. Firms need access to working capital to finance day-to-day operations as well as longer-term capital investment to upgrade plants and equipment. However, firms in many SSA countries face high domestic bank lending rates, which can harm competitiveness. Therefore, they often use internal funds to finance operations. Foreign direct investment also has been an important source of capital for some SSA textile and apparel producers, particularly larger exporters. Much of the foreign investment in textiles and apparel comes from Asian countries, with a few other European and African countries also holding ownership shares. However, a substantial amount of textile and apparel-related foreign direct investment has left some SSA countries since quotas under the MFA were lifted in 2004, and overall foreign direct investment to SSA countries has declined.

The expert panel, our own research, and industry and government submissions to the ITC have identified some options for consideration by Congress. One submission to the ITC noted that access to U.S. government-sponsored or multilateral support will need to be enhanced if textile production is to become more globally competitive. It noted that U.S. government-sponsored financing entities, such as OPIC and the Ex-Im Bank, have typically been reluctant to participate in African textile production because doing so could be politically controversial. For example, OPIC officials stated that OPIC's ability to provide guarantees for U.S. investors is limited by its screening criteria, which rule out projects that could have a negative effect on U.S. employment. The Ex-Im Bank's statutory focus is on promoting U.S. exports by supporting U.S. exporters or those who are importing/purchasing U.S.-made products, such as textile machinery. One of our panelists noted that the Ex-Im Bank and TDA have restrictions on the amount of foreign content that can be included in a project and still qualify for guarantees or other support. In addition, the complications involved in complying with such requirements can be a disincentive for U.S. firms that want to do business in SSA countries.
Some panelists urged that the United States provide more flexibility for the financing agencies, but one panelist raised a concern about whether such flexibilities would be available to all countries or whether they would be restricted to African countries. A written submission to the ITC by a representative of an African textile- and apparel-producing country noted that it would be beneficial if the United States could make available a line of credit (through OPIC, the Ex-Im Bank, or other entities) to assist private firms with viable expansion or modernization projects in the textile and apparel sector. This representative also suggested that U.S. policymakers consider making available an equity fund that could co-invest with local and foreign investors in projects in the textile and apparel sectors. Although one panelist raised some questions about how these latter options would be implemented, overall, the panel expressed a high degree of agreement in favor of increased flexibilities for U.S. financing agencies and exploring other means to provide financing or investment funds for the SSA textile and apparel inputs industry.

The options considered under this issue area were to:
Support regional economic communities to help enhance the vertical integration and competitiveness of textile and apparel industries.
Place a higher priority on support of regional economic programs in U.S. development programs.
Place a higher priority on regional efforts under U.S. development programs, such as the African Global Competitiveness Initiative and MCC, to encourage economic integration.
Create incentives for countries to participate in regional economic communities.
Support a general capital increase for the African Development Bank.

Options to support regional integration stem from a recognition that each SSA country is unlikely by itself to achieve fully vertically integrated production, with linkages throughout the supply chain. According to panelists, SSA countries must be able to work together to develop an efficient, competitive textile and apparel industry. While there are a number of structures and organizations (such as the Southern African Customs Union, the Common Market of Eastern and Southern Africa, and the African Union) that foster regional integration in Africa, SSA countries still face numerous obstacles that hamper competitiveness, such as tariffs on cross-border trade, regulations, and access to transportation and energy networks.

The options considered under this issue area were to:
Increase U.S. resources to expand monitoring and enforcement regarding export subsidies and other unfair trade practices related to textile and apparel imports.
Monitor U.S. imports of Chinese textiles and apparel to expedite self-initiation of dumping and countervailing duty cases.
Apply pressure to deter Chinese intellectual property violations related to African ethnic textile designs.

Options were suggested for the United States to employ trade remedies to address unfair practices of competitors that may indirectly affect the competitiveness of SSA textile and apparel production and to prompt relevant discussions at the WTO. Recent trade data, our discussions with experts, and the ITC report indicate that SSA countries face challenges in retaining their small share of global trade compared with other major textile and apparel product exporters, such as China, India, Bangladesh, Cambodia, and Vietnam. As a major importer of African apparel products, the U.S.
market is crucial to the continued development and competitiveness of African textile and apparel industries. However, if other competitors access the U.S. market while employing trade practices that violate existing agreements or are otherwise unfair, they not only may have an adverse impact on U.S. domestic industry but may indirectly harm the competitiveness of African producers as well.

This report is intended to provide Congress a range of options put forward by experts on ways to improve the competitiveness of SSA textile and apparel production so that AGOA beneficiary countries can better take advantage of the opportunities provided under the program. These options will likely be considered within broader congressional deliberations on improving U.S. trade preference programs. Many of these options may be helpful, but as GAO has previously reported, trade-offs are inherent in trade preference programs. For example, although many experts agreed on the priority of extending the duration of AGOA beyond 2015 to provide potential investors greater long-term certainty about the program's benefits, others raised concerns that this could undermine the ability of African countries to grow beyond the need for a trade preference program and fully integrate into the global trading system. Similarly, although limiting certain trade preference benefits to LDCs makes sense, experts argued that enhancing the competitiveness of SSA textile and apparel inputs production necessitates regional integration; thus, extending benefits to more advanced economies such as South Africa may be appropriate. Furthermore, the link between trade policy and economic development complicates potential policy responses. AGOA has provided benefits for textile and apparel, but many SSA countries face infrastructure and development challenges that must be addressed before they can fully take advantage of these benefits. Export-oriented manufacturing cannot survive without adequate physical infrastructure, while capacity-building assistance may be ineffective without global demand for production. Finally, government and other experts have stressed that African governments need to take action on governmental reforms to capitalize on the economic opportunities presented by trade preference programs.

We provided courtesy copies of the draft report to USTR and ITC, but did not request official comments. USTR and ITC staff provided informal technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Secretary of State, the U.S. Trade Representative, the Administrator of USAID, the Chairman of the ITC, the Chief Executive Officer of the MCC, and the Acting President of the Overseas Private Investment Corporation. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

In this report, we present information on options put forward by experts for Congress to consider for (1) possible changes to the African Growth and Opportunity Act (AGOA) or other U.S. trade preference programs and (2) other measures the U.S.
government could take to help increase investment in and improve competitiveness of textile and apparel inputs production in sub-Saharan Africa (SSA). To address these objectives, we reviewed the U.S. International Trade Commission (ITC) study on the competitiveness of textile and apparel inputs in AGOA beneficiaries conducted under the same mandate as GAO’s review, as well as other ITC reports on SSA and related hearing materials. We examined U.S. trade statistics on textile and apparel imports to the United States in recent years, which we determined to be sufficiently reliable for the purposes of this report. We also conducted a literature review on issues related to the textile and apparel industry and investment in SSA. We met with U.S. agency officials familiar with U.S. trade preferences and development programs, including the Office of the U.S. Trade Representative, the Department of Commerce’s Office of Textiles and Apparel, the U.S. Agency for International Development, the Millennium Challenge Corporation, and the Overseas Private Investment Corporation. We met with trade officials from 12 African embassies in Washington, D.C. We also interviewed knowledgeable individuals from academia and policy institutes, consultants involved in work related to U.S.-Africa trade, and private-sector representatives of U.S. and African textile industries and U.S. retail and apparel import industries. Through these sources, we identified numerous suggestions for how the U.S. government could support competitiveness in the African textile and apparel industry. Additionally, we convened a panel of experts and key informants on June 2, 2009. To select the experts and key informants for our panel, we identified broad categories of the types of individuals and representatives that would be needed to ensure we covered as full a range as possible of opinions and interests. We drew up a list of potential panelists for each of our categories based on our review of the literature and recommendations made by knowledgeable parties. Many of the panelists we invited had a special interest or expertise in Africa. Sixteen panelists were able to attend our panel on June 2, 2009, including representatives of relevant U.S. government agencies; private-sector firms and associations in the textile and apparel industry; and academia and think tanks. In addition, a representative from the Washington-based African embassies’ working group on AGOA also attended. However, some of the panelists were not able to participate in all of the day’s sessions. We invited representatives from the African private sector, the Common Market for Eastern and Southern Africa, and the World Bank, but those individuals were not able to attend. To the extent possible, we conducted interviews with or obtained written input from experts who were not able to attend our panel. There are differing opinions about whether promoting textile and apparel production in SSA countries should be a priority under AGOA. Some Africa experts suggest that there should be a greater focus on agricultural production, an area where SSA countries appear to have a greater competitive advantage. Similarly, other development experts question whether the benefits provided under AGOA should be exclusive to SSA countries, and support the idea of extending trade preferences equally among all lesser-developed countries (LDC). 
Our report does not take a position on these issues, but focuses on textile and apparel inputs production in AGOA beneficiary countries according to the requirements in the mandate. The panel discussed three topics: (1) the ITC’s analysis of potentially competitive products and challenges for the textile and apparel industry in SSA, (2) possible changes to AGOA or other U.S. trade preference programs, and (3) other measures to support African textile and apparel inputs production. To facilitate the discussions concerning the last two topics, we prepared lists of possible changes and other measures based on information and recommendations we obtained from knowledgeable parties and relevant literature. We presented these lists to the panel to introduce each topic and stimulate discussion. To obtain an overall sense of the panelists’ priorities for improvement, we conducted ranking exercises at the end of the discussions on possible changes to AGOA and other measures to support the African textile and apparel inputs industry. For these exercises, we relied on the lists of options we developed prior to the panel. During the discussions, we invited the experts to comment on the lists, and we made modifications or additions based on their input. After the panelists had discussed the options and agreed on the wording, we asked them to rank each on a 7-point priority scale that ran from “extremely low priority” to “extremely high priority.” (See app. IV for more details.) The options and associated priority rankings presented in this report are based on the opinions of the experts and key informants involved in the panel and should not be interpreted as GAO recommendations. According to generally accepted government auditing standards, GAO makes recommendations to correct identified problems and improve programs and operations when the potential for improvements is substantiated by the reported findings and conclusions. These standards generally require GAO to develop criteria, condition, cause, and effect to describe a problem. Due to GAO’s mandated reporting deadline for this project, which required us to submit a report within 90 days of the issuance of the ITC report on the same topic, we were not able to employ a methodology that allowed us to develop findings and conclusions according to these standards. The options considered under this issue area were to: Extend duration of third-country fabric provision for LDCs beyond 2012 to provide potential investors greater long-term certainty about the program’s benefits. Extend duration of AGOA beyond 2015 to provide potential investors greater long-term certainty about the program’s benefits. Make AGOA benefits permanent to provide potential investors greater long-term certainty about the program’s benefits. The options considered under this issue area were to: Expand third-country fabric provision to South Africa to improve regional integration in the textile and apparel sector. Expand AGOA LDC benefits to all AGOA beneficiaries to improve regional integration in the textile and apparel sector. The option considered under this issue area was to: Create a voluntary “duty credit” program for U.S. importers of apparel from AGOA beneficiaries that is manufactured using fabric from the region. The options considered under this issue area were to: Refrain from extending trade preferences provided under AGOA to LDCs outside SSA to preserve benefits for textile and apparel production in AGOA beneficiary countries. 
Modify rules of origin provisions under other U.S. trade preference programs or free trade agreements to provide duty-free access for products that use AGOA textile and apparel inputs. Simplify AGOA rules of origin to allow duty-free access for certain partially assembled apparel products with components originating outside the region. The options considered under this issue area were to: Realign U.S. trade policy and programs to support infrastructure and energy development in Africa to ensure that trade preferences and assistance result in projects that will improve competitiveness of industries in the region. Increase collaboration with African governments and international donors to improve infrastructure and energy. Reauthorize the Millennium Challenge Corporation and adjust the legislation to allow more private-sector involvement, creating regional compacts and extending duration of compacts. Create incentives for private-sector investment and provision of services in infrastructure and energy by leveraging resources in a manner that creates better business opportunities. Encourage programmatic coordination among U.S. government entities involved in development assistance and trade programs to develop infrastructure and energy projects that reduce the cost of doing business. Incorporate metrics to measure reduction in the cost of doing business for infrastructure investment. Support renewable energy technology transfer to SSA countries that might have a natural disposition for such production to address energy supply shortages. The options considered under this issue area were to: Reauthorize the Africa Global Competitiveness Initiative to provide funding for U.S. Agency for International Development trade hubs to provide trade capacity building (TCB) assistance to SSA. Provide resources to USAID trade hubs designated for TCB assistance to address the competitive disadvantages the textile and apparel inputs sector faces by implementing business solutions and increased marketing. Align U.S. TCB and development assistance with AGOA to ensure that it addresses competitive challenges and disadvantages of export industries, such as the textile and apparel inputs industry. Increase and promote organic production and fair labor and trade practices to improve SSA countries’ potential to attract international retailers that emphasize these practices. Intensify U.S. assistance to the SSA cotton industry to improve production and further integrate cotton production with the textile and apparel industry. The options considered under this issue area were to: Review and adjust the Overseas Private Investment Corporation’s mandate to allow greater flexibility to support U.S. investment in textile and apparel inputs production in SSA countries. Increase Export-Import Bank lending and guarantees to facilitate investment in the SSA textile and apparel sector. Institute tax-related incentives for U.S. firms making a positive impact in AGOA countries to encourage companies to do business in these countries. Increase support for institutions to provide access to finance for investment, supplier credit, and day-to-day operations. Increase flexibility of the Overseas Private Investment Corporation, Export-Import Bank, and U.S. Trade and Development Agency to address local content and economic effects restrictions for AGOA countries. 
The options considered under this issue area were to: Support regional economic communities to help enhance the vertical integration and competitiveness of textile and apparel industries. Place a higher priority on support of regional economic programs in U.S. development programs. Place a higher priority on regional efforts under U.S. development programs, such as the African Global Competitiveness Initiative and Millennium Challenge Corporation to encourage economic integration. Create incentives for countries to participate in regional economic communities. Support a general capital increase for the African Development Bank. The options considered under this issue area were to: Increase U.S. resources to expand monitoring and enforcement actions regarding export subsidies and other unfair trade practices related to textile and apparel imports. Monitor U.S. imports of Chinese textile and apparel to expedite self-initiation of dumping and countervailing duty cases. Apply pressure to deter Chinese intellectual property violations related to African ethnic textile designs. To obtain an overall sense of the panelists’ priorities, we conducted two ranking exercises at the end of the discussions on (1) possible changes to AGOA and (2) other measures to support the African textile and apparel inputs industry. For these exercises, we relied on the lists of options we developed prior to the panel. During the discussions, we invited the experts to comment on the lists, and we made modifications or additions based on their input. After the panelists had discussed the options and agreed on the wording, we asked them to rank each on a 7-point priority scale that designated “7” as an “Extremely High Priority,” “6” as a “Very High Priority,” “5” as a “Generally High Priority,” “4” as a “Moderate Priority,” “3” as a “Generally Low Priority,” “2” as a “Very Low Priority,” and “1” as an “Extremely Low Priority.” We used electronic hand-held technology to facilitate this exercise, provide instant feedback, and also ensure anonymity for each panelist. This technology provided us with the average and distribution of votes for each option. We conducted two separate ranking exercises, the first for the AGOA-related measures, and the second for the other measures to improve the competitiveness of the African textile and apparel inputs industry. Of the 16 experts and key informants that participated in the panel, 14 were present for the morning session and took part in that ranking exercise on changes to AGOA. At lunch, two of the original panelists left and two others joined the panel. These changes were due to scheduling conflicts and had been discussed beforehand. Therefore, the composition of the 14 panelists that took part in the afternoon ranking exercise was slightly different from the 14 that took part in the morning exercise. In addition, after the first three categories of “other measures” had been discussed and ranked, five panelists had to leave the panel; the remaining nine panelists took part in the discussion and ranking exercise for the final two of the “other measure” categories. Moreover, in the afternoon sessions, not all of the panelists chose to rank every recommendation. For these reasons, the ranking exercises are not directly comparable; therefore, we present the results of the afternoon sessions in the five categories. 
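As a minimal sketch of the tally behind these rankings, the following Python snippet computes the mean, range, and vote distribution for each option on the 7-point scale described above, as the hand-held voting technology did. The option names and individual votes here are hypothetical placeholders for illustration only; they are not the panel's actual results.

```python
from statistics import mean

# Hypothetical votes on the 7-point scale (7 = extremely high priority,
# 1 = extremely low priority). Placeholder values, not the panel's data.
votes = {
    "Extend third-country fabric provision beyond 2012": [7, 6, 7, 5, 6, 7, 6, 7, 4, 6, 7, 6, 5, 7],
    "Extend AGOA beyond 2015": [6, 7, 5, 6, 7, 6, 6, 5, 7, 6, 4, 6, 7, 6],
}

for option, scores in votes.items():
    # Count how many panelists assigned each priority level (1 through 7).
    distribution = {level: scores.count(level) for level in range(1, 8)}
    print(option)
    print(f"  n = {len(scores)}, mean = {mean(scores):.2f}, "
          f"range = {min(scores)}-{max(scores)}")
    print(f"  votes per priority level: {distribution}")
```

Reporting the range and vote count alongside the mean, as tables 2 and 3 do, guards against a high average concealing a split panel.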
An expert panel is a data gathering method that respects the views of all the experts, and experts with particular backgrounds or experiences can differ greatly; thus, we note in the text when there were differences of opinion on the options. In tables 2 and 3, we present the options ranked by highest mean score and provide the range of votes, along with the average scores, and the number of panelists that voted on each option to provide insights and transparency into the ranking exercises. However, the results of the ranking exercise should be understood in the context of the panelists’ discussions and not just in terms of the ranking exercise itself. In addition to the individual named above, the following persons made major contributions to this report: Juan Gobel, Assistant Director; Ann Baker; Gezahegne Bekele; Ken Bombara; Karen Deans; Martin de Alteriis; Francisco Enriquez; Ernie Jackson; and Michael Kniss.
According to U.S. government officials, sub-Saharan Africa's (SSA) textile and apparel industry has not achieved the growth anticipated under the African Growth and Opportunity Act (AGOA). Despite the tariff reductions under AGOA, after an initial surge, U.S. imports of these products from beneficiary countries have declined in recent years (see figure). In view of this outcome, the 2008 Andean Trade Preference Extension legislation required GAO to prepare a report identifying changes to U.S. trade preference programs "to provide incentives to increase investment and other measures necessary to improve the competitiveness of [SSA] beneficiary countries in the production of yarns, fabric, and other textile and apparel inputs." This report is intended to provide Congress a range of options put forward by experts for (1) possible changes to AGOA or other U.S. trade preference programs and (2) other measures the U.S. government could take to help increase investment in and improve competitiveness of SSA textile and apparel inputs production. Many of the options discussed by the panel of experts GAO convened address the need to consider the trade-offs inherent in trade preference programs. Furthermore, experts emphasized that the link between trade policy and economic development complicates potential policy responses. While AGOA has generous benefits for textile and apparel, many SSA countries face infrastructure and development challenges that must be addressed before they can fully take advantage of these benefits. Recognizing this interplay, GAO's panel of experts and key informants gave greatest priority to options they believed provide long-term investors with predictability of benefits and encourage regional commitments relative to other developing countries. Such options included: (1) Extending the duration of the third-country fabric provision for least developed AGOA countries beyond 2012, and (2) Extending the duration of overall AGOA benefits beyond 2015. The panel similarly gave greatest priority to the options for other development measures that focused on supporting investment through trade capacity building. Many experts considered trade capacity building to be a key component of improving the competitiveness of African textile and apparel inputs production, and in developing the physical and market infrastructure needed for a vibrant export sector. Such options included: (1) Funding regional trade hubs and focusing on market promotion and business linkages, and (2) Aligning U.S. trade capacity building and development assistance with AGOA objectives
Uranium undergoes a number of processing steps in the production of nuclear fuel. To ensure its efficiency and ability to be used safely in nuclear reactors, nuclear fuel must meet rigorous technical specifications. For example, if certain contaminants are present in the material, they must be at or below specified levels so as not to harm workers or the environment or contaminate equipment. Technetium, a radioactive metal that is produced as a by-product of fission in a nuclear reactor, is considered a contaminant by commercial specifications for nuclear fuel. Its presence in the nuclear fuel production process can contaminate equipment, lead to increased worker radiation doses, and raise environmental concerns. Therefore, specifications require that uranium that is to be enriched should contain no more technetium than one part per billion. USEC first discovered that some of the uranium DOE previously transferred to the corporation may have been contaminated with technetium in March 2000, when DOE requested that USEC sample uranium storage cylinders for technetium content. DOE believed that, during the 1970s, technetium-contaminated recycled uranium that it processed through certain production lines at the Paducah plant inadvertently left residual amounts of technetium in certain equipment. Subsequent processing of uranium using that equipment contaminated the material. USEC was able to determine that up to 9,550 metric tons of the 45,000 metric tons of uranium that DOE had transferred to the corporation prior to privatization had been processed through the contaminated production lines at Paducah and therefore was contaminated with technetium. USEC’s initial sampling indicated technetium contamination levels ranging from 11 to 148 parts per billion, all in excess of the commercial specification of one part per billion. In addition, DOE was able to determine that about 5,500 metric tons of uranium in its inventory had also been processed through the contaminated production lines at the Paducah plant and was also likely to be contaminated with technetium. USEC conducts uranium decontamination work using equipment at the Portsmouth plant. Figure 1 illustrates the decontamination process. Through the end of February 2006, USEC reported that about 960 metric tons, or 10 percent, of the 9,550 metric tons of technetium-contaminated uranium transferred to it by DOE prior to privatization remains to be decontaminated. DOE estimates USEC will finish decontaminating this uranium by the end of December 2006. In total, USEC has decontaminated about 6,500 metric tons of its contaminated uranium. Specifically: USEC decontaminated nearly 3,600 metric tons of its inventory between June 2002 and December 2003 under the terms of the June 2002 agreement between USEC and DOE. Under this agreement, DOE compensated USEC for its decontamination costs by taking title to some of USEC’s depleted uranium, reducing USEC’s costs for eventually disposing of the material. About 2,050 metric tons of USEC’s uranium were decontaminated between December 2003 and December 2004 under the terms of the April 2004 agreement between USEC and DOE. DOE compensated USEC for its decontamination costs using appropriated funds. USEC decontaminated approximately 842 metric tons of its uranium between December 2004 and February 2006 under the December 2004 agreement, which provided that USEC cover its decontamination costs using proceeds from the commercial sale of clean uranium transferred from DOE’s inventory to USEC for sale. 
The June 2002 agreement between DOE and USEC also provided for DOE to replace some of USEC’s contaminated uranium with clean uranium from DOE’s inventory. In October 2004, DOE exchanged 2,116 metric tons of USEC’s contaminated uranium with an equal amount of clean uranium from its inventory. In addition to USEC’s inventory, since October 2004 USEC has been decontaminating about 7,600 metric tons of contaminated uranium in DOE’s inventory: 2,116 metric tons exchanged with USEC in October 2004 and 5,517 metric tons of contaminated uranium that were already in DOE’s inventory. As of February 28, 2006, USEC had decontaminated 2,065 of the 2,116 metric tons it transferred to DOE in October 2004 and 248 of the 5,517 metric tons that was already in DOE’s inventory. DOE estimates USEC will finish decontaminating the 5,327 metric tons of contaminated uranium that remain in DOE’s inventory by the end of October 2008. Figures 2 and 3 illustrate the amount of technetium-contaminated uranium in USEC’s and DOE’s inventories. From June 2002 through the end of February 2006, USEC had invoiced DOE for decontamination costs totaling about $152 million. Of this amount, about $67 million was spent for direct costs, such as labor and decontamination equipment and supplies, and about $85 million was spent for indirect costs. These indirect costs included utilities and other plant overhead costs and administrative costs. Table 1 details USEC’s decontamination costs. DOE has compensated USEC for its decontamination services in three ways. First, DOE has paid USEC about $62 million in appropriated funds. Second, DOE officials told us that the department has taken title to about 30,000 metric tons of USEC’s depleted uranium, which DOE estimated in 2004 would cost the department about $27 million to convert to a more stable form. Third, DOE compensated USEC for its remaining decontamination services using the proceeds from the commercial sale of clean uranium transferred from DOE to USEC pursuant to the December 2004 agreement between USEC and DOE. In total, DOE has transferred about 1,100 metric tons of clean uranium to USEC for commercial sale under the December 2004 agreement. DOE transferred about 900 metric tons of clean uranium to USEC in December 2004, which USEC sold to four different buyers, resulting in total proceeds of $62 million. DOE officials told us that increases in market prices for uranium resulted in more money than DOE originally estimated. These additional proceeds allowed USEC to decontaminate about 280 metric tons more uranium than DOE originally believed the sale would fund. By February 2006, however, USEC had completely spent the proceeds generated from the sale of the 900 metric tons of clean uranium. Therefore, DOE transferred an additional 200 metric tons of clean uranium to generate additional funds for decontamination. USEC sold this uranium in February 2006, resulting in total proceeds of $22.4 million, which USEC expects will fund its decontamination services through June 2006. In addition, instead of transferring clean uranium to USEC and having USEC conduct additional uranium sales, DOE sold 200 metric tons of clean uranium in April 2006 to obtain money to compensate USEC for its decontamination services. These sales resulted in total proceeds of $23.4 million, which USEC expects will fund its decontamination services from July 2006 through November 2006. 
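The compensation streams described above can be summarized with simple arithmetic. The Python sketch below tallies the report's rounded figures; note that the proceeds from the February and April 2006 sales were expected to fund decontamination work into late 2006, and the depleted-uranium figure is DOE's estimated conversion cost rather than a cash payment, so the compensation total is not expected to equal the costs invoiced through February 2006.

```python
# Rounded figures from the report, in millions of dollars.
invoiced = {"direct costs": 67.0, "indirect costs": 85.0}  # ~$152M through Feb. 2006

compensation = {
    "appropriated funds": 62.0,
    "title to ~30,000 MT depleted uranium (DOE's 2004 conversion-cost estimate)": 27.0,
    "Dec. 2004 clean-uranium sale proceeds (900 MT)": 62.0,
    "Feb. 2006 clean-uranium sale proceeds (200 MT)": 22.4,
    "Apr. 2006 DOE uranium sale proceeds (200 MT)": 23.4,
}

print(f"Invoiced through Feb. 2006: ${sum(invoiced.values()):.1f}M")
for source, amount in compensation.items():
    print(f"  {source}: ${amount:.1f}M")
print(f"Total compensation arranged: ${sum(compensation.values()):.1f}M")
```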
According to DOE officials, the department itself will likely conduct additional uranium sales to fund USEC’s decontamination services, rather than transferring additional uranium to USEC. DOE takes several steps to oversee USEC’s uranium decontamination activities, including reviewing monthly reports submitted by USEC detailing decontamination progress and costs and tracking the proceeds USEC generates from selling clean uranium that DOE has transferred to the corporation under the December 2004 agreement. DOE has also contracted with DCAA to audit USEC’s decontamination costs. However, DOE and DCAA have been unable to complete some of their oversight steps because they have been unable to obtain some financial and other data from USEC in a timely manner. As a result, DOE has some concerns about whether USEC consistently conducts decontamination work in a cost-effective manner and is currently uncertain whether the compensation the department provided the corporation matches USEC’s actual decontamination costs. DOE takes several steps to oversee USEC’s uranium decontamination activities. For example, DOE reviews a number of monthly reports that USEC submits to the department. These monthly reports contain detailed information on USEC’s uranium decontamination activities. Specifically, these reports include the following: Information on the amount of uranium decontaminated each month, USEC’s estimate of the remaining contaminated uranium in USEC’s and DOE’s inventories, and data on the level of technetium contamination for uranium storage cylinders before and after processing. These data verify whether the uranium in each cylinder meets commercial specification after it has been through the decontamination process. Summary data on USEC’s monthly decontamination costs as well as USEC’s estimate of the project’s total cost when the decontamination is completed. USEC also submits a breakdown of its costs into specific categories, such as, among other things, labor, employee benefits, materials, site security, and electricity. Information on waste generated from the decontamination process. DOE officials told us that they perform detailed analyses of these reports to verify that USEC is consistently conducting decontamination work in a cost-effective and efficient manner. If these officials identify inconsistencies or trends in the data that generate concerns or questions, they follow up with USEC each month through written inquiries to resolve uncertainties and obtain adequate justification for costs such as overtime and training. DOE officials at the Portsmouth and Paducah plants also conduct on-site inspections of the uranium cylinders in order to verify that USEC’s and DOE’s actual uranium inventories match what appear in USEC’s monthly reports. DOE also tracks the proceeds from USEC’s sale of clean uranium transferred to the corporation under the December 2004 agreement. DOE obtains copies of all sales contracts between USEC and the buyers of this uranium. These contracts provide detailed information on the buyer, the quantity sold, its sale price, and the date of the sale. In addition, USEC provides DOE with a copy of the wire transfer between the buyer and USEC to verify the receipt of funds. DOE requires that USEC segregate the proceeds of the uranium sales into an account separate from USEC’s other funds. USEC maintains these funds in a separate brokerage account that invests in tax-exempt short-term securities. 
Each month, USEC submits a cost invoice to DOE for the decontamination work it performed during the preceding month. DOE then reviews and approves USEC’s invoice and USEC withdraws money from the brokerage account equivalent to its invoiced costs. DOE monitors the withdrawal rate to estimate when more uranium will need to be sold to obtain additional funding for the account. Finally, DOE has also contracted with DCAA to audit the annual costs submitted by USEC, which DOE uses to verify that USEC’s decontamination costs match what DOE paid the corporation. To receive compensation for its indirect costs under the agreement, USEC provides estimates of its costs to DOE annually. These estimates, called “provisional billing rates,” are the basis of DOE’s compensation to USEC for its costs for that year. USEC submits monthly invoices to DOE using the provisional billing rates. DOE then compensates the corporation for its invoiced costs. Following the end of each calendar year, USEC is to submit financial data to DCAA that details the corporation’s actual incurred indirect costs. DCAA uses these data in its audits to verify that USEC’s incurred costs are reasonable. Any differences between USEC’s provisional billing rates and USEC’s incurred decontamination costs would mean either that DOE owes USEC additional money or that USEC owes DOE for any compensation in excess of incurred costs. DOE officials told us that they have had difficulties receiving complete and timely responses to their inquiries on USEC’s monthly reports. Following their detailed analyses of USEC’s monthly reports to verify that USEC is conducting decontamination work in a cost-effective and efficient manner, DOE often submits written inquiries to USEC to resolve inconsistencies or other concerns. For example, DOE officials have submitted numerous inquiries to USEC questioning the number of overtime hours USEC has billed to the project, which these officials think are unusually high. DOE officials have also questioned the large amounts of worker training that USEC has billed to the project. In addition, DOE officials have also inquired about certain materials USEC has purchased. According to DOE officials, DOE submits about five concerns per month to USEC. However, in its comments on a draft version of this report, USEC told us that DOE submits about 15 inquiries per month. DOE officials told us that USEC sometimes takes up to 6 months before responding to DOE’s inquiries and then often responds only selectively to certain questions. In comments on a draft version of this report, USEC disagreed with DOE and stated that it has responded completely to DOE’s inquiries in an average of about 3 months. While USEC officials told us they attempt to provide timely responses to DOE’s inquiries, they also stated that the inquiries often request very specific data that are difficult to provide quickly. In addition, USEC officials told us that delays sometimes occurred when personnel from both DOE and DCAA were asking similar questions. USEC officials stated that they were sometimes confused about whether they should respond to DOE, DCAA, or both. Moreover, USEC indicated that DOE’s inquiries were often poorly communicated and not delivered to the appropriate personnel in a timely fashion. DOE officials indicated that they believed that the inquiries were adequately communicated and delivered to the appropriate USEC personnel in a timely fashion. 
Further, DOE officials stated that although some of the inquiries were more detailed, this would not justify the delays in USEC’s responses to the department. USEC officials also told us that despite their belief that DOE’s inquiries are often unnecessary and redundant, USEC is working to improve the timeliness and completeness of their responses. According to USEC officials, they met with DOE in March 2005 to try to reduce the size and redundancy of these inquiries. However, DOE officials stated that the reason for the apparent redundancy was USEC’s inability to respond to the original inquiries in a timely manner. DOE’s inquiries have resulted in some benefits to the government. For example, USEC officials told us that, in response to DOE’s inquiries, USEC has adjusted some monthly invoices to remove some charges USEC incorrectly billed to the project because of administrative errors. According to DOE officials, these errors were only discovered after DOE submitted written inquiries to USEC after it had analyzed USEC’s monthly reports. DCAA has also experienced delays in obtaining the financial data from USEC that are necessary to complete its annual audits of USEC’s decontamination costs. At the end of each fiscal year, USEC has 6 months to submit financial data to DCAA detailing the corporation’s indirect costs for that year. DCAA then completes an audit of these costs, which allows DOE to verify that USEC’s actual incurred costs for the year match what DOE paid the corporation. However, USEC has not submitted incurred cost data to DOE or DCAA for decontamination conducted during any time period from July 2002 to the present. DCAA has not completed any of its full annual audits of USEC’s incurred decontamination costs. DCAA has completed five limited-scope audits of USEC’s incurred costs for the individual months of December 2004 and January, March, May, and November 2005 to verify that USEC’s incurred costs are in accordance with applicable laws, regulations, and the provisions of the December 2004 agreement. According to DCAA officials, these limited audits of USEC’s monthly incurred costs have not found significant problems. In addition, DCAA has conducted other audits to examine, among other things, USEC’s internal controls and accounting systems. According to USEC, these other audits have not found significant deficiencies. According to USEC officials, the delays in providing incurred cost data to DCAA are caused by several factors, including limited internal accounting resources that are familiar with Federal Acquisition Regulations and government cost accounting standards, and protracted contract negotiations with DOE over how employee pension and post-retirement benefits should be treated in USEC’s accounting systems. DOE officials with whom we spoke disagreed that these reasons should cause such a significant delay in providing incurred cost data to DCAA. USEC has submitted a revised schedule to DOE that estimates when it will provide incurred cost data to DCAA. (See table 2.) In the absence of DCAA audits of USEC’s annual decontamination costs, DOE has taken steps to protect the government’s interests by limiting the amount of compensation paid to USEC. For example, USEC has stated that its actual decontamination costs in calendar year 2004 exceeded DOE’s compensation for that year. However, because DCAA was unable to complete its audit of USEC’s costs for that year, DOE refused to pay this difference. 
In addition, provisional billing rates were not revised in 2005, and USEC was compensated using 2004 provisional billing rates. USEC officials told us that the failure to revise the provisional billing rates has only increased the difference between USEC’s actual decontamination costs and the amount the corporation is being compensated. According to USEC officials, the difference between the corporation’s actual decontamination costs and the amount it has been compensated is about $3 million and will continue to grow until new billing rates are approved by DOE. DOE officials told us that they plan to approve new billing rates in June 2006. Furthermore, DOE officials said that the department will pay USEC any difference between the corporation’s actual decontamination costs and the amount already compensated once USEC submits its actual incurred costs and DCAA has been able to complete its audits. Almost 8 years after USEC’s privatization, USEC and DOE are still dealing with the cleanup of technetium-contaminated uranium. According to DOE officials, the department decided to compensate USEC for decontaminating uranium to resolve potential legal liabilities and to help achieve other policy goals, such as the continuation of a reliable domestic source of uranium enrichment today and in the future. In our view, however, DOE has left the Congress and the public largely uninformed about these policy goals, as well as about the amount of progress USEC has made decontaminating uranium and the costs incurred in doing so. DOE deserves credit for attempting to protect the public interest by limiting the amount of compensation paid to USEC until the corporation provides the key financial data that are necessary for DOE’s oversight of USEC’s activities. However, because of the complexity of the issues, including the need to achieve multiple policy goals and the importance of maintaining a reliable, domestic source of uranium enrichment, it is important for DOE to provide the Congress with the information necessary for congressional oversight of the department’s activities. We are recommending that the Secretary of Energy clarify with USEC (1) the specific oversight steps that DOE and DCAA conduct and (2) procedures that USEC should follow in responding to the department’s and DCAA’s questions on the corporation’s performance. In addition, to assist the Congress in its continuing oversight of the department, we further recommend that the Secretary of Energy report the following information in DOE’s annual budget request to the Congress until USEC has completed uranium decontamination: the remaining quantities of uranium in USEC’s and DOE’s inventories that need to be decontaminated, the estimated costs of completing this decontamination work, the source of funds necessary to compensate USEC, and the progress DCAA has made completing the annual audits of USEC’s decontamination costs. We provided a draft copy of this report to DOE and USEC for their review and comment. DOE’s letter is presented as appendix II, and USEC’s letter is presented as appendix III. In its written comments, DOE agreed with our recommendations, but requested that any report to the Congress be done on an annual basis, as part of the annual budget process. We agree with DOE and have modified our recommendation to provide for DOE reporting uranium decontamination performance and cost information in its annual budget requests rather than semiannually. 
Both DOE and USEC commented that the report would be more accurate if it acknowledged the value and the successful performance of the program. DOE’s comments stated that the overall value of the program is not stated clearly and is somewhat overshadowed by detailed issues related to USEC’s cost reports. USEC believes that the report would be more precise if it acknowledged the successful technical and financial performance of the program. The objectives of our review were to provide factual information on USEC’s progress in decontaminating uranium and on DOE’s oversight of USEC’s uranium decontamination activities. Contrary to DOE’s and USEC’s assertions, our draft report clearly described what DOE and USEC officials told us were the benefits of the uranium decontamination agreements, including the amounts of uranium in USEC’s and DOE’s inventories that have been decontaminated, the technology developed to decontaminate the uranium, the continued employment of workers at the Portsmouth plant, and the maintenance of a reliable, domestic source of uranium enrichment. However, it is also important to note that these benefits did not come without significant cost. Specifically, DOE has provided over $150 million in various forms of compensation to USEC. To provide detailed information concerning the overall value of the program was beyond the scope of this review. USEC generally agreed with the draft report’s findings and supported our recommendations to DOE. However, USEC commented that the report contained shortcomings in the presentation of its supporting analysis. Specifically, USEC said that the draft report does not acknowledge that USEC provided detailed invoice data to DOE that conformed to DOE’s rules on invoice review. On the contrary, our draft report contained detailed information on the types of information provided to DOE including reports on the amounts of uranium decontaminated each month, the amounts of waste generated, and the decontamination costs incurred. USEC states that DOE’s rules contain no requirements for incurred cost submissions. However, as our draft report stated, the contract clause in Federal Acquisition Regulation §52.216-7, which is specifically incorporated in DOE’s agreements with USEC, requires contractors to submit their final indirect cost rates, based on actual costs, to the cognizant federal agency within 6 months of the end of the contractor’s fiscal year. USEC has not complied with this requirement. In addition, USEC stated in its comments that the draft report’s discussion of USEC’s delays in responding to DOE’s follow-up questions is incomplete and inaccurate. In response, we have modified our report to note USEC’s disagreement with DOE officials’ statements regarding the number of DOE inquiries each month and USEC’s responsiveness. USEC also stated that the draft report’s title overstates the report’s findings and implies a materiality to USEC’s delays that is not supported in the body of the report. We disagree that the draft report’s title makes this implication. USEC recommends that the title be changed to better reflect the report’s recommendation that clarification of procedures would improve DOE’s oversight of the uranium decontamination agreement. The purpose of the recommendation is not for DOE to change its oversight of USEC’s activities, as is implied by USEC’s suggested title. 
Rather, the recommendation is intended to encourage DOE to better communicate its existing oversight steps to USEC and instruct the corporation how to properly respond to the department’s inquiries. DOE and USEC also provided technical comments that we incorporated into the report as appropriate. We will send copies of this report to interested congressional committees, the Secretary of Energy, and USEC, Inc. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. At the request of the Chairman, Committee on Energy and Natural Resources, United States Senate, we examined (1) the United States Enrichment Corporation’s (USEC) progress in decontaminating technetium-contaminated uranium transferred to it by the Department of Energy (DOE) prior to its privatization and (2) DOE’s oversight of USEC’s decontamination activities. To accomplish these objectives, we reviewed the preprivatization agreements between DOE and USEC that transferred uranium inventories to the corporation; memorandums of agreement and memorandums of understanding between DOE and USEC on the decontamination of technetium-contaminated uranium, signed in June 2002, April 2004, October 2004, and December 2004; DOE and USEC legal memorandums detailing DOE’s potential liability to replace uranium or compensate USEC; Federal Acquisition Regulations; and appropriate statutes, including the Energy Policy Act of 1992 and the USEC Privatization Act of 1996. We also interviewed officials from DOE’s Portsmouth and Paducah Project Office; Oak Ridge Operations Office; Environmental Management Consolidated Business Center; Office of Environmental Management; Office of Nuclear Energy; Office of General Counsel; and Office of the Under Secretary for Energy, Science, and Environment. In addition, we interviewed USEC officials at the corporation’s headquarters in Bethesda, Maryland, and at the Portsmouth Gaseous Diffusion Plant in Piketon, Ohio. We also interviewed officials with the Defense Contract Audit Agency (DCAA), which conducts audits of USEC’s decontamination costs. To determine USEC’s progress in decontaminating uranium, we reviewed USEC’s monthly reports detailing its monthly decontamination progress as well as remaining uranium inventories to be decontaminated. We also reviewed USEC data on uranium storage cylinders processed each month and the specific amount of uranium in each cylinder. We also obtained USEC’s monthly cost statements submitted to DOE, which detail USEC’s monthly costs under a variety of categories, such as labor, plant overhead, and materials. We examined the reliability of uranium decontamination and cost data by obtaining responses from DOE to a series of data reliability questions covering issues such as data entry access, internal control procedures, and the accuracy and completeness of the data. We asked follow-up questions whenever necessary. We determined that these data were sufficiently reliable for the purposes of this report. 
Furthermore, we reviewed USEC’s marketing strategy for selling clean uranium transferred to the corporation by DOE under the December 2004 agreement and reviewed USEC’s sales reports submitted to DOE detailing the amount of uranium USEC sold to each buyer, the contract price of the uranium, its delivery date, and the date of payment. In addition, we reviewed the sales contracts between USEC and buyers of the clean uranium as well as invoices confirming receipt of funds from each uranium sale. We also visited the Portsmouth Gaseous Diffusion Plant site to inspect uranium decontamination facilities and to interview DOE and USEC officials. To assess DOE’s oversight of USEC’s uranium decontamination activities, we interviewed DOE officials who conduct oversight of USEC’s decontamination work at the Portsmouth and Paducah Project Office, Oak Ridge Operations Office, and the Environmental Management Consolidated Business Center. We discussed DOE’s processes for conducting analyses of USEC’s monthly reports on decontamination progress and costs and the steps DOE takes to oversee USEC’s sales of clean uranium transferred by DOE under the December 2004 agreement. We also discussed DOE’s oversight with USEC officials. In addition, we obtained copies of five audits conducted by DCAA of USEC’s monthly decontamination costs and interviewed DCAA auditors to discuss the objectives, scope, and methodology of DCAA’s audit work. We conducted our work between August 2005 and May 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Diane B. Raynes (Assistant Director), Ryan T. Coles, Jessica A. Evans, Doreen S. Feldman, Christopher E. Ferencik, Neill W. Martin-Rolsky, Mehrzad Nadji, Omari A. Norman, Susan A. Poling, Katherine M. Raheb, Keith A. Rhodes, Susan D. Sawtelle, and Rebecca Shea made key contributions to this report.
Prior to the 1998 privatization of the U.S. Enrichment Corporation (USEC), the Department of Energy (DOE) transferred about 45,000 metric tons of natural uranium to USEC to, among other things, be enriched to fulfill USEC's nuclear fuel contracts. About 9,550 metric tons were subsequently discovered to be contaminated with technetium, a radioactive metal, at levels exceeding the specification for nuclear fuel. Although DOE has not admitted liability, DOE and USEC have entered into agreements under which USEC is decontaminating the uranium. DOE has compensated USEC for its decontamination costs in several ways, including using proceeds from sales of government-owned clean uranium. GAO was asked to examine (1) USEC's progress in decontaminating uranium and (2) DOE's oversight of USEC's decontamination activities. A forthcoming GAO legal opinion will address DOE's legal authority to transfer clean uranium to USEC for sale and use the proceeds to compensate USEC for its decontamination services. As of February 28, 2006, USEC reported that about 10 percent of the contaminated uranium that DOE transferred to the corporation prior to privatization remains to be decontaminated, or about 960 metric tons of the 9,550 contaminated metric tons transferred. DOE estimates USEC will finish decontaminating this uranium by the end of December 2006. Through the end of February 2006, USEC has invoiced DOE for a total of about $152 million in decontamination costs. DOE takes several steps to oversee USEC's uranium decontamination activities. DOE reviews monthly USEC reports that detail, among other things, the corporation's decontamination progress and costs. In addition, DOE, through the Defense Contract Audit Agency (DCAA), audits USEC to verify that USEC's actual costs match the amount DOE paid to the corporation and are in accordance with the provisions of the uranium decontamination agreement. However, DOE has had difficulties completing some of its oversight because of USEC's delays in providing financial data and other information. DOE officials told us that USEC sometimes takes up to 6 months before responding to its inquiries about the corporation's monthly reports. As a result, DOE has some concerns about whether USEC consistently conducts decontamination work in a cost-effective manner. DCAA has also experienced significant delays obtaining USEC financial data that it requires for its annual audit of USEC's costs. DOE uses these data to verify that USEC's actual decontamination costs match what DOE paid USEC. Until DCAA's audits are complete, DOE cannot be certain whether the compensation it provided to USEC matches USEC's actual decontamination costs. As a result, USEC may need to repay money to the government or DOE may owe additional money to USEC upon completion of these audits. In addition, the Congress has not received information to assist in the appropriations process on the progress and costs of decontamination.
State relies on a variety of decentralized information systems and networks to help it carry out its responsibilities and support business functions, such as personnel, financial management, medical, visas, passports, and diplomatic agreements and communications. The data stored in these systems are sensitive enough to be attractive targets for individuals and organizations seeking monetary gain or desiring to learn about or damage State operations. For example, much of this information deals with State employees and includes American and Foreign Service National personnel records, employee and retiree pay data, and private health records. Background investigation information about employees being considered for security clearances is also processed on State’s unclassified network as is sensitive financial and procurement information. The potential consequences of misuse of this information are of major concern. For example, unauthorized deletion or alteration of data could enable dangerous individuals to enter the United States. In addition, personnel information concerning approximately 35,000 State employees could be useful to foreign governments wishing to build personality profiles on selected employees. Further, manipulation of financial data could result in over- or underpayments to vendors, banks, and individuals, and inaccurate information being provided to agency managers and the Congress. Our objectives were to (1) determine how susceptible the State Department’s automated information systems are to unauthorized access, (2) identify what the State Department is doing to address information security issues, and (3) determine what additional actions may be needed. To determine how susceptible State’s systems are to unauthorized access, we tested the department’s technical and physical controls for ensuring that data, systems, and facilities were protected from unauthorized access. We tested the operation of these controls to determine whether they existed and were operating effectively. We contracted with a major public accounting firm to assist in our evaluation and testing of these controls. We determined the scope of our contractor’s audit work, monitored its progress, and reviewed the related work papers to ensure that the resulting findings were adequately supported. During our testing, we performed controlled penetration attacks at dial-in access points, the department’s Internet gateways, and public information servers. We also performed penetration activities to assess security controls on State’s major internal networks. In addition, we performed social engineering activities to assess user awareness, and attempted to gain physical access to two State facilities. We attempted to access State’s sensitive data and programs under conditions negotiated with State Department officials known as “rules of engagement.” These rules were developed to assist us in obtaining access to State’s facilities and information resources and to prevent damage to any systems or sensitive information. Under the rules, all testing was required to take place within the department’s headquarters building between 8:00 a.m. and 10:00 p.m. and was physically monitored by State employees and contractor personnel. In addition, State monitors were authorized to stop our testing when we obtained access to sensitive information or systems. We were also required to inform State personnel about the types of tests we planned to conduct prior to the testing. 
As agreed with State, we limited the scope of our testing to unclassified systems. To identify what State is doing to address the issue of unauthorized access to its information systems, we discussed with department officials their efforts to protect these systems and reviewed supporting documentation. For example, we obtained information on the department’s initiatives to improve the security of its mainframe computers and establish a centrally managed information system security officer program at headquarters. We also discussed with department officials preliminary plans to expand the use of the Internet and reviewed supporting documentation. We reviewed numerous evaluations of information security at domestic State locations and foreign posts performed by the department’s Bureau of Diplomatic Security. We reviewed recent reports submitted by State to the President and the Congress under provisions of the 1982 Federal Managers’ Financial Integrity Act, which outlined known information management and technology weaknesses and plans for corrective actions. We reviewed the department’s policy guidance on information security as contained in the Foreign Affairs Manual, Volume 1 and Volume 12, Chapter 600, and its Fiscal Year 1997-2001 Strategic and Performance Management Plan for Information Resources Management. We visited a computer security assessment center in Fairfax, Virginia, which the department uses primarily for certifying and accrediting software to be used on State information systems. To evaluate State’s security program management and formulate recommendations for improvement, we compared State’s practices to guidelines in two National Institute of Standards and Technology (NIST) publications, the “Generally Accepted Principles and Practices for Securing Information Technology Systems” and “An Introduction to Computer Security: The NIST Handbook,” as well as other guides and textbooks. In addition, we reviewed a Department of State Inspector General report on unclassified mainframe systems security. We also relied on our work to identify the best information security management practices of non-federal organizations which is presented in our Executive Guide Information Security Management: Learning From Leading Organizations (GAO/AIMD-98-21 Exposure Draft, November 1997). The guide identifies key elements of an effective information security program and practices which eight leading nonfederal organizations have adopted and details the management techniques these leading organizations use to build information security controls and awareness into their operations. We performed our audit work primarily at State Department headquarters offices from July 1996 through August 1997 in accordance with generally accepted government auditing standards. Our penetration tests revealed that State’s sensitive but unclassified information systems can be easily accessed by unauthorized users who in turn can read, delete, modify, or steal sensitive information on State’s operations. First, while simulating outside attackers without knowledge of State’s systems, we were able to successfully gain unauthorized access to State’s networks through dial-in connections to modems. Having obtained this access, we could have modified or deleted important data, shut down services, downloaded data, and monitored network traffic such as e-mail and data files. We also tested internal network security controls and found them to be inadequate. 
For example, we were able to gain privileged (administrator) access to host systems on several different operating platforms (such as UNIX and Windows NT). This access enabled us to view international financial data, travel arrangements, detailed network diagrams, a listing of valid users on local area networks, e-mail, and performance appraisals, among other sensitive data. Our tests also found that security awareness among State employees is problematic. We were able to gain access to State’s networks by guessing user passwords, bypassing physical security at one facility, and searching unattended areas for user account information and active terminal sessions. For example, in several instances we were able to enter a State facility without required identification. In an unlocked work area for one office, we found unattended personal computers logged onto a local area network. We also found a user identification and password taped to one of the computers. Using these terminals, we were able to download a file that contained a password list. In another unlocked area, we were able to access the local area network server and obtain supervisor-level access to a workstation. With this access, we could have added or deleted users, implemented unauthorized programs, and eliminated audit trails. Our tests of dial-in-security, internal network security, and physical security demonstrated that information critical to State’s operations as well as to the operations of other federal agencies operating overseas can be easily accessed and compromised. For example, we gained access to information that detailed the physical layout of State’s automated information infrastructure. These data would make it much easier for an outsider who had no knowledge of State’s operations or infrastructure to penetrate the department’s computer resources. In addition, we obtained information on administrative and sensitive business operations which may be attractive targets to adversaries or hackers. At the conclusion of our testing, we provided senior State managers with the test results and suggestions for correcting the specific weaknesses identified. Our tests were successful primarily because State’s computer security program is not comprehensive enough to effectively manage the risks to which its systems and networks are exposed. For example, the department does not have the information it needs to effectively manage its risks—it does not fully appreciate the sensitivity of its information, the vulnerabilities of its systems, or the costs of countermeasures. In addition, security is not managed by a strong focal point within the agency that can oversee and coordinate security activities. State also does not have the types of controls needed to ensure the security of its sensitive information, including current and complete security policies and enterprisewide incident reporting and response capability. Moreover, top managers at State have not demonstrated that they are committed to strengthening security over the systems that they rely on for nearly every aspect of State’s operations. Our study of information security management at leading organizations identified the following five key activities that are necessary in order to effectively manage security risks. A strong framework with a central management focal point and ongoing processes to coordinate efforts to manage information security risks. 
- Risk assessment procedures that are used by business managers to determine whether risks should be tolerated or mitigated and to select appropriate controls.
- Comprehensive and current written policies that are effectively implemented and then updated to address new risks or clarify areas of misunderstanding.
- Steps to increase the awareness of users concerning the security risks to information and systems and their responsibilities in safeguarding these assets.
- The ability to monitor and evaluate the effectiveness of policy and other controls.

Furthermore, each of these activities should be linked in a cycle to help ensure that business risks are continually monitored, policies and procedures are regularly updated, and controls are in effect. Perhaps the single most important factor in prompting the establishment of an effective information security program is commitment from top management. Ultimately, it is top managers who ensure that the agency embraces all elements of good security and who drive the risk management cycle of activity. However, State’s top managers are not demonstrating the commitment necessary to practice good security, and State’s information security program does not fully incorporate any of the activities described above. Specifically, there is (1) no central management focal point, (2) no routine process for assessing risks, (3) no comprehensive and current set of written policies, (4) inadequate security awareness among State personnel, and (5) no effective monitoring and evaluation of policies and controls. In addition, State lacks a comprehensive information security plan that would help ensure that these elements are in place.

While senior management at State has shown some interest in information security through actions including drafting memoranda, forming working groups to improve information security, and approving limited funding for selected security activities, this interest has not been sufficient to overcome longstanding and institutionalized security weaknesses. For example, while top management at State is aware of longstanding problems associated with its information management and information security and has reported a number of these high-risk and material weaknesses to the President and the Congress under provisions of the 1982 Federal Managers’ Financial Integrity Act, these weaknesses remain unresolved. In particular, mainframe computer security was identified as a material weakness 10 years ago but has not yet been corrected. As the State Inspector General’s report on mainframe systems security concluded:

“The lack of senior management’s involvement in addressing authority, responsibility, accountability and policy is the critical issue perpetuating the Department’s lax approach to mainframe security . . . . In addition, the lack of clear management responsibility has resulted in incomplete and unreliable security administration . . . .”

Many mid-level State officials told us that the information security problems we and others identified during our review were already known throughout the department. Collectively, they believed that senior State management was not convinced of the seriousness of the problems and was unable or unwilling to commit the requisite attention and resources to resolve them. They noted that budget requests for security measures, such as information systems security officers, were approved but later rescinded.
Many officials said that while the assignment of a chief information officer (CIO) was a critical step in elevating the importance of information management and security throughout the department, the CIO does not have the authority needed to ensure that improvements are made throughout State’s decentralized activities. They also said that budgets for important controls, such as Bureau of Diplomatic Security information security evaluations at worldwide posts, are severely constrained and that the same security deficiencies are found and ignored year after year. Other officials reported that State personnel do not carry out their security responsibilities satisfactorily because security is assigned as a low-priority collateral duty.

The Department of State is a decentralized organization with bureaus operating semi-autonomously in their areas of responsibility. As a result, information resources management is scattered throughout the department. There is no single office responsible for overseeing the architecture, operations, configuration, or security of its networks and systems. The chief information officer, the Bureau of Diplomatic Security, and the information management office all perform information security functions. Many offices and functional bureaus also manage, develop, and procure their own networks and systems. In addition, according to Bureau of Diplomatic Security officials, some of the approximately 250 posts operated by State around the world have established their own network connections, further complicating security and configuration management. As one internal State assessment observed:

“Since there is no enterprise-wide authority for ensuring the confidentiality, integrity and availability of information as it traverses the unclassified network, it is extremely difficult to detect when information is lost, misdirected, intercepted or spoofed. Therefore, a post that is not expecting to receive information will not miss critical information that never arrives. More importantly, if a post does receive information it was not expecting, there is no office to confirm that the transmission was legitimate and not disinformation sent by a network intruder or disgruntled employee.”

In assessing risks, managers should consider the (1) value and sensitivity of the information to be protected, (2) vulnerabilities of their computers and networks, (3) threats, including hackers, thieves, disgruntled employees, competitors, and in State’s case, foreign adversaries and spies, (4) countermeasures available to combat the problem, and (5) cost-effectiveness of the countermeasures. In addition to providing the basis for selecting appropriate controls, results obtained from risk assessments should also be used to help develop and update an organization’s security plan and policies. We met with representatives from the Office of Information Management and the Bureau of Diplomatic Security who told us that they are unaware of any significant risk management activity related to information security within the department. These officials stated that they have not been asked to provide technical assistance to program managers at State. One significant exception is the comprehensive risk analysis performed by the Bureau of Diplomatic Security, which evaluated the risks associated with Internet connectivity. Computer security evaluations performed at posts located around the world by Bureau of Diplomatic Security staff further demonstrate that State officials are not addressing and correcting risks appropriately.
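To make the risk calculus just described concrete, the sketch below scores a few assets on value, vulnerability, and threat and ranks them against the annual cost of candidate countermeasures. The asset names, scores, and dollar figures are hypothetical illustrations, not State data, and the multiplicative scoring convention is only one common approach.

```python
# A minimal sketch of the risk-assessment calculus described above.
# Asset names, scores, and costs are hypothetical illustrations,
# not actual State Department data.

ASSETS = [
    # (asset, value 1-5, vulnerability 1-5, threat 1-5,
    #  candidate countermeasure, annual cost in dollars)
    ("financial system records",    5, 4, 5, "access controls and auditing", 250_000),
    ("network infrastructure maps", 4, 5, 3, "restricted, encrypted storage", 40_000),
    ("unclassified e-mail",         3, 4, 4, "password policy and training", 60_000),
]

def exposure(value: int, vulnerability: int, threat: int) -> int:
    """Relative exposure score; higher scores are stronger candidates
    for mitigation rather than risk tolerance."""
    return value * vulnerability * threat

# Rank assets so managers can weigh each countermeasure's cost
# against the exposure it would reduce.
for asset, val, vul, thr, measure, cost in sorted(
        ASSETS, key=lambda a: exposure(a[1], a[2], a[3]), reverse=True):
    print(f"{asset}: exposure {exposure(val, vul, thr):3d} "
          f"-> {measure} (${cost:,} per year)")
```

The Bureau of Diplomatic Security’s post evaluations, discussed next, show the kinds of deficiencies that accumulate when no such assessment discipline is applied.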
The evaluations revealed numerous problems at foreign posts, such as use of inappropriate passwords and user identifications, failure to designate an information systems security officer, poor or nonexistent systems security training, and lack of contingency plans. Diplomatic security staff also told us that they have found that some posts have installed modem connections and Internet connections without approval, further complicating the department’s ability to manage and secure its networks. Annual analyses of these evaluations show a pattern in which system security requirements are continually overlooked or ignored. Diplomatic security staff noted that the majority of the security deficiencies that they found are correctable with modest capital outlay and more attentive system administration.

State’s information security policies are primarily contained in its Foreign Affairs Manual. State also provides policy guidance in other formats, including instructions, cablegrams, letters, and memoranda. These policies are deficient in several respects. First, they fail to acknowledge some important security responsibilities within the department. For example, while the security manual details responsibilities of system managers and information systems security officers, it does not address the information security responsibilities of the department’s CIO. The CIO’s authority and ability to operate effectively would be enhanced if departmental policy recognized these legislatively prescribed security responsibilities. State’s Foreign Affairs Manual was updated in February 1997 to describe the CIO position, but it does not discuss any information security responsibilities. Second, the Foreign Affairs Manual does not require risk assessments and consequently provides no mandate for, or guidance on, their use. As previously discussed, the department does not routinely assess and manage its information security risks. There is no specific State policy requiring threat and vulnerability assessments, despite their known value. Third, State’s policy manual does not sufficiently address users’ responsibilities. For example, the manual does not emphasize that users should be accountable for securing their automated data, much as they are held responsible for securing classified paper documents. Nor does it adequately emphasize the importance of information and computer resources as critical assets that must be protected. A significant finding in the department’s Internet risk analysis is that users and even systems administrators “do not feel that their unclassified data is sensitive and therefore spend little to no effort in protecting the data from external disclosure.” Clearly stated policy and effective implementation could contribute greatly to increased awareness.

Often, computer attacks and security breakdowns are the result of failures on the part of computer users to take appropriate security measures. For this reason, it is vital that employees who use a computer system in their day-to-day operations be aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining its confidentiality and integrity. In accepting responsibility for security, users need to follow organizational policies and procedures and acknowledge the consequences of security violations. They should also devise effective passwords, change them frequently, and protect them from disclosure.
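As a simple illustration of the kind of password composition rule this implies, the check below rejects short or single-case passwords. The specific thresholds are illustrative assumptions, not State policy or NIST guidance.

```python
import re

def is_complex(password: str, min_length: int = 8) -> bool:
    """Accept only passwords with mixed upper- and lower-case letters
    and at least one digit. Thresholds are illustrative assumptions."""
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

# Dictionary words and single-case strings, the kinds of passwords
# that are easily guessed, fail the check; a mixed-character
# passphrase passes.
assert not is_complex("embassy")
assert not is_complex("EMBASSY99")
assert is_complex("eMba55y-Lima19")
```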
Further, it is important that users not leave their computers, workstations, or terminals unattended and that they log out when finished using their computers. In addition, users should help maintain physical security over their assigned areas and computer resources.

Many computer users at State had weak passwords that were easily guessed, indicating that they were unaware of or insensitive to the need for secure passwords. During our testing of State’s systems, we were able to guess passwords on a number of machines on various networks using both manual guessing and automated password-cracking programs. One way to prevent password guessing is to ensure that users employ complex passwords, such as those composed of alphanumeric, upper- and lower-case characters. However, there was no evidence that State was training its users to employ these techniques. We also found little evidence that State was training its users to prevent unauthorized access to information. For example, we called a user under the pretense that we were systems maintenance personnel and were able to convince her to disclose her password. We also bypassed physical security at a State facility and searched unattended areas for user account information and active terminal sessions. For example, in several instances we were able to enter a facility without the required State identification by using turnstiles designed for handicapped use. Once inside the facility, we entered unlocked work areas and found unattended personal computers logged onto a local area network. From one of these computers, we downloaded a file that contained a password list. We also noticed a password and user identification code taped to the desk at a workstation.

Some key controls are not in place at State to ensure that it can defend its sensitive information and systems. For example, State has very little departmentwide capacity to respond to security incidents, and individual bureaus currently handle incidents on an ad hoc basis. Problems experienced are not shared across the department because the incidents are not reported or tracked centrally and very little documentation is prepared. Furthermore, State does not regularly test its systems and network access controls through penetration testing. Finally, State has limited ability to visit all its worldwide locations to perform security evaluations. Our study of information security management at leading organizations found that an organization must monitor and evaluate its policies and other controls on a regular basis to reassess whether it is achieving its intended results. Testing the existence and effectiveness of controls and other risk reduction efforts can help determine if they are operating effectively. Over time, policies and controls may become inadequate because of changes in threats, changes in operations, or deterioration in the degree of compliance.

Because breaches in information security, computer viruses, and other related problems are becoming more common, an aggressive incident response capability is an important control and a key element of a good security program. Organizations need this capability to respond quickly and effectively to security incidents, help contain and repair any damage caused, and prevent future damage. In recognition of the value of an incident response capability, federal agencies are now required by the Office of Management and Budget (OMB) to establish formal mechanisms to respond to security incidents.
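A central reporting and tracking mechanism of the sort described here need not be elaborate. As a simplified illustration, the sketch below shows one possible shape for a department-wide incident log; the record fields and the example entry are assumptions, not an actual State or OMB schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    """One centrally tracked security incident. Field names are
    illustrative assumptions, not an actual State schema."""
    reporting_office: str   # bureau or overseas post
    category: str           # e.g., "unauthorized access", "virus"
    summary: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False

# A single department-wide log lets every bureau learn from
# incidents that others have already experienced.
incident_log: list[Incident] = []

def report_incident(office: str, category: str, summary: str) -> Incident:
    incident = Incident(office, category, summary)
    incident_log.append(incident)
    return incident

report_incident("Example Post", "unauthorized access",
                "Unapproved modem connection discovered on office LAN")
print(f"{len(incident_log)} incident(s) visible department-wide")
```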
Many organizations are now setting up emergency response teams and coordinating with other groups, including the Federal Computer Incident Response Capability and Carnegie Mellon’s Computer Emergency Response Team. Knowing that organizations have a formidable response capability has proved to be a deterrent to hackers and other unauthorized users. State acknowledges that it needs the capability to detect and react to computer incidents and information security threats in a timely and efficient manner. At the time of our review, department personnel were drafting incident response procedures. Bureau of Diplomatic Security officials told us that they are beginning to develop an incident response capability at the laboratory that they use to evaluate and accredit systems and software. Information management officials also told us that efforts were underway to obtain some services from the Federal Computer Incident Response Capability that would help them detect and react to unauthorized access to their systems.

As discussed earlier, the Bureau of Diplomatic Security performs evaluations of field locations to identify and make recommendations for correcting security weaknesses. However, Bureau of Diplomatic Security officials told us that budget constraints limit their ability to perform these evaluations and visit all locations on a systematic and timely basis. State officials also told us that they need to periodically assess the vulnerabilities of and threats to their systems. They also acknowledged the need for and importance of developing a reporting mechanism that can be used across the department to share information on vulnerabilities and incidents.

An additional control mechanism that could help State ensure that controls are in place and working as intended, and that its incident response capability is strong, is the annual financial statement audit. The Chief Financial Officers Act of 1990 requires that this audit be conducted annually. A part of this audit could involve a detailed examination of an agency’s general and application computer controls. We have been working with the department’s inspector general to ensure that State’s financial audit includes a comprehensive assessment of these controls. When this audit is complete, management will be able to better gauge its progress in establishing and implementing sound information security controls.

Federal agencies are required by the Computer Security Act to develop and implement security plans to protect any systems containing sensitive data. The February 1996 revision to Appendix III of OMB Circular A-130 requires that a summary of the security plans be incorporated into an agency’s strategic information resources management plan. State has no information security plan. Instead, the department’s IRM Strategic and Performance Management Plan includes several pages of text on information security and its implementation. This discussion highlights the development of computer security and privacy plans for each system containing sensitive information, as required by the Computer Security Act. However, when we requested copies of these individual plans, we were told that they could not be located and that even if they were found, they would be virtually useless because they were drafted in the late 1980s, never updated, and are now obsolete.
The strategic plan also references other efforts underway within the department, including assessments of various software applications to identify vulnerabilities and evaluations of antivirus software products. However, this discussion is insufficient. It merely lists a set of ad hoc and largely unrelated programs and projects to improve information security. It does not relate these programs to any risk-based analysis of threats to and vulnerabilities of the department’s networks or systems. Furthermore, this discussion mentions the Internet risk analysis but neither endorses nor discusses planned efforts to implement any of its key recommendations.

A companion document to the strategic plan, the department’s February 1997 Tactical Information Resources Management Plan, indicates the lack of emphasis that information security receives. According to this plan, the department should closely monitor and centrally manage all information resource management initiatives that “are critical to the Department missions; will cost more than $1 million through their life cycle; have schedules exceeding one year; and cut across organizational lines.” However, the plan acknowledges that “at this time the Department has no Security projects that meet the criteria” above. In addition, the plan ignores the need for centralized management of information technology projects and, instead, requires individual offices to fund and manage their own security requirements.

Internet security was the only area in which we found that State’s controls were currently adequate. We attempted to gain access to internal State networks by going through and around State’s Internet gateways or exploiting information servers from the outside via the Internet, but we were not able to gain access to State’s systems. State’s protection in this area is adequate, in part, because the department has limited its use of and access to the Internet. However, State officials have been requesting greater Internet access, and the department is considering various options for providing it. Expansion of Internet services would provide more pathways and additional tools for an intruder to attempt to enter unclassified computer resources and would therefore increase the risk to State systems. Recognizing this, State conducted an analysis of the risks involved with increasing Internet use. However, the department has not yet decided to what extent it will accept and/or address these new risks. Until it does so and implements a comprehensive security program that ensures that top managers are committed to enforcing security controls and users are fully aware of their computer security responsibilities, State will not be in a good position to expand its Internet use.

Networked information systems offer tremendous potential for streamlining and improving the efficiency of State Department operations. However, they also greatly increase the risks that sensitive information supporting critical State functions can be attacked. Our testing demonstrated that State does not have adequate controls to protect its computer resources and data from external attacks and from unauthorized activities of trusted users who are routinely allowed access to computer resources for otherwise legitimate purposes. These weaknesses pose serious risks to State information and operations and must be mitigated.
We recognize that no organization can anticipate all potential vulnerabilities, and even if it could, it may not be cost-effective to implement every measure available to ensure protection. However, State has yet to take some basic steps to upgrade its information systems security and improve its position against unauthorized access. These steps include ensuring that top managers are fully aware of the need to protect State’s computer resources, establishing a strong central management focal point to remedy the diluted and fragmented security management structure, and addressing the risks of additional external connectivity before expanding its Internet usage. Until State embraces these important aspects of good computer security, its operations, as well as those of other federal agencies that depend on State, will remain vulnerable to unauthorized access to computer systems and data.

We reaffirm the recommendations we made in our March 1998 classified report. These recommendations called on State to take the following actions:
- Establish a central information security unit and assign it responsibility for facilitating, coordinating, and overseeing the department’s information security activities. In doing so, assign the Chief Information Officer the responsibility and full authority for ensuring that the information security policies, procedures, and practices of the agency are adequate; clarify the computer security responsibilities of the Bureau of Diplomatic Security, the Office of Information Management, and individual bureaus and diplomatic posts; and consider whether some duties that have been assumed by these offices can be assigned to, or at a minimum coordinated with, the central information security unit.
- Develop policy and procedures that require senior State managers to regularly determine the (1) value and sensitivity of the information to be protected, (2) vulnerabilities of their computers and networks, (3) threats, including hackers, thieves, disgruntled employees, foreign adversaries, and spies, (4) countermeasures available to combat the problem, and (5) cost-effectiveness of the countermeasures.
- Revise the Foreign Affairs Manual so that it clearly describes the legislatively mandated security responsibilities of the Chief Information Officer, the security responsibilities of senior managers and all computer users, and the need for and use of risk assessments.
- Develop and maintain an up-to-date security plan and ensure that revisions to the plan incorporate the results obtained from risk assessments.
- Establish and implement key controls to help the department protect its information systems and information, including periodic penetration testing to identify vulnerabilities in State’s systems; assessments of the department’s ability to (1) react to intrusions and attacks on its information systems, (2) respond quickly and effectively to security incidents, (3) help contain and repair any damage caused, and (4) prevent future damage; and central reporting and tracking of information security incidents to ensure that knowledge of these problems can be shared across the department and with other federal agencies.
- Ensure that the results of the annual financial statement audits required by the Chief Financial Officers Act of 1990 are used to track the department’s progress in establishing, implementing, and adhering to sound information security controls.
- Require department managers to work with the central unit to expeditiously review the specific vulnerabilities and suggested actions we provided to State officials at the conclusion of our testing. After the department has reviewed these weaknesses and determined the extent to which it is willing to accept or mitigate security risks, assign the central unit responsibility for tracking the implementation and/or disposition of these actions.
- Direct the Assistant Secretary for Diplomatic Security to follow up on the planned implementation of cost-effective enhanced physical security measures.
- Defer the expansion of Internet usage until (1) known vulnerabilities are addressed using risk-based techniques and (2) actions are taken to provide appropriate security measures commensurate with the planned level of Internet expansion.

The Department of State provided written comments on a draft of our classified report and concurred with eight of our nine recommendations. In summary, State said that its Chief Information Officer is beginning to address the lack of a central focus for information systems security through the establishment of a Security Infrastructure Working Group; agreed to formalize and document risk management decisions; agreed to revise provisions of the Foreign Affairs Manual related to information security and undertake an evaluation of one of its most significant networks based on our review; and said it is implementing a plan to correct the technical weaknesses identified during our testing. State also took steps to minimize unauthorized physical access to a State facility.

State did not concur with our recommendation to defer the expansion of Internet usage. In explaining its nonconcurrence, State asserted that expanded use of Internet resources is a priority; that the Chief Information Officer, Office of Information Management, and Bureau of Diplomatic Security are coordinating on architecture and security functionality that should mitigate any significant security vulnerabilities through the use of a separate enclave; that segmenting the network, implementing controlled interfaces, restricting services, restricting the processing or transmission of sensitive unclassified information, and proactively monitoring the network and handling incidents should mitigate these risks; and that a formal risk analysis of expanding the Internet throughout the department has been conducted and known risk factors are being considered in the Internet expansion.

Some of these assertions are invalid; the rest do not fully address our recommendation. First, designating expanded Internet usage as a priority does not mean that State should proceed before it fully implements appropriate security controls. If State expands Internet connectivity without effectively mitigating the significant additional risks that entails, it will increase its already serious vulnerabilities to individuals or organizations seeking to damage State’s operations, commit terrorism, or obtain financial gain. Second, State does not explain how “coordination on architecture and security functionality” between the Chief Information Officer, Office of Information Management, and Bureau of Diplomatic Security will reduce Internet risks, including computer attacks from those wishing to steal information or disable the department’s systems. As noted in this report, the organizations cited by State share various information security responsibilities, but have different missions and interests.
This assertion does not address our recommendation that State establish an organizational unit with responsibility for and authority over all information security activities, including protecting the department from computer attacks via the Internet. Third, State identified a number of controls that it believes will reduce Internet security risks, including establishing a (logically) separate network (enclave) dedicated to Internet usage, and proactively monitoring the network and handling incidents. If effectively implemented and maintained, these measures can help reduce security risks. However, State did not specify how it planned to implement these controls, what resources it has allocated to these efforts, or whether they would be completed before State expands its Internet usage. Our point is that State must actually implement and maintain security measures to mitigate these risks prior to increasing Internet usage. Finally, we discussed State’s risk analysis of expanded Internet usage in our report. This analysis identifies numerous risks associated with expansion and options for addressing them. It is not sufficient that “known risk factors are being considered in the Internet expansion”; as previously noted, State must mitigate these risks prior to increasing Internet usage.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies of this report to the Chairman and Ranking Minority Members of the House Government Reform and Oversight Committee; the Senate Committee on Appropriations, Subcommittee on Commerce, Justice, State, the Judiciary and Related Agencies; the House Committee on Appropriations, Subcommittee on Commerce, Justice, State, the Judiciary and Related Agencies; and the Secretary of State. Copies will be available to others upon request. If you have questions about this report, please contact me at (202) 512-6240. Major contributors are listed in appendix I.

Keith A. Rhodes, Technical Director
John B. Stephenson, Assistant Director
Kirk J. Daubenspeck, Evaluator-in-Charge
Patrick R. Dugan, Auditor
Cristina T. Chaplain, Communications Analyst
Pursuant to a congressional request, GAO reviewed: (1) how susceptible the Department of State's unclassified automated information systems are to unauthorized access; (2) what State is doing to address information security issues; and (3) what additional actions may be needed to address the computer security problem. GAO noted that: (1) State's information systems and the information contained within them are vulnerable to access, change, disclosure, disruption, or even denial of service by unauthorized individuals; (2) GAO conducted penetration tests to determine how susceptible State's systems are to unauthorized access and found that it was able to access sensitive information; (3) moreover, GAO's penetration of State's computer resources went largely undetected, further underscoring the department's serious vulnerability; (4) the results of GAO's tests show that individuals or organizations seeking to damage State operations, commit terrorism, or obtain financial gain could possibly exploit the department's information security weaknesses; (5) although State has some projects under way to improve security of its information systems and help protect sensitive information, it does not have a security program that allows State officials to comprehensively manage the risks associated with the department's operations; (6) State lacks a central focal point for overseeing and coordinating security activities; (7) State does not routinely perform risk assessments to protect its sensitive information based on its sensitivity, criticality, and value; (8) the department's primary information security policy document is incomplete; (9) the department lacks key controls for monitoring and evaluating the effectiveness of its security programs, and it has not established a robust incident response capability; (10) State needs to greatly accelerate its efforts to address these serious information security weaknesses; (11) however, to date, its top managers have not demonstrated that they are committed to doing so; (12) Internet security was the only area in which GAO found that State's controls were currently adequate; (13) however, plans to expand its Internet usage will create new security risks; (14) State conducted an analysis of the risks involved with using the Internet more extensively, but has not yet decided how to address the security risks of the additional external connectivity that this analysis raised; and (15) if State increases its Internet use before instituting a comprehensive security program and addressing the additional vulnerabilities unique to the Internet, it will unnecessarily increase the risks of unauthorized access to its systems and information.
The Results Act is the centerpiece of a statutory framework to improve federal agencies’ management activities. The Results Act was designed to shift the focus of attention of federal agencies from the amount of money they spend, or the size of their workloads, to the results achieved by their programs. Agencies are expected to base goals on their results-oriented missions, develop strategies for achieving their goals, and measure actual performance against the goals. The Results Act requires agencies to consult with the Congress in developing their strategic plans. This consultation gives the Congress the opportunity to help ensure that the agencies’ missions and goals are focused on results, are consistent with the programs’ authorizing laws, and are reasonable in light of fiscal constraints. The products of these consultations are to be clearer guidance to agencies on their missions and goals, which should lead to better information to help the Congress choose among programs, consider alternative ways to achieve results, and assess how well agencies are achieving them. The Results Act required SBA and other executive agencies to complete their first strategic plans and submit them to the Congress and OMB by September 30, 1997. The act also requires that agencies submit their first annual performance plans, which set out measurable goals that define what will be accomplished during a fiscal year, to the Congress after the President submits his fiscal year 1999 budget to the Congress. OMB requested that agencies integrate, to the extent possible, their annual performance plans into their fiscal year 1999 budget submissions. OMB, in turn, is required to include a governmentwide performance plan in the President’s fiscal year 1999 budget submission to the Congress. SBA’s September 30, 1997, strategic plan is an improvement over the March 5, 1997, version of the plan. The September plan includes the two required elements that were lacking in the March version. First, the September plan includes a section on how program evaluations were used to develop the plan and mentions some specific evaluations that SBA plans in the future, such as those for business information centers. Second, it includes a section entitled “Linkages to Annual Performance Plans” that recognizes the need to link (1) the strategic goals in the plan to annual performance goals and (2) SBA’s annual budget submissions to annual performance goals. In addition, the five goals in the September plan—which are to (1) increase opportunities for small business success, (2) transform SBA into a 21st century, leading edge financial institution, (3) help businesses and families recover from disasters, (4) lead small business participation in welfare-to-work, and (5) serve as the voice of America’s small businesses—are, as a group, more clearly linked to SBA’s statutory mission than the goals in the March version of the plan. Also, the inclusion of date-specific performance objectives to help measure performance makes the strategic goals and objectives in the September plan more amenable to a future assessment of SBA’s progress. 
For example, under the goal of increasing opportunities for small business success, one of SBA’s performance objectives is as follows: “By the year 2000, SBA will help increase the share of federal procurement dollars awarded to small firms to at least 23 percent.” Under the goal of transforming SBA into a 21st century, leading edge financial institution, one of SBA’s performance objectives is as follows: “By the year 2000, SBA will expand the Chief Financial Officer (CFO) annual financial audit to include a separate opinion on whether SBA’s internal control structure meets Committee of Sponsoring Organizations (COSO) of the Treadway Commission standards for financial reporting. By the year 2002, SBA will receive an unqualified opinion on its internal control structure for financial reporting.”

SBA also improved its strategic plan by more clearly and explicitly linking the strategies in the September plan to the specific objectives that they are intended to achieve. Also, some of the strategies are more detailed and more clearly indicate how they will enable SBA to accomplish its goals and objectives. For example, under the objective of “implementing effective oversight” of lenders and other resource partners, SBA’s strategies include (1) establishing loan program credit, service, and mission standards to measure lenders’ performance and (2) developing a scoring system, based on objective criteria, that measures and determines whether lenders’ performance is consistent with the laws and regulations governing SBA programs. Furthermore, certain strategies recognize the crosscutting nature of some activities; for example, a strategy for achieving SBA’s strategic goal to “help businesses and families recover from disasters” is to combine SBA’s home loss verification with the Federal Emergency Management Agency’s home inspections.

We also observed certain other changes that we believe have improved SBA’s strategic plan: The mission statement in SBA’s September plan appears to incorporate observations we made in our July report: it is concise and reflects SBA’s key statutory authorities of aiding, counseling, and assisting small businesses and of providing disaster assistance to families and businesses. In general, the September plan does a better job of recognizing that SBA’s success in achieving certain goals and objectives in its plan is dependent on the actions of others. For example, one of the strategies under the objective “expanding small business procurement opportunities” calls for SBA to “work with other federal agencies to set higher small business procurement goals and assist these agencies in meeting those goals.”

SBA significantly improved its September plan by more clearly and explicitly linking performance measures to the specific objectives that they are intended to assess. Performance measures are directly linked to 11 of the 14 performance objectives in the plan. An exception is SBA’s fifth goal of serving as a voice for America’s small business, where the performance measures are listed as a group at the end of the discussion of the goal’s three objectives. While SBA’s September 30, 1997, strategic plan is an improvement over the March 1997 version that we reviewed, we believe that further revisions to the plan as SBA continues to implement the Results Act and build on current efforts would enable SBA’s plan to better meet the purposes of the Results Act.
As noted earlier, while the five goals in the September plan are more clearly linked to SBA’s statutory mission, the relationship of one goal—leading small businesses’ participation in the welfare-to-work effort—to SBA’s mission is unclear. While the performance objective for this goal places emphasis on helping small businesses meet their workforce needs, the subsequent discussion implies a focus on helping welfare recipients find employment; for example, the plan states that “SBA’s goal is to help 200,000 work-ready individuals make the transition from welfare to work . . . .” It is not clear in the plan why SBA is focusing on welfare recipients and not on other categories of potential employees to help meet small businesses’ workforce needs. Under the Results Act, strategy sections in the strategic plans are to briefly describe items, such as the human, capital, information, or other resources needed to achieve goals and objectives. The strategy sections in SBA’s September plan lack such a discussion. At the same time, the plan recognizes the need for information on resources needed to achieve the goals and objectives, and states that accountable program managers will develop an annual business plan that contains a set of program activities, milestones, and resources for each objective and strategy in the plan. The Results Act requires that strategic plans include a schedule of future program evaluations. SBA’s plan mentions certain program evaluations planned by SBA for future fiscal years; for example, the plan states that in fiscal 1998, SBA will (1) assess the results of counseling services provided by two pilot Women Business Centers and (2) conduct an assessment of the effectiveness and efficiency of existing United States Export Assistance Centers. The plan also states that SBA will continue its goal monitoring of field and headquarters offices. However, the September plan does not contain schedules of future comprehensive program evaluations for SBA’s major programs, such as the 7(a) loan program, which is SBA’s largest small business lending program, and the 8(a) business development program, which supports the establishment and growth of small firms by providing them with access to federal procurement opportunities. In addition, while SBA acknowledges in the September plan that it needs a more systematic approach for using program evaluations for measuring progress toward achieving its goals and objectives, the plan does not outline how SBA will develop and implement such an approach. It should be noted that the IG’s plan references future audits and evaluations that the IG plans to conduct as part of its effort to improve SBA’s management. Under OMB’s Circular A-11, strategic plans are to briefly describe key external factors and how each factor may influence achievement of the goals and objectives. A section added to the September plan identifies four external factors—the state of the economy, continued congressional and stakeholder support, public-private cooperation, and interagency coordination—that could affect the achievement of the plan’s goals. However, with the exception of the “interagency coordination” factor, the plan does not link these factors to particular goals or describe how each could affect achievement of the plan’s goals and objectives. Also, the plan does not articulate strategies that SBA would take to mitigate the effects of these factors. 
The added section also discusses how SBA’s programs and activities interact with other federal agencies’ programs and activities. While SBA states that it will work with other federal agencies to coordinate its activities, the section does not provide evidence that SBA coordinated with the other agencies in the plan’s development. The September plan, while recognizing the need for reliable information to measure progress toward the plan’s goals and objectives, notes that SBA currently does not collect or report many of the measures that it will require to assess performance. The plan would benefit from brief descriptions of how SBA plans to collect the data to measure progress toward its goals and objectives. Similarly, a section in the September plan discusses SBA’s efforts to improve internal controls and to obtain an unqualified opinion on its internal control structure for financial reporting by the year 2002. While this section implies that SBA will address management problems identified by GAO and others, such as SBA’s failure to reconcile certain fund balances with those of the Department of the Treasury and the problem of overvalued or nonexistent collateral on liquidated 7(a) loans, specific strategies to address the identified management problems are not described.

Unlike the March version that we reviewed, SBA’s September plan includes, as appendices, separate strategic plans for SBA’s Office of Inspector General (IG) and Office of Advocacy. In the March version of the plan, the IG material was presented under one of the plan’s seven goals, while the Office of Advocacy material did not appear at all. Generally, the goals and objectives in the IG and Advocacy plans appear consistent with, and may contribute to the achievement of, the goals and objectives in SBA’s plan, but the relationship is not explicit. SBA’s plan makes little mention of the IG and Advocacy plans and does not indicate if or how the IG and Advocacy activities are intended to help SBA achieve the agency’s strategic goals. Similarly, the IG and Advocacy plans do not make reference to the goals and objectives in the SBA plan. These plans could be more useful to decisionmakers if their relationships were clearer.

In summary, SBA has made progress in its strategic planning efforts, based in part on its consultation with the Congress. As I noted earlier, SBA’s September 1997 strategic plan includes several improvements that make it more responsive to the requirements of the Results Act. However, as is the case with many other agencies, SBA’s development of a plan that conforms to the requirements of the Results Act and to OMB’s guidance is an evolving process. As my testimony notes, there are still several areas where improvements need to be made to SBA’s strategic plan in order to meet the purposes of the Results Act. This concludes my statement. I would be pleased to respond to any questions you or members of the Committee may have.
GAO discussed the September 30, 1997, strategic plan developed by the Small Business Administration (SBA), pursuant to the Government Performance and Results Act. GAO noted that: (1) SBA's plan represents an improvement over its March 1997 version; (2) the plan contains the six elements required by the Results Act; (3) the strategic goals, as a group, are more clearly linked to SBA's mission and are more amenable to measurement; (4) the strategies and performance measures are more clearly linked to the objectives that they are intended to achieve and measure; (5) other improvements in the plan encompass a mission statement that now includes the disaster loan program for families and more accurately reflects SBA's statutory authorities, and a better recognition that SBA's success in achieving certain goals and objectives in the plan is dependent on the actions of others; (6) an additional section discusses how SBA's programs and activities interact with those of other federal agencies; (7) the plan could be further improved to better meet the purposes of the Results Act; (8) the relationship of one of the plan's goals, leading small business participation in the welfare-to-work effort, to SBA's mission is unclear; (9) the plan does not discuss the human, capital, and other resources needed by SBA to carry out the strategies identified in the plan; (10) the plan does not include comprehensive schedules of future program evaluations for major SBA programs; (11) the plan does not consistently link identified external factors to the particular goal or goals they could affect or describe how each factor could affect the achievement of the goal; (12) in a departure from its March version, SBA's plan includes as appendices separate strategic plans for SBA's Office of Inspector General and Office of Advocacy; and (13) the relationship between the goals and objectives in the plans included in the appendices and those in SBA's plan is not explicit.
CS plays a major role in U.S. export promotion activities as the primary agency providing export assistance to individual businesses, especially small- and medium-sized businesses. It is a unit of the Department of Commerce’s (Commerce) International Trade Administration (ITA), and its services include the following:
- Counseling and intelligence. CS assists U.S. businesses in understanding foreign markets and developing export marketing plans, including overseas product pricing, best prospects, market entry strategies, and distribution channels, and facilitates access to export financing and public and private export promotion assistance.
- Matchmaking. CS organizes and participates in trade events and forums, and introduces U.S. businesses to qualified overseas agents, distributors, end users, and other partners.
- Advocacy services. CS alerts U.S. firms to major overseas projects and procurement, and advocates on behalf of U.S. firms bidding on projects.

CS has about 300 staff members in U.S. Export Assistance Centers (USEAC) located throughout the United States who work with firms that are new to exporting or want to expand their exporting efforts. USEACs provide counseling and planning services, such as working with firms to select target markets and develop marketing plans. They also coordinate with CS posts overseas, which provide matchmaking, advocacy, and market intelligence services in the target markets. About 1,000 CS international field staff—made up of Foreign Service officers (FSO) and locally employed staff (LES)—are located at posts around the world to provide these services. LES are generally natives of the countries in which they are located, making them well-suited to help U.S. companies make local connections. USEAC staff and overseas post staff are supported by about 180 staff at headquarters in Washington, D.C. For the purposes of this report, we reviewed CS activities related to providing information, counseling, and assistance for exports of services and manufactured goods, which is CS’s main focus.

The U.S. Department of Agriculture (USDA) and State also conduct export promotion activities. USDA’s Foreign Agricultural Service focuses on promoting commodities produced by U.S. farmers and ranchers. State supports CS’s efforts in countries where CS does not have a presence (see app. II). However, U.S. export promotion activities are multifaceted and also include reducing trade barriers, government-to-government advocacy, financing and other monetary assistance, and other activities.

U.S. plans to increase exports are generally articulated in the Trade Promotion Coordinating Committee’s National Export Strategy, which is issued annually. Established in 1993, the Trade Promotion Coordinating Committee is an interagency group established to provide a framework to coordinate the export promotion and export financing activities of the U.S. government. As of 2010, 20 U.S. agencies have a role in export promotion. However, we have reported for a number of years that the annual National Export Strategies have limitations that affect the Coordinating Committee’s ability to coordinate trade promotion activities. For example, National Export Strategies provide limited information on member agencies’ goals and progress, relative to broad national priorities, to guide future efforts. Another effort is under way that could facilitate better interagency coordination. On January 27, 2010, President Obama announced the National Export Initiative (NEI) in an effort to support U.S. economic recovery following the recession.
A newly created Export Promotion Cabinet that reports to the President will coordinate and implement the goals of the NEI and is expected to deliver a plan to the President for implementing those goals in September 2010. While many aspects of the NEI are still in the planning stages, the NEI will have three main components:
- Educating U.S. companies about export opportunities, directly connecting them with new customers, partners, and distributors overseas, and advocating for their interests.
- Providing access to credit through the Export-Import Bank of the United States, with a special focus on small- and medium-sized businesses.
- Removing trade barriers.

The first component of the NEI, educating U.S. companies about export opportunities, will be carried out in large part by the Department of Commerce through CS.

The majority of CS’s costs are related to the personnel that staff CS’s headquarters and its domestic and overseas export assistance posts. According to CS, 60 percent of its budget in 2009 was associated with personnel costs, including salaries, benefits, and FSO support costs for officers posted overseas. FSO support costs include relocation, travel, training, home leave allowances, and shipping and storage of household goods. Administrative payments to State associated with having personnel stationed overseas, including International Cooperative Administrative Support Services (ICASS) and Capital Security Cost Sharing Program (CSCSP) charges, as well as payments to ITA, CS’s parent agency, for shared overhead, totaled 29 percent. The remaining 11 percent included CS-specific overhead costs, including rent, communications, utilities, program-related travel, supplies, printing, and equipment costs, as well as costs for developing and enhancing software for CS worldwide. CS charges fees for export promotion services that benefit individual companies, such as connecting them with potential buyers and distributors. CS uses the fees it collects to cover the costs of the related program expenses. CS reported that it collected approximately $10 million in fees in 2008.

CS leadership lacked systematic information about CS’s workforce and did not fully recognize or adequately respond to program risks created by growing administrative costs and declining staff levels from 2004 to 2009. Management control standards require entities to ensure that program managers have systems to provide needed operational and financial information in a timely manner to carry out their management and oversight responsibilities. In the case of CS, the most important management responsibilities were related to workforce decisions, since workforce expenses are the largest portion of CS’s budget. Additionally, these standards require management to identify and analyze the relevant risks an agency faces from internal and external sources so it can proactively manage them. From 2004 to 2009, CS’s budgets remained essentially flat as per capita personnel costs and administrative costs increased. Although CS leadership was aware of this trend, they did not have processes in place to analyze and respond to the long-term financial implications of these costs on CS’s workforce. Additionally, CS was not fully aware of the costs associated with positions it maintained in U.S. embassies that were vacant but not officially eliminated, and did not take steps that would have saved money on them.
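Returning to the 2009 cost shares reported above, the sketch below applies those percentages to an assumed budget total to show how little discretionary room they leave. Only the percentage shares come from CS’s breakdown; the dollar total is a hypothetical round figure, not a reported CS amount.

```python
# Hypothetical decomposition of CS's 2009 cost shares described above.
# Only the percentage shares come from CS; the budget total is an
# assumed round figure for illustration.

total_budget_millions = 250.0  # assumption, not a reported CS figure

shares = {
    "personnel (salaries, benefits, FSO support)": 0.60,
    "payments to State and ITA (ICASS, CSCSP, shared overhead)": 0.29,
    "CS-specific overhead and software": 0.11,
}

for category, share in shares.items():
    print(f"{category}: ${total_budget_millions * share:.1f} million")

# With a flat total, every percentage point that administrative
# payments grow must be absorbed elsewhere, chiefly by leaving
# personnel positions unfilled.
```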
As CS’s financial constraints grew, officials delayed their impact through a variety of financial management practices such as using unobligated funds from prior years’ appropriations. However, as the availability of these offsetting funds declined and costs continued growing, CS leadership failed to recognize the risks entailed by the financial problems, and the organization reached a “crisis” situation in 2009. Officials froze hiring, travel, training, and supplies, compromising CS’s ability to conduct its core business. CS’s workforce declined by about 240 staff from its peak level in 2004 through attrition—affecting the mix and distribution of personnel. From 2004 to 2009, CS’s budgets remained essentially flat, while at the same time the agency faced increasing per capita personnel costs. CS appropriations grew about 1.9 percent on average per year during that time frame. This represented a total increase of about 10 percent from 2004 to 2009, as shown in table 1. Although CS’s budget was adjusted for inflation and other increases such as pay raises and changes in benefit contribution rates, its annual increases did not cover its full costs, according to CS officials. For example, administrative costs grew from 20 percent ($44 million) of its obligations in 2004 to 30 percent ($72 million) in 2009. CS officials calculated the total cumulative funding gap for 2004 to 2009—the difference between its annual appropriations and full costs—to be $24 million. Since 2004, CS faced increased administrative payments to State that consumed larger shares of its funds. One factor increasing costs was State’s CSCSP, which began in 2005 to support the building of new secure embassies and consulates. The fee was phased in over a 5-year period, with the full annual charge levied for the first time in 2009. In 2005, the fee for CS was about $3.1 million, and in 2009 it was $23.7 million. Other factors included increasing costs for personnel, benefits, and rent, which include adjustments for inflation. CS made up the difference by not filling positions. CS budget requests were based partly on an estimated number of FTEs. CS overestimated the number of FTEs it would support with its budget every year, as shown in table 2. The actual numbers averaged 142 FTEs, or 11 percent, less than their estimates. CS used the funds it was provided based on these overestimated FTE levels to pay expenses such as administrative costs. The fact that this happened over so many years indicates that CS did not fully recognize how its workforce was being affected by increasing administrative costs. CS leadership had information about growing costs as early as 2006, but they did not recognize the severity of the situation. CS had data about the growth in unfunded adjustments to base, which are essentially the difference between what CS needed in terms of funding to cover increased costs and the appropriation it received. However, CS leaders said they were not fully aware of the long-term financial and workforce implications of increasing costs until 2009 when CS switched to a new financial management system, which according to CS officials illuminated how little discretionary funding was available. In 2009, they undertook an exercise to analyze and more fully understand the costs that affected the CS budget. Additionally, CS lacks an automated workforce information system to provide up-to-date staffing information, which also has financial implications. 
For example, without up-to-date staffing information, CS could not easily review ICASS and CSCSP charges in order to confirm whether State’s charges for these activities were correct. Currently, CS officials compile information from quarterly reports supplied by the posts to determine staffing totals. The lack of risk assessment and the lack of workforce information are both management control weaknesses. CS has incurred costs for administrative payments related to overseas staff that officials consider to be “fixed costs,” but which can be reduced by eliminating vacant positions, downsizing, or eliminating offices. For example, CS incurs CSCSP charges for positions officially established at overseas posts, regardless of whether there is a person in the position. CS has access to information on the number of positions at its overseas posts through State’s Executive Agency Personnel Support system, which tracks the number of positions at posts. The information can be used to determine the number of positions CS is paying for, including the number of vacant positions at posts. However, Commerce officials indicated they find the system difficult to use, and they do not use the information to manage their overseas workforce. The only way to eliminate a CSCSP charge is to officially eliminate the position. In 2010, there were about 200 unfilled positions at posts incurring CSCSP charges that CS did not eliminate. Charges for vacant positions cost CS approximately $2 million annually, according to a CS budget official. The last time CS eliminated a significant number of vacant positions was in 2004. According to a senior CS official, CS recognized that State’s CSCSP charge would put a cost on every overseas position whether or not it was filled, so in 2004, before the new charge was implemented, the Office of International Operations reviewed its global presence and eliminated all non-essential positions. CS may also have incurred ICASS charges for vacant positions. To avoid ICASS charges for vacant positions, CS must inform State that the position will not be filled in the upcoming fiscal year. Regarding office closures, CS last eliminated offices under its Transformational Commercial Diplomacy Initiative, which was planned beginning in 2006 and implemented starting in 2007. The initiative was focused both on realigning CS’s efforts with its mission to focus on emerging markets and on rightsizing its operations. Under the initiative, 23 offices were closed, mainly in Europe, and 8 offices were supposed to be opened. However, as of the end of 2009, only 3 of the 8 offices were open and staffed. Several financial management practices temporarily helped mitigate CS’s growing funding constraints. These included using unobligated funds from prior years’ appropriations, redistributing centralized costs to other ITA units, and redirecting user fees to ensure CS did not spend more funds than were authorized. First, CS used its balance of unobligated funds from prior years’ appropriations to cover funding shortfalls. However, Congress changed CS’s appropriations from no-year funding to 2-year funding in 2006. Whereas CS obligated $12 million in unspent funds in 2004, in 2009 it obligated only $5 million, as unobligated funds from prior years were depleted and some funds were no longer available. Second, ITA officials told us they attributed some centralized costs that would have been charged to CS to other ITA programs in order to help CS with its financial problems.
For example, in 2009, ITA redistributed $3 million of CS’s centralized costs to other ITA units (Market Access and Compliance, Manufacturing and Services, and Import Administration) to assist CS. Centralized costs include headquarters rent, utilities, information technology support, and secretarial travel. ITA normally apportions these costs based on individual staffing levels. Third, CS fee collections were an additional source of revenue that CS used to address its resource constraints. CS obligations of fees and reimbursements averaged $9.8 million a year from 2004 through 2009. CS’s domestic and overseas offices create surplus fees when they charge exporters for services and funds remain after the bills associated with the service are paid. Historically, surplus service fees were used at the location where they were earned to pay for program activities in support of CS’s mission, such as travel with an ambassador to another city to promote exports. As funding constraints increased, CS management began centrally controlling these fees, requiring posts to seek permission to use them. CS officials told us they took control over these surplus fees to ensure they would not spend more money than Congress authorized and violate the Antideficiency Act. Once the growth in costs reached what CS officials characterized as a “crisis” situation, CS took a number of actions, such as imposing a hiring freeze in 2008 and 2009. CS also cut travel funds by 28 percent from 2008 to 2009 and saved money by asking FSOs to stay in their current locations rather than relocating them to new posts. CS also cut training and supplies. According to a senior CS official, CS currently has no discretionary travel or training funds. Although CS took actions to mitigate the impact of increasing costs as noted above, these actions were not timely and reflected management control weaknesses. These weaknesses include the lack of a process for promptly identifying risks as they emerge and the lack of analysis of the possible effect these mitigating actions could have on CS’s ability to effectively and efficiently carry out its operations. CS did not identify a long-term sustainable solution to the change in its financial situation. Staff in both the domestic and the foreign field offices commented in a 2009 assessment of their operations that staff shortages and budget constraints, including a lack of travel funds, compromised CS’s ability to conduct its core business. In the domestic field offices, staffing shortages and budgetary constraints were mentioned as weaknesses or threats in seven of eight regions. One domestic region stated, “With a hiring freeze in place and severe budget limitations, current vacancies cannot be filled in USEACs on a timely basis. As well, important travel necessary to reach clients/partners, engage in professional development, and lead efforts on trade missions and at trade shows cannot be funded.” Likewise, all six regions overseas indicated that lack of resources was a weakness, and four of the six identified staffing shortages as a problem. For example, one overseas region stated that “budget limitations constrict the extent to which posts can travel, which directly impacts their ability to find and assist clients.” Additionally, the capacity to keep up with ever-growing demand for services was mentioned as a problem by some domestic and overseas locations.
As a result of CS’s flat budget, the size of its workforce declined through attrition from 2004 to 2009, and the composition and location of personnel shifted. During this period, CS’s workforce declined by 239 staff, from 1,731 to 1,492. The number of FSOs declined by 5 percent, LES by 14 percent, and civil servants by 18 percent (see table 3). The number of staff in foreign field offices declined by 12 percent, in domestic field offices by 9 percent, and at headquarters by 28 percent (see table 4). Although CS is taking steps to rebuild its workforce, it lacks key elements in its workforce planning, and its 2011 budget request has some weaknesses that could affect its ability to meet its goals. In 2010, CS received an appropriation of $258 million, of which it planned to use $5.2 million to begin reversing its staffing declines. In addition, the President’s 2011 budget asks for a major CS staffing increase. The 2011 budget requests $321 million for CS, $63 million more than its 2010 appropriation. Although CS began the process of reversing its previous years’ staffing declines through these funding increases, we found that CS has not been following workforce planning principles and lacks current workforce plans for utilizing the new staff. CS’s understaffed Office of Foreign Service Human Resources and the long lead time needed to hire and train FSOs could delay staffing increases. Additionally, we found that its budget development methodology was sound in many respects but had a few weaknesses that could affect CS’s ability to meet its goals, such as not assessing potential risks of estimated costs, which, if overly optimistic, could lead to cost overruns. CS is rebuilding its workforce and taking other measures to fulfill its major role in implementing the NEI in 2011. Commerce, through the Trade Promotion Coordinating Committee, leads the administration’s trade promotion efforts and will “operationalize” the NEI, according to the Secretary of Commerce. To that end, the Secretary indicated that with the additional resources requested in 2011, ITA expects to hire new trade experts—mostly in foreign countries—to advocate and find customers for U.S. companies, allowing CS to help more than 23,000 clients begin or grow their export sales in 2011. Additionally, CS will focus on increasing the number of small- and medium-sized businesses exporting to more than one market by 50 percent over the next 5 years. CS already began the process of rebuilding its workforce by designating $5.2 million of its 2010 appropriations to expand its presence in critical emerging markets. CS planned to use the funds to develop a more robust presence in challenging and developing markets in Africa, Eastern Europe, and Asia, where its presence was limited. CS projected hiring a total of 30 new staff in 2010—8 FSOs and 22 LES. In April 2010, CS approved 17 hiring freeze exemptions. It extended offers to 14 certified applicants; 11 individuals accepted the offers, according to a CS official. CS hopes to bring them on in August 2010 and anticipates they will fill vacancies in domestic and overseas locations. These individuals are filling positions created by retirements and attrition that occurred in 2008 and 2009. CS also expects that at least another 7 current officers will leave the service in 2010. CS may fill those potential vacancies, having completed the process of identifying and certifying a list of applicants on July 12, 2010.
However, a senior CS official noted that rather than using funds to hire people in 2010, CS is focused on creating more exports sooner by increasing marketing, the number of companies going on trade missions, the number of potential trade partners brought to the United States on reverse trade missions, and matchmaking efforts. The rationale was to focus on activities that could provide quick results, according to CS officials, as it takes about 18 months to prepare a company to export, whereas it takes about 6 to 9 months to assist a company that has already exported to one market with exporting to a second market. CS requested a major staffing increase for 2011, seeking to hire a total of 268 staff in support of the NEI. CS plans to hire 130 FSOs and civil servants, a 20 percent increase over its 2010 level of 659 staff in these categories. Additionally, CS plans to hire 138 LES, a 17 percent increase over its 2010 level of 795 staff. The requested increase would reverse the 239-person decline in CS’s overall staffing that occurred from 2004 to 2009. Table 5 identifies the staff CS lost over the past 5 years and the number of staff it plans to hire in 2011. Whereas staffing declines overseas for both FSOs and LES may be addressed if the budget request is approved, there will still be fewer staff compared with 2004 in Washington, D.C., and the domestic field offices, which are generally staffed by civil servants. Additionally, the 63 FSOs and 84 civil servants who are eligible to retire as of March 2010 may not be replaced by the staff requested in the 2011 budget request. Another factor affecting overseas staffing is the use of FSOs in domestic positions. FSOs are sometimes assigned to work in USEACs, serve in multilateral development banks, or work in headquarters (see table 6). For example, in the fourth quarter of 2009, 23 percent of 233 FSOs were in domestic positions, with 27 FSOs specifically in USEACs. CS expects FSOs to serve a 2-year assignment at a USEAC, usually within the first 7 years of their service. CS believes that what FSOs learn in their domestic rotations improves their ability to serve clients overseas. Although CS requested a significant increase in funding to hire new staff in the 2011 budget request, it has not followed key principles in workforce planning to guide its use of these staff. Strategic workforce planning addresses two critical needs: first, aligning an organization’s human capital program with its current and emerging mission and programmatic goals, and second, developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. While agencies’ approaches to workforce planning vary, the five key principles that strategic workforce planning should address, irrespective of the context in which the planning is done, are (1) setting strategic direction, (2) conducting workforce needs analysis, (3) developing workforce strategies to fill the gaps, (4) evaluating and revising strategies, and (5) involving management and employees throughout the process. We focused our review on steps one through three, because it was premature for us to evaluate steps four and five given the status of CS’s efforts during our review. CS executive-level leadership was new and was just beginning to assess CS operations. Until recently, CS lacked executive leadership and strategic direction in its workforce planning efforts because of key vacancies between administrations.
One of the lessons from our prior work on human capital issues is the importance of having leadership that is clearly and personally involved in strategic workforce planning and provides organizational vision in times of change. Effective organizations integrate human capital approaches as strategies for accomplishing their mission. They stay alert to emerging mission demands and human capital challenges and remain open to reevaluating their human capital practices in light of their demonstrated successes or failures in achieving the organization’s strategic objectives. According to a senior CS official, the lack of political leadership hampered efforts to analyze and make decisions regarding the organization’s longer term workforce needs and to ensure its ability to undertake its mission and achieve its goals. Instead, CS’s recent workforce planning efforts have primarily focused on short-term responses to its constrained budget situation, such as not hiring new staff and extending FSO tours at some posts to avoid the cost of moving officers to different posts. While ITA and CS experienced key leadership vacancies for more than a year, CS now has new executive-level leaders who are focused on determining CS’s direction and resource needs. In February 2010, the new Assistant Secretary for Trade Promotion and Director General of the U.S. and Foreign Commercial Service was confirmed, and in March 2010, the new Undersecretary for International Trade was sworn in. In addition, Commerce announced the creation of a new Director’s position to coordinate and direct the Department’s NEI efforts, filling the position in April 2010. The new Assistant Secretary is currently reviewing all of CS’s budgets, activities, and personnel to determine what its structure should be to accomplish its goals, including those of the NEI. CS has also lacked a clear sense of strategic direction. For example, the Trade Promotion Coordinating Committee did not issue a 2009 National Export Strategy, which directs the nation’s export promotion priorities, including goals for CS; the last National Export Strategy was issued in October 2008 by the previous administration. A plan to carry out the NEI is due in September 2010. The NEI may have important implications for CS workforce planning, especially for the locations of staff and offices. CS has already developed several initiatives that include staff allocations and a list of “candidate” countries for new offices, which also appears in its budget request. These are preliminary plans for how CS will pursue activities in support of the NEI’s goals. However, CS could not provide support for how the staffing allocations were developed or how the countries were identified; we therefore were unable to determine how these decisions were made. CS has not conducted a systematic workforce needs analysis to determine the number or type of staff (FSOs, LES, or civil servants) needed or where those staff would be located domestically and overseas. Our prior human capital work has found that a fact-based, performance-oriented approach to human capital management is crucial for maximizing the value of human capital as well as managing risk. High-performing organizations identify their current and future human capital needs, including the appropriate number of employees, the key competencies and skills mix for mission accomplishment, and the appropriate deployment of staff across the organization, and then create strategies for identifying and filling gaps.
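At its core, such a needs analysis compares projected demand for staff against projected supply, by category and location. The sketch below illustrates the basic computation; every figure is hypothetical, chosen only to mirror CS’s staff categories, and this is not a model CS uses.

# Hypothetical workforce gap analysis; all figures are invented for illustration.
# Gap = staff needed - (staff on board - projected attrition).

needed = {"FSO": 250, "LES": 850, "civil service": 500}    # target levels
on_board = {"FSO": 230, "LES": 800, "civil service": 460}  # current levels
attrition = {"FSO": 8, "LES": 15, "civil service": 12}     # projected losses

for category, target in needed.items():
    projected_supply = on_board[category] - attrition[category]
    gap = target - projected_supply
    print(f"{category}: hire {gap} to reach a target of {target}")

A real analysis would also break gaps down by skill and location, but the data demands are the same: reliable counts of positions, incumbents, and expected attrition.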
Valid and reliable data are critical to assessing an agency’s workforce requirements and heighten an agency’s ability to manage risk by allowing managers to spotlight areas for attention before crises develop and identify opportunities for enhancing agency results. Although the costs of collecting data may be significant, the costs of making decisions without the necessary information can be equally significant. In preparing its 2011 budget request, CS did not make staffing decisions based on an overall analysis of its needs, according to CS officials. Rather, it made decisions based on anecdotal information about the demand for services. Additionally, CS only recently began to systematically monitor how many vacancies it had and how many positions it might need to carry out its mission in the future. CS took approximately 3 months to provide us with data on staffing, in part because it lacks an automated personnel system and had to use data sources such as quarterly staffing reports. A Commerce official told us that the number of staff requested in the 2011 budget request was not based on vacancies. CS’s lack of a workforce needs analysis also has implications for staff placement. Whereas CS’s strategic focus has been on priority markets such as Brazil, China, and India as well as emerging markets in countries such as Azerbaijan and Qatar, staff placements may change under the NEI. As mentioned above, CS’s budget request includes a list of 22 countries that are “candidates currently being considered for new overseas offices.” Commerce officials told us the list was not comprehensive, and the reasons for selecting those countries were not well documented. Among the “candidates” were offices that were closed under the Transformational Commercial Diplomacy Initiative, such as Amsterdam, Netherlands; Barcelona, Spain; Kingston, Jamaica; Hamburg, Germany; Port of Spain, Trinidad and Tobago; and Lyon, France. Additionally, according to a State official, the NEI may target Colombia, Indonesia, Saudi Arabia, South Africa, Turkey, and Vietnam. Shifting CS’s focus could change the skills and experience its workforce needs to be effective in those markets. For example, even though CS is hiring 11 new FSOs in 2010, three important posts in Brasilia, Algiers, and Kuwait are being filled with limited non-career appointments because of a shortage of experienced officers. None of the new candidates has the necessary skills, abilities, and knowledge to take one of these positions as a first post. CS has not followed workforce planning principles such as developing a plan to address its staffing gaps. Once an agency identifies its needs, it can develop strategies tailored to address gaps in the number, skills and competencies, and deployment of its workforce and the alignment of human capital approaches that will sustain the workforce in the future. Strategies include programs, policies, and practices that enable an agency to recruit, develop, and retain staff needed to achieve program goals. In addition, agencies need to understand the strengths and weaknesses of their current human capital program. According to a senior CS official, CS does not plan to fill all of its vacant positions. Rather, it will fill what it considers to be priority vacancies, including staffing new offices with seasoned officers.
The official told us they have a reasonably good idea of where those priority locations are; one such possibility was Baku, Azerbaijan, where CS planned to open an office under the Transformational Commercial Diplomacy Initiative but did not because of a lack of funds. However, when asked whether the 22 locations in the 2011 budget request were the priority locations, we were told they are possible locations but are not necessarily where people would be placed. It is also important for agencies to align their workforce to achieve their program goals. However, since the implementation of the Transformational Commercial Diplomacy Initiative, CS workforce strategies have not been based on a systematic analysis but were ad hoc, according to Commerce officials. Commerce officials told us that in some instances they asked FSOs to extend their overseas tours at their current locations as a cost-saving measure, rather than moving them, at substantial cost, to posts chosen through a systematic determination of where they were most needed. CS also made decisions to leave some posts without an FSO. Instead, these posts were managed by LES—there were 25 such posts at the end of the last quarter of 2009. CS has not used its staffing allocation model to make overseas staffing decisions since 2007, when it was last applied as part of the Transformational Commercial Diplomacy Initiative. Under the initiative, CS used its staffing allocation model to identify locations to close, open, or add staff. The model analyzed CS’s staff allocations using quantitative factors such as the macroeconomic strength of each country and other factors related to each market’s size and structure. It also integrated qualitative factors including foreign and trade policy priorities, levels of economic development, geographic coverage, and commercial environments. A similar model exists for the placement of staff domestically at USEACs. The model’s goal is to create a starting point for determining which U.S. locations have the highest export potential. Other factors affecting the placement of staff at USEACs include geographic coverage, policy initiatives, locations of commercial centers, and the skills and abilities of local staff. Besides the lack of a quality workforce plan, CS’s capacity to implement what CS officials said may be the biggest hiring effort in its history is compromised because its human resources office is understaffed. CS’s Office of Foreign Service Human Resources, which manages the hiring process for FSOs, had staff in only 10 of its 19 positions in 2009. However, CS planned to increase the number of staff in 2010. According to a senior CS official, the office recently received permission to fill 5 of its 9 vacant positions. As of June 2010, 3 of the 5 positions were filled, although the office also recently lost another staff member. CS needs a lead time of approximately 2 years to accomplish the major staffing increase requested in the 2011 budget request. It takes about 1 year to put together a list of qualified applicants, which involves advertising the position, identifying qualified candidates, interviewing candidates, selecting candidates, and making offers. Once a candidate is selected, he or she must obtain security and medical clearances. Commerce started the hiring process for 59 new FSOs in July 2009, and CS hopes to make offers to qualified candidates in summer 2010. Additionally, depending on the post, some positions require language training, which can take up to a year.
Thus, it could take almost 2 years to hire, train, and field a new FSO. The process for hiring LES is much shorter, generally 6 weeks to 5 months, according to a senior CS official. CS is not responsible for hiring LES, who are hired overseas by State on CS’s behalf. For 2011, ITA developed a $321 million budget request to fund CS’s activities and hire new staff. The request was $63 million higher than CS’s 2010 appropriation. We evaluated the methodology that ITA used to develop this request using best practices identified in the GAO Cost Estimating and Assessment Guide. See appendix III for more information on our evaluation, including detailed descriptions of the best practices criteria. ITA’s methodology was sound in many respects, with good calculations for current costs such as overseas administrative fees, a good amount of detail for certain costs such as the purchase of vehicles, as well as error-checking processes that helped to ensure accuracy. However, the request also has weaknesses that could affect CS’s ability to meet its goals. Among the weaknesses we identified using the GAO Cost Estimating and Assessment Guide are (1) a lack of information regarding potential risks associated with the costs presented in the budget request, such as changes in exchange rates, which could lead to overly optimistic estimates and cost overruns, and (2) a lack of sufficient documentation, specifically back-up data, to clearly track costs over time, allow the budget request to be validated, and enable new staff members to understand the request in the event of staff turnover. The methodology used to develop the budget request is sound in many respects, and CS took steps to ensure its accuracy. ITA budget analysts made many of their calculations in ways that are endorsed by the GAO Cost Estimating and Assessment Guide. They also broke the request down to an appropriate level of detail based on the standards in the guide, which both improves accuracy and facilitates good management. In an effort to determine the accuracy of the estimate, we reviewed ITA’s calculations for technical soundness and found them to be acceptable. ITA used rigorous budgeting practices to develop many parts of the request. For example, officials used relevant historical cost data and incorporated adjustments for inflation. They also followed best practices by varying their estimation methodologies as appropriate for different situations, which increased the request’s accuracy. For example, based on Office of Management and Budget (OMB) guidance, ITA estimated that the personnel it anticipated hiring in 2011 would come on board 3 months after the start of the fiscal year, on average, and the request reflected this hiring lapse (a sketch of the underlying arithmetic follows this discussion). Additionally, ITA performed thorough error checking on its request, enabling CS management to make hiring and spending decisions with reasonable confidence that no costs had been forgotten or miscalculated. CS’s process in developing the request included multiple reviews to ensure accuracy, including internal reviews by various stakeholders within Commerce and external reviews by OMB. CS used the feedback from these reviews to update the request as needed. Also, CS routinely updated the costs in the budget with actual costs as they became available, enabling it to see whether the estimate was on track.
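The hiring-lapse adjustment mentioned above reduces to simple arithmetic: if new hires come on board 3 months into the fiscal year on average, only 9 of 12 months of their annual cost belongs in the first-year request. The sketch below illustrates this with an assumed average cost per hire; the salary figure is invented for illustration and is not ITA’s estimate.

# Hypothetical hiring-lapse adjustment; the average cost figure is assumed,
# not taken from ITA's budget request.

new_hires = 268                # total staff requested for 2011
avg_annual_cost = 95_000       # assumed average salary plus benefits per hire
lapse_months = 3               # average delay before new hires come on board

first_year_share = (12 - lapse_months) / 12
first_year_cost = new_hires * avg_annual_cost * first_year_share
full_year_cost = new_hires * avg_annual_cost
print(f"First-year cost with lapse: ${first_year_cost:,.0f}")
print(f"Full-year cost thereafter:  ${full_year_cost:,.0f}")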
CS’s 2011 budget request also broke down costs to a level of detail that met the standards in the GAO Cost Estimating and Assessment Guide in most cases, ensuring that activities and costs were broken down into small pieces that management could individually plan for, schedule, and control. For example, salaries were calculated separately for several different types of employees rather than using one salary cost as the basis for all the calculations, and the cost of replacing 37 vehicles was identified separately. One of the benefits of including this level of detail is ensuring that cost elements are not omitted or double counted. ITA did not perform a risk analysis on its budget request for CS, which could lead to overly optimistic estimates of costs and cost overruns. We have found that most agency budget requests are overly optimistic, underestimating average costs. A risk analysis would help correct for this tendency by providing levels of confidence so that ITA would understand the probability of executing the budget successfully given the risks that were assessed. The risk analysis would identify the assumptions driving the estimate and provide a range of costs that spans the best and worst case scenarios. This would inform CS management of the probability that costs for salaries or other key items might exceed requested funding levels and enable them to develop contingency plans for making spending and hiring decisions accordingly. For example, new staff will be located in different places with vastly different costs. According to an ITA official, salaries for LES range from $12,000 in Vietnam to $100,000 in Frankfurt, Germany. However, the budget request did not provide a range of possible LES salary costs. Instead, ITA used an overly simplistic averaging approach to estimate LES costs, failing to give management perspective on how these costs might vary with different staff placement scenarios or changing exchange rates. Although ITA is not required by OMB’s annual budget development guidelines to perform a risk analysis, it is a best practice according to the GAO Cost Estimating and Assessment Guide. Assumptions that drive the budget request were not fully explained, contributing to the inability to perform a risk analysis. Major assumptions in the 2011 budget request include salary estimates, annual salary increases, currency fluctuation, and travel costs, none of which were fully explained. Since ITA prepares its budget 2 years in advance (for example, it drafted its 2011 budget request in 2009), there is substantial uncertainty in these assumptions. Some assumptions were documented, such as using 2009 amounts with appropriate adjustments to estimate costs for 2011, but others were not explained. For example, travel costs were presented as a single number, without further explanation of how ITA arrived at this figure. Also, the reasoning behind estimating a particular exchange rate was not explained. Exchange rates can vary substantially. For example, over the course of 2009, the average monthly exchange rate of the dollar to the Brazilian real varied from a low of 1.8 to a high of 2.4, a difference of 30 percent. Likewise, the Mexican peso’s average monthly exchange rate with the dollar varied by 16 percent, the euro varied by 12 percent, the Japanese yen varied by 11 percent, and the Indian rupee varied by 7 percent.
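To make the idea of a risk distribution concrete, the sketch below samples one uncertain assumption (the real/dollar exchange rate, using the 2009 range cited above) and reports percentiles of the resulting dollar cost of a local salary. The salary figure and the uniform sampling are assumptions chosen for illustration; this is not ITA’s method or data.

# Hypothetical cost-risk sketch for a single assumption: the dollar cost of a
# local salary under a varying exchange rate. Figures are illustrative only.
import random

random.seed(0)
salary_brl = 120_000                                       # assumed annual salary in reais
rates = [random.uniform(1.8, 2.4) for _ in range(10_000)]  # BRL per USD, 2009 range

costs = sorted(salary_brl / rate for rate in rates)
p10, p50, p90 = (costs[int(len(costs) * p)] for p in (0.10, 0.50, 0.90))
print(f"10th percentile: ${p10:,.0f}")
print(f"Median:          ${p50:,.0f}")
print(f"90th percentile: ${p90:,.0f}")   # the spread shows the budget exposure

Repeating this across all major assumptions and summing the draws would yield a distribution for the total request, from which confidence levels (such as the probability of staying within the requested amount) can be read.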
Without details and explanations, CS could not calculate risk distributions for assumptions like these, which would enable it to understand how much costs might vary if the situation changes. ITA’s budget request lacks sufficient supporting documentation, making it difficult for Congress or other parties to understand how the budget request was developed. For example, the budget request broke down changes in the budget for 2011, and these changes were added to 2010 costs to arrive at the total request. However, the budget request did not include the 2010 cost information. According to the GAO Cost Estimating and Assessment Guide, it is a best practice to provide sufficient detail so that the documentation allows for clear tracking of cost estimates over time. By documenting all steps in the development of its budget request, ITA would be able to re-create its estimates in the event of budget staff turnover. This is particularly important since only a small number of people develop the budget. Additionally, thorough documentation of calculations and back-up data would allow the request to be checked and validated. Without this information, it is impossible for an outside reviewer to corroborate the information in the request. ITA briefed department-level officials in Commerce as well as the Office of Management and Budget on the 2011 budget request. However, we were unable to obtain any documentation of what was presented at the briefings, so we could not determine whether the briefings contained enough detail for management to understand the level of accuracy, completeness, and quality of the estimate, which is a best practice. In the wake of growing financial constraints and staffing declines, CS’s leadership faces significant challenges in its efforts to rebuild its workforce and play a major role in the President’s NEI. Additionally, depending on the direction set by the current administration, CS officials may need to make significant changes such as realigning CS’s workforce and offices. While the President’s plan is being finalized, the Assistant Secretary has opportunities to improve management controls over CS’s resources and proactively address the issues that led to its “crisis” situation in 2009. These opportunities include improving long-term financial and workforce information necessary to recognize significant changes affecting the organization; routinely reviewing operations to identify potential cost savings, such as administrative fees related to overseas posts; and recognizing risks and considering alternative responses to significant resource changes in a systematic manner so as to minimize actions, such as freezing hiring, travel, and training, that compromise CS’s ability to conduct its core business. CS currently lacks two key capabilities that would better position it to implement its 2011 budget and rapidly respond to any new priorities. The first is a workforce plan developed in accordance with workforce planning principles that is linked to the agency’s strategic goals and that would enable agency managers to regularly identify workforce gaps and develop a workforce strategy that fills them, including using or adapting its current staffing model. The implementation of such planning needs to be supported by adequate human capital management resources. The second capability is to estimate the budgetary costs of any changes in its operations according to best practices.
This includes risk analyses to ensure that factors that could negatively affect its ability to fully fund its operations are understood and considered; contingency plans to address possible funding shortfalls; and documentation in support of the costs used to construct the estimate, so that future management and new budget staff can understand the estimate’s assumptions, costs, and contingencies. To better ensure that CS effectively and efficiently uses its resources in support of its strategic goals and the President’s National Export Initiative, we are making the following three recommendations: The Secretary of Commerce should direct the Undersecretary for International Trade to (1) strengthen management controls over CS’s financial and workforce information, (2) improve workforce planning and better align CS’s workforce with its strategic goals and available resources on a routine basis, and (3) improve cost estimating to better ensure that CS’s budget estimate includes sufficient resources to support its planned operations and addresses potential risks. In written comments on a draft of this report, Commerce concurred with our findings and recommendations. The Secretary of Commerce indicated that he has directed the International Trade Administration to use this report to develop stronger management controls, improve workforce planning, and improve cost estimates during the budget process. The Secretary of Commerce also indicated that since January 2010, ITA has been engaged in a vigorous strategic planning effort to align its focus, activities, and personnel to strengthen CS and support the President’s NEI. Additionally, Commerce provided technical comments on our draft, which we reviewed. The technical comments provided additional information or clarified CS activities or statements in the draft, and we made changes to reflect some of these points. We are sending this report to other interested Members of Congress and to the Secretaries of Agriculture, Commerce, and State. In addition, the report will be available free of charge at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In response to a congressional mandate, GAO reviewed (1) how well the U.S. and Foreign Commercial Service (CS) managed its resources from 2004 to 2009, and (2) the completeness of CS’s workforce plans and the quality of its 2011 budget request. In addition, in appendix II, we provide information on how CS export promotion funding compares with the funding for the Department of Agriculture (USDA) and the Department of State (State) for analogous activities from 2004 to 2009. To determine the changes in CS’s workforce from 2004 to 2009, we analyzed data provided by CS on staffing losses and gains by type of position, identifying the number of civil servants, political hires, Foreign Service Officers (FSO), and locally employed staff (LES) during that 5-year period. We also identified staff losses and gains by location at foreign posts, U.S. Export Assistance Centers (USEAC), and CS headquarters in Washington, D.C. We traced CS data back to source documents where possible and found them sufficiently reliable to report on the loss of staff throughout CS. We used the staffing data provided by CS to identify where FSOs were serving in domestic positions.
To determine how much funding CS had available, what the major cost components of its budget were, how administrative costs changed over time, and what impact changing costs had on CS, we reviewed CS’s appropriations and full-time equivalent (FTE) levels from 2004 through 2009. We used internal CS budget documents, ITA Congressional Budget Justifications, President’s budgets, and appropriations bills to gather and corroborate budget data, and based on consistency among these documents, we found these data to be sufficiently reliable to report on CS’s budget history. We also interviewed CS officials responsible for managing the budget. We analyzed the foreign posts’ and USEACs’ 2009 written comments to determine weaknesses and threats that were commonly reported. To determine the completeness of CS’s plans to rebuild its workforce, we reviewed CS’s 2010 and 2011 budget requests to ascertain whether the staffing increases CS requested were sufficient to cover its staffing changes from 2004 to 2009. We interviewed CS officials who were involved in developing the requests. We also interviewed CS officials regarding their process for hiring and placing new staff overseas, and reviewed CS policy requiring FSOs to serve in domestic positions. To determine whether CS had conducted workforce planning, we evaluated its efforts using GAO’s five key principles of effective strategic workforce planning. We reviewed CS’s previous workforce planning efforts under its 2006 Transformational Commercial Diplomacy Initiative, including CS’s use of its Overseas and Domestic Resource Allocation Models (ORAM and DRAM) and its cost-benefit model. We interviewed CS officials regarding how workforce planning decisions were made since the ORAM and DRAM models were last used. We also interviewed a senior CS official about the human resources office’s capacity to handle the large projected increase in FSOs that is contained in CS’s budget requests for 2010 and 2011. To determine the quality of the International Trade Administration’s (ITA) 2011 budget request for CS, we determined the extent to which ITA followed the best practices outlined in the GAO Cost Estimating and Assessment Guide. The guide identifies 12 practices that are the basis for effective cost estimation, including cost estimation for annual budget requests. It associates these practices with four characteristics: accurate, well documented, comprehensive, and credible. The Office of Management and Budget (OMB) endorsed this guidance as being sufficient for meeting most cost estimating requirements, including for budget formulation. If followed correctly, these practices should result in reliable and valid budgets that (a) can be easily and clearly traced, replicated, and updated, and (b) enable managers to make informed decisions. In performing this analysis, we examined the 2011 budget request and supporting documentation provided by ITA, and we conducted interviews with ITA budget staff. After conducting this assessment, we identified major strengths and weaknesses of the 2011 budget request. To describe the level of funding CS received compared with State and USDA for analogous export promotion activities in appendix II, we worked with State and USDA to determine which of their programs and activities were analogous to CS’s export portfolio. We jointly agreed on what elements of their budget could be attributed to export promotion.
We focused on (1) marketing and market research, (2) technical assistance and training for exporting businesses, and (3) advocacy that benefits individual companies. We reviewed the President’s budget requests and agency budget justifications for CS, which are included in the budgets of the Department of Commerce’s ITA, as well as the budgets for USDA’s Foreign Agricultural Service and State’s Office of Commercial and Business Affairs to identify those programs and activities in their budgets that supported those functions in order to develop the comparison. We determined that USDA’s budget summaries were reliable by reviewing financial audits for the 6-year period of our review. Audits for 2 of the years found that the financial statements fairly presented USDA’s finances, with some adjustments to internal controls needed, and the audits for the other 4 years found that the financial statements fairly presented USDA’s finances without caveat. USDA’s budget summaries have one limitation: salaries and expenses are listed by function rather than by program. We nonetheless included these amounts because it was sufficiently clear which functions related to export promotion, although the labels differed from the program names. State was not able to provide sufficient budget data for 2004 to 2008, so we reported only on 2009. We conducted this performance audit from September 2009 to August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Departments of Agriculture (USDA), Commerce, and State are the three main U.S. agencies tasked with promoting exports through advocacy for individual companies, marketing, and technical assistance and training. In 2009, USDA’s Foreign Agricultural Service (FAS) had 97 offices in 75 countries, Commerce’s U.S. and Foreign Commercial Service (CS) had 127 offices in 76 countries, and State had 45 offices in 45 countries. Commerce, State, and USDA have different funding levels for export promotion. Additionally, USDA’s export promotion model is different from the one employed by Commerce and State. In 2009, Commerce received $238 million, and USDA received $365 million for export promotion. State estimated it spent $17 million in support of export promotion for 2009. See figure 1 for funding levels for USDA and Commerce from 2004 through 2009. Funding for State is not included in the figure, as State was unable to determine the personnel costs associated with FSOs and LES who supported its export promotion efforts from 2004 through 2008. State estimated that FSO costs totaled $15 million in 2009. In addition, State funded small export promotion projects at posts and had staff in Washington, D.C., bringing the total State estimated it spent in 2009 to $17 million, excluding LES. If LES costs were included, the level of funding State spent on export promotion would be higher. While all three agencies conduct export promotion activities, the amount of funding the agencies receive cannot be directly compared, since USDA uses a different approach to promote exports than Commerce and State.
Also, while Commerce and State use the same model, they operate in different locations and have different numbers of posts. Commerce’s model focuses on direct services to exporters, especially small- and medium-sized businesses. CS provides counseling and market intelligence, matchmaking (connecting exporters to customers), and advocacy on behalf of individual businesses. Its overseas posts coordinate with U.S. Export Assistance Centers throughout the United States. USDA’s FAS primarily promotes the export of commodities indirectly through funding for external programs, unlike CS, which provides services directly to exporters. Additionally, the majority of FAS’s activities are focused on promoting U.S. agricultural commodities in general, whereas Commerce’s focus is on assisting individual businesses seeking to export. FAS provides funding to agricultural trade associations, state and regional trade groups, small businesses, and cooperatives that plan and carry out export promotion activities. The two largest programs are the following: The Market Access Program funds consumer promotions, market research, and technical capacity building to develop, maintain, and expand foreign markets for U.S. agricultural goods, including branded goods and generic commodities. In 2009, FAS allocated $200 million to this activity, which represented 55 percent of its total export promotion budget. The Foreign Market Development Program focuses on long-term development of foreign markets for generic U.S. agricultural commodities. In 2009, FAS allocated $34 million to this activity, which represented 9 percent of its total export promotion budget. These programs are supplemented by smaller funding programs. One program helps trade organizations provide sample agricultural products, another provides funding to help overcome technical barriers to exporting, and a third funds development of exports in emerging markets. Additionally, FAS staff overseas provide market intelligence for U.S. firms and work on export promotion activities, including market research and trade shows. State supports U.S. exporters in locations where CS does not have a presence. Commerce and State signed a Memorandum of Understanding in January 2009, formalizing this representation. Prior to that, State acted on behalf of CS informally. Under the memorandum, State staff at these partnership posts provide services developed by CS, including matching businesses with potential customers and market research. Providing CS services at partnership posts is not a full-time job for State FSOs and LES. In a survey of the amount of time partnership post staff spent on export promotion efforts in 2009, FSOs indicated they spent about one-quarter of their time on export promotion, and LES at these posts spent about half of their time on it. State also provides approximately $340,000 to $400,000 per year in financial support for posts’ business promotion and commercial outreach activities through a Business Facilitation Incentive Fund. To analyze the International Trade Administration’s (ITA) 2011 budget request for the U.S. and Foreign Commercial Service (CS), we determined the extent to which ITA followed the best practices outlined in the GAO Cost Estimating and Assessment Guide. The guide identifies 12 practices that are the basis for effective cost estimation, including cost estimation for annual budget requests. It associates these practices with four characteristics: accurate, well documented, comprehensive, and credible.
The Office of Management and Budget (OMB) endorsed this guidance as being sufficient for meeting most cost-estimating requirements, including for budget formulation. If followed correctly, these practices should result in reliable and valid budgets that (a) can be easily and clearly traced, replicated, and updated, and (b) enable managers to make informed decisions. As table 7 illustrates, we found that CS’s budget development methods substantially met two, partially met one, and minimally met one of these four characteristics. After conducting this assessment, we identified major strengths and weaknesses of the 2011 budget request. The following explains the definitions we used in assessing ITA’s methods for estimating costs in its annual budget submission: Met—ITA provided complete evidence that satisfies the entire criterion. Substantially met—ITA provided evidence that satisfies a large portion of the criterion. Partially met—ITA provided evidence that satisfies about half of the criterion. Minimally met—ITA provided evidence that satisfies a small portion of the criterion. Not met—ITA provided no evidence that satisfies any part of the criterion. The sections that follow highlight the key findings of our assessment. Best practices for comprehensiveness include an estimating plan that includes sufficient resources, an estimating approach with standard cost elements broken down to sufficient detail, and clear identification of ground rules and assumptions. Estimating plan. The budgeting team had the proper number and mix of resources to develop the budget request, and team members were from a centralized office. The team leader had appropriate experience and qualifications, although CS did not provide us with enough information to determine whether other team members were qualified. Estimating approach. The budget used a standard cost element structure that defined all cost elements and addressed relevant costs. CS broke down pertinent costs to an acceptable level of detail. CS properly separated contractor costs from government costs, although it detailed contractor costs more explicitly for costs that were new in 2011 than for ongoing costs. Ground rules and assumptions. CS relied on ground rules and assumptions, such as using 2010 amounts with adjustments appropriate for each cost element to estimate costs for 2011, but assumptions were not fully documented. CS did not determine risk distributions for all assumptions, which would enable it to perform an uncertainty analysis for key cost elements. Best practices for being well documented include clearly defining the estimate’s purpose, defining key characteristics of the budget including primary cost drivers and systems for updating the budget, clearly identifying ground rules and assumptions, obtaining data properly, documenting the estimate so that corroborating data and calculations can be identified, and presenting clear and sufficient information to management for approval. Purpose of estimate. CS clearly defined the purpose and scope of its budget request, and all applicable costs were estimated. Budget characteristics. The number of staff is the primary driver of the cost of the 2011 budget request. CS received a $5.2 million total increase in 2010 for increasing CS’s presence in emerging and developing economies.
Program staff reviewed the budget and sent corrections on inaccurate items to the budget team, although there is no ongoing system for keeping the budgeting team updated, and CS did not provide us with information on whether there was one centralized place where budget update information was stored. Ground rules and assumptions. See above under the discussion of comprehensiveness. The best practice of setting ground rules and assumptions is relevant both to being well documented and to comprehensiveness. Data. Consistent with best practices, CS used historical data to estimate key operational costs. CS performed cross-checks on its data by having program staff verify assumptions in new estimates against historical data, and developed a computer program to check for common errors. However, salaries for locally employed staff (LES) vary widely, which causes uncertainty in the cost estimate. Documentation. In some cases, we required the guidance of CS budget analysts to identify backup support because the documentation was insufficient to allow someone unfamiliar with the budget to locate detailed corroborating data. The budget documentation did not provide a step-by-step description of the budgeting process, methods, or sources. ITA staff said that documenting and chronicling the information used to create the budget request would be a best practice they aspire to but do not currently follow. Presenting to management for approval. CS presented the budget to department-level officials in the Department of Commerce as well as to OMB and the House Appropriations Committee. However, we were unable to obtain any documentation of what was presented at the briefings, so we could not determine whether it contained enough detail for management to understand the level of accuracy, completeness, and quality of the estimate. Best practices for accuracy include appropriate methodology for developing the point estimate, and updating the estimate to reflect actual costs and changes. Point estimate. CS officials used relevant historical cost data and considered adjustments for general inflation when estimating costs. They varied their estimation methodologies as appropriate for different situations. However, although they were aware that salaries were their largest cost driver, they used an overly simplistic averaging approach to reflect likely new staff salaries at hiring. Additionally, since CS did not perform a risk analysis, it is not possible to know whether its point estimate was the most likely reflection of actual costs or was overly optimistic or conservative. We found no mathematical mistakes in the request, and CS validated the request by looking for errors. CS also cross-checked the budget estimates with program staff and with budget staff at both Commerce and OMB. Update with actual costs and changes. ITA updated the request based on feedback from program staff reviews. Additionally, actual costs were compared with estimates on a monthly basis. However, CS did not share changes to the cost estimate with us, so we were unable to assess whether changes were properly updated. Best practices for credibility include conducting a sensitivity analysis and conducting a risk analysis. Sensitivity analysis. CS did not perform a sensitivity analysis on each of the major assumptions to determine how outcomes would vary if they changed. Major assumptions included salary estimates, annual salary increases, impact of currency fluctuation, and travel costs; a sketch of such an analysis follows.
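A one-at-a-time sensitivity analysis is straightforward: hold all assumptions at their baseline values, vary one, and record the change in total cost. The sketch below illustrates the mechanics; every figure in it is invented for illustration, and none comes from ITA’s budget model.

# Hypothetical one-at-a-time sensitivity analysis; all figures are invented.

def total_cost(salary_growth, fx_change, travel):
    payroll = 200_000_000 * (1 + salary_growth)   # assumed payroll base
    overseas = 50_000_000 * (1 + fx_change)       # assumed FX-exposed costs
    return payroll + overseas + travel

base = {"salary_growth": 0.03, "fx_change": 0.00, "travel": 5_000_000}
swings = {"salary_growth": 0.05, "fx_change": 0.10, "travel": 6_000_000}

baseline = total_cost(**base)
for assumption, value in swings.items():
    scenario = dict(base, **{assumption: value})
    delta = total_cost(**scenario) - baseline
    print(f"{assumption}: total cost changes by ${delta:,.0f}")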
Risk analysis. CS did not perform a risk analysis to quantify the overall risk associated with changes to the assumptions that drive its budget. A risk analysis would help provide CS managers with information to determine the probability that costs for key operations, such as salaries, may exceed funding levels requested in the budget, so that they could make spending and hiring decisions accordingly. In addition to the individual named above, Adam Cowles, Assistant Director; Gezahegne Bekele; Elizabeth Bowditch; Karen Deans; Martin DeAlteriis; Julie Hirshen; Grace Lui; Karen Richey; Meredith H. Trauner; and Amanda Weldon made key contributions to this report.
Since the recent recession, policymakers have emphasized the role exports can play in strengthening the U.S. economy and in creating higher paying jobs. In March 2010 the President signed an Executive Order creating the National Export Initiative (NEI), with a goal of doubling U.S. exports in 5 years. However, since 2004 the workforce of the U.S. and Foreign Commercial Service (CS) has shrunk, calling into question the ability of this key agency to increase its activities to assist U.S. businesses with their exports. In response to a conference committee mandate, GAO reviewed (1) how well CS managed its resources from 2004 to 2009, and (2) the completeness of CS's workforce plans and the quality of its fiscal year 2011 budget request. GAO analyzed data from the Departments of Agriculture, Commerce, and State; reviewed agency documents; and interviewed agency officials. CS had management control weaknesses over its resources from 2004 to 2009. During this period, CS's budgets remained essentially flat as per capita personnel costs and administrative costs increased. However, CS leadership did not recognize the long-term implications of these changes because it lacked key financial and workforce information and risk analysis necessary for good management control. CS continued to pay fees associated with positions it maintained in U.S. embassies that were vacant but not officially eliminated. As CS's financial constraints grew, officials delayed their impact by using a variety of financial management practices. For example, the International Trade Administration (ITA), CS's parent agency, attributed some of CS's centralized costs to other units. However, as the availability of offsetting funds declined and costs continued growing, CS leadership failed to recognize the risks from these changes in accordance with good management controls, and reached a "crisis" situation in 2009. Officials froze hiring, travel, training, and supplies, compromising CS's ability to conduct its core business. CS's workforce declined by about 14 percent from its peak level in 2004 through attrition--affecting the mix and distribution of personnel. CS intends to rebuild its workforce but lacks key planning elements for doing so, and its budget request has weaknesses that could affect its ability to meet its goals. CS will have a central role in implementing the NEI. The President's 2011 budget requested $321 million for CS, $63 million more than its 2010 appropriation. The budget would fund a major staff increase. CS is allocating $5.2 million of its 2010 appropriation to begin recruiting new staff. However, as new executive-level leadership was arriving, GAO found that CS lacked key planning elements, including a clear sense of strategic direction and an analysis to determine its workforce needs. Also, it had not updated its workforce plans to address staffing gaps since fiscal year 2007. Adding more staff could be delayed because CS's human resources office is itself understaffed and because CS requires up to 2 years to hire and train new Foreign Service Officers. GAO also found that the 2011 budget request, though sound in many respects, has weaknesses; it lacks some documentation, and it lacks risk analysis and contingency plans for highly variable program costs, which could lead to cost overruns. GAO recommends to the Secretary of Commerce that CS (1) strengthen management controls, (2) improve workforce planning, and (3) improve cost estimating related to CS's budget estimate. 
Commerce agreed with our findings and recommendations.
Distinctions between cruise missiles and UAVs are becoming blurred as the militaries of many nations, in particular the United States, attach missiles to traditional reconnaissance UAVs and develop UAVs dedicated to combat missions. A UAV, a pilotless vehicle that operates like an airplane, can be used for a variety of military and commercial purposes. UAVs are available in a variety of sizes and shapes, propeller-driven or jet propelled, and can be straight-wing aircraft or have tilt-rotors like helicopters. They can be as small as a model aircraft or as large as a U-2 manned reconnaissance aircraft. A cruise missile is an unmanned aerial vehicle designed for one-time use, which travels through the air like an airplane before delivering its payload. A cruise missile consists of four major components: a propulsion system, a guidance and control system, an airframe, and a payload. The technology for the engine, the autopilot, and the airframe could be similar for both cruise missiles and UAVs, according to a 2000 U.S. government study of cruise missiles.

Cruise missiles provide a number of military capabilities. For example, they present significant challenges for air and missile defenses. Cruise missiles can fly at low altitudes to stay below radar and, in some cases, hide behind terrain features. Newer missiles are incorporating stealth features to make them less visible to radars and infrared detectors. Furthermore, land-attack cruise missiles may fly circuitous routes to get to their targets, thereby avoiding radar and air defense installations.

U.S. policy on the proliferation of cruise missiles and UAVs is expressed in U.S. commitments to the MTCR and the Wassenaar Arrangement. These multilateral export control regimes are voluntary, nonbinding arrangements among like-minded supplier countries that aim to restrict trade in sensitive technologies. Regime members agree to restrict such trade through their national laws and regulations, which set up systems to license the exports of sensitive items. The four principal regimes are the MTCR; the Wassenaar Arrangement, which focuses on trade in conventional weapons and related items with both civilian and military (dual-use) applications; the Australia Group, which focuses on chemical and biological technologies; and the Nuclear Suppliers Group, which focuses on nuclear technologies. The United States is a member of all four regimes. Regime members conduct a number of activities in support of the regimes, including (1) sharing information about each other's export licensing decisions, including certain export denials and, in some cases, approvals, and (2) adopting common export control practices and control lists of sensitive equipment and technology into national laws or regulations.

Exports of commercially supplied American-made cruise missiles, military UAVs, and related technology are transferred pursuant to the Arms Export Control Act, as amended, and the International Traffic in Arms Regulations, implemented by State. Government-to-government transfers are made pursuant to the Foreign Assistance Act of 1961, as amended, and are subject to DOD guidance. Exports of dual-use technologies related to cruise missiles and UAVs are transferred pursuant to the Export Administration Act of 1979, as amended, and the Export Administration Regulations, implemented by Commerce.
The Arms Export Control Act, as amended in 1996, requires the President to establish a program for end-use monitoring of defense articles and services sold or exported under the provisions of the act and the Foreign Assistance Act. This requirement states that, to the extent practicable, end-use monitoring programs should provide reasonable assurance that recipients comply with the requirements imposed by the U.S. government on the use, transfer, and security of defense articles and services. In addition, monitoring programs, to the extent practicable, are to provide assurances that defense articles and services are used for the purposes for which they are provided. The Export Administration Act, as amended, provides the Department of Commerce with the authority to enforce dual-use controls. Under the act, Commerce is authorized to conduct post-shipment verification (PSV) visits outside the United States for dual-use exports.

Although cruise missiles and UAVs provide important capabilities for the United States and its friends and allies, in the hands of U.S. adversaries they pose substantial threats to U.S. interests. First, anti-ship cruise missiles threaten U.S. naval forces deployed globally. We reported in 2000 that the next generation of anti-ship cruise missiles—most of which are now expected to be fielded by 2007—will be equipped with advanced target seekers and stealthy designs. These features will make them more difficult to detect and defeat. At least 70 nations possess some type of cruise missile, mostly short-range, anti-ship missiles armed with conventional, high-explosive warheads, according to a U.S. government study. Countries that export cruise missiles currently include China, France, Germany, Israel, Italy, Norway, Russia, Sweden, the United Kingdom, and the United States. China and Russia have sold cruise missiles to Iran, Iraq, Libya, North Korea, and Syria. Nations that manufacture but do not yet export cruise missiles currently include Brazil, India, Iran, Iraq, North Korea, South Africa, and Taiwan. None of these nonexporting manufacturing countries is a member of the Wassenaar Arrangement, and only Brazil and South Africa are in the MTCR.

Second, land-attack cruise missiles have the potential, over the long term, to threaten the continental United States and U.S. forces deployed overseas. Various government and academic studies have raised concerns that the wide availability of commercial items, such as global positioning system receivers and lightweight engines, allows both countries and nonstate actors to enhance the accuracy of their systems, upgrade to greater range or payload capabilities, and convert certain anti-ship cruise missiles into land-attack cruise missiles. Although not all cruise missiles can be modified into land-attack cruise missiles because of technical barriers, specific cruise missiles can be and have been. For example, a 1999 study outlined how the Chinese Silkworm anti-ship cruise missile had been converted into a land-attack cruise missile. Furthermore, the Iraq Survey Group reported in October 2003 that it had discovered 10 Silkworm anti-ship cruise missiles modified to become land-attack cruise missiles and that Iraq had fired 2 of these missiles at Kuwait. According to an unclassified national intelligence estimate, several countries are technically capable of developing a missile launch mechanism to station on forward-based ships or other platforms to launch land-attack cruise missiles against the United States.
Finally, UAVs represent an inexpensive means of launching chemical and biological attacks against the United States and allied forces and territory. For example, the U.S. government reported its concern over this threat in various meetings and studies. The Acting Deputy Assistant Secretary of State for Nonproliferation testified in June 2002 that UAVs are potential delivery systems for WMD and are ideally suited for the delivery of chemical and biological weapons, given their ability to disseminate aerosols in appropriate locations at appropriate altitudes. He added that, although the primary concern has been that nation-states would use UAVs to launch WMD attacks, there is potential for terrorist groups to produce or acquire small UAVs and use them for chemical or biological weapons delivery.

The U.S. government generally uses two key nonproliferation tools—multilateral export control regimes and national export controls—to address cruise missile and UAV proliferation, but both tools have limitations. The United States and other governments have traditionally used multilateral export control regimes, principally the MTCR, to address missile proliferation. However, despite successes in strengthening controls, the growing capability of countries of concern to develop and trade technologies used for WMD limits the regime's ability to impede proliferation. For example, between 1997 and 2002, the United States and other governments successfully revised the MTCR's control lists of sensitive missile-related equipment and technology to include six of eight U.S.-proposed items related to cruise missile and UAV technology. Adding items to the control lists commits regime members to provide greater scrutiny when deciding whether to license the items for export. Despite the efforts of these regimes, nonmembers such as China and Israel continue to acquire, develop, and export cruise missile or UAV technology. The growing capability of nonmember supplier countries to develop technologies that could be used for WMD and trade them with other countries of proliferation concern undermines the regimes' ability to prevent proliferation.

In October 2002, we reported on other limitations that impede the ability of the multilateral export control regimes, including the MTCR and Wassenaar Arrangement, to achieve their nonproliferation goals. We found that MTCR members may not share complete and timely information, such as members' denied export licenses, in part because the regime lacks an electronic data system to send and retrieve such information. Wassenaar Arrangement members share export license approval information but collect and aggregate it to such a degree that it cannot be used constructively. Both the MTCR and the Wassenaar Arrangement use a consensus-based process that makes decision-making difficult. The regimes also lack a means to enforce compliance with members' political commitments to regime principles. We recommended that the Secretary of State establish a strategy to work with other regime members to enhance the effectiveness of the regimes by implementing a number of steps, including (1) adopting an automated information-sharing system in the MTCR to facilitate more timely information exchanges, (2) sharing greater and more detailed information on approved exports of sensitive transfers to nonmember countries, (3) assessing alternative processes for reaching decisions, and (4) evaluating means for encouraging greater adherence to regime commitments.
However, State has not been responsive in implementing the recommendation to establish a strategy to enhance the effectiveness of the regimes. State officials said that the recommendation is under consideration in a review by the National Security Council that has been ongoing for over a year.

The U.S. government uses its national export control authorities to address missile proliferation but finds it difficult to identify and track commercially available items not covered by control lists. For example, Bureau of Immigration and Customs Enforcement agents, upon inspecting an item to be exported, might identify that the item is a circuit board, but not that it is part of a guidance system and that the guidance system is intended for a cruise missile. Moreover, a gap in the catch-all provision of U.S. export control regulations could allow subnational actors to acquire American cruise missile or UAV technology for missile proliferation or terrorist purposes without violating U.S. export control laws or regulations. This gap in U.S. export control authority enabled American companies to legally export dual-use items to a New Zealand resident who bought the items to show how a terrorist could legally build a cruise missile. The gap results from current regulations that restrict the sale of certain dual-use items to national missile proliferation projects and countries of concern, but not to nonstate actors such as certain terrorist organizations or individuals. The United States has other nonproliferation tools to address cruise missile and UAV proliferation—diplomacy, sanctions, and interdiction of illicit shipments of items—but these tools have had unclear results or have been little used.

End-use monitoring refers to the procedures used to verify that foreign recipients of controlled U.S. exports use such items according to U.S. terms and conditions of transfer. A post-shipment verification visit is a key end-use monitoring tool for U.S. agencies to confirm that authorized recipients of U.S. technology both received transferred items and used them in accordance with conditions of the transfer. State is responsible for conducting PSVs on direct commercial sales of cruise missiles, UAVs, and related technology. We found that State did not use PSVs to assess compliance with cruise missile or UAV licenses having conditions limiting how the item may be used. These licenses included items deemed significant by State regulations. Based on State licensing data, we identified 786 licenses for cruise missiles, UAVs, or related items from fiscal years 1998 through 2002. Of these, 480 (61 percent) were licenses with conditions, while 306 (39 percent) were licenses without conditions. We found that State did not conduct PSVs for any of the 480 licenses with conditions and conducted PSVs on 4 of the 306 licenses approved without conditions. A State licensing official stated that few post-shipment checks have been conducted for cruise missiles, UAVs, and related items because many are destined for well-known end users in friendly countries. However, over fiscal years 1998 through 2002, 129 of the 786 licenses authorized the transfer of cruise missile and UAV-related items to countries such as Egypt, Israel, and India. These countries are not MTCR members, which indicates that they might pose a higher risk of diversion.
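To put these monitoring rates in perspective, the license counts above imply an overall PSV coverage rate of roughly half a percent. This is a simple computation from the figures stated above, not an additional figure from the report:

$$\frac{480}{786}\approx 61\%\ \text{(with conditions, 0 PSVs)},\qquad \frac{306}{786}\approx 39\%\ \text{(without conditions, 4 PSVs)},\qquad \frac{0+4}{786}\approx 0.5\%\ \text{(overall PSV coverage)}.$$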
In commenting on a draft of our report, State emphasized the importance of pre-license checks in verifying controls over the end user and end use of exported items and said that we did not include such checks in our analysis. We therefore reviewed the original 786 cruise missile and UAV licenses to determine how many had received pre-license checks, a possible mitigating factor reducing the need to conduct a PSV. We found that only 6 of the 786 licenses from fiscal years 1998 through 2002 that State provided us had been selected for pre-license checks.

Defense is responsible for monitoring transfers of cruise missiles, UAVs, and related technology provided under government-to-government agreements through the Foreign Military Sales program. Defense's end-use monitoring program has conducted no end-use checks related to cruise missile or UAV transfers, according to the program director. From fiscal years 1998 through 2002, DOD approved 37 agreements for the transfer of more than 500 cruise missiles and related items, as well as one transfer of UAV training software. The agreements authorized the transfer of Tomahawk land-attack cruise missiles, Standoff Land Attack Missiles, and Harpoon anti-ship cruise missiles, as well as supporting equipment such as launch tubes, training missiles, and spare parts. Approximately 30 percent of cruise missile transfers were destined for non-MTCR countries. Despite the 1996 legal requirement to create an end-use monitoring program, Defense's Golden Sentry monitoring program is not yet fully implemented. DOD issued program guidance in December 2002 that identified the specific responsibilities for new end-use monitoring activities. In addition, as of February 2004, DOD was conducting visits to Foreign Military Sales recipient countries to determine the level of monitoring needed and was identifying weapons and technologies that may require more stringent end-use monitoring. The program director stated that he is considering adding cruise missiles and UAVs to a list of weapon systems that receive more comprehensive monitoring.

The Commerce Department is responsible for conducting PSVs on exports of dual-use technology that might have military applications for cruise missiles and UAVs. Based on Commerce licensing data, we found that Commerce issued 2,490 dual-use licenses between fiscal years 1998 and 2002 for items that could be useful in developing cruise missiles or UAVs. These licenses were for items going to countries including India, Israel, Poland, Switzerland, Turkey, and the United Arab Emirates. Of these, Commerce selected 2 percent of the licenses, or 52 cases, for a PSV visit and completed visits for about 1 percent of the licenses, or 29 cases.

Other supplier countries place conditions on cruise missile and UAV-related transfers, but few reported conducting end-use monitoring once they exported the items. While national export laws authorize end-use monitoring, none of the foreign government officials reported to us any PSV visits for cruise missile or UAV-related items. Government officials in France, Italy, and the United Kingdom stated that their respective governments generally do not verify conditions on cruise missile and UAV transfers and conduct few PSV visits of such exports. The South African government was the only additional supplier country responding to a written request for information that reported it regularly requires and conducts PSVs on cruise missile and UAV transfers.
The continued proliferation of cruise missiles and UAVs poses a growing threat to the United States, its forces overseas, and its allies. Most countries already possess cruise missiles, UAVs, or related technology, and many are expected to develop or obtain more sophisticated systems in the future. The dual-use nature of many of the components of cruise missiles and UAVs also raises the prospect that terrorists could develop rudimentary systems that could pose additional security threats to the United States. Because this technology is widely available throughout the world, the United States works in concert with other countries through multilateral export control regimes whose limited effectiveness could be enhanced by adopting recommendations we have made in previous reports.

U.S. export controls may not be sufficient to prevent cruise missile and UAV proliferation and to ensure compliance with license conditions. Because some key dual-use components can be acquired without an export license, it is difficult for the export control system to limit or track their use. Moreover, current U.S. export controls may not prevent proliferation by nonstate actors, such as certain terrorists, who operate in countries that are not currently restricted under missile proliferation regulations. Furthermore, the U.S. government seldom uses its end-use monitoring programs to verify compliance with the conditions placed on items that could be used to develop cruise missiles or UAVs. As a result, the U.S. government does not have sufficient information to know whether recipients of these exports are effectively safeguarding equipment and technology and, thus, protecting U.S. national security and nonproliferation interests. The challenges to U.S. nonproliferation efforts in this area, coupled with the absence of end-use monitoring programs by several foreign governments for their exports of cruise missiles or UAVs, raise questions about how well nonproliferation tools are keeping pace with the changing threat.

We recommended that the Secretary of Commerce assess and report to Congress on the adequacy of the export control regulations' catch-all provision to address missile proliferation by nonstate actors and on ways the provision might be modified. We also recommended that the Secretaries of State, Commerce, and Defense each complete a comprehensive assessment of the nature and extent of compliance with license conditions on cruise missiles, UAVs, and related dual-use technology. As part of the assessment, the departments should also conduct additional PSV visits on a sample of cruise missile and UAV licenses. This assessment would allow the departments to gain critical information that would enable them to better balance the potential proliferation risks of various technologies against the available resources for conducting future PSV visits.

Commerce and Defense partially concurred with our recommendations, which we modified to address their comments. State disagreed with the need to conduct a comprehensive assessment of the nature and extent of compliance with license conditions for cruise missile and UAV technology transfers. However, State said that it would consider conducting more PSVs on such technology transfers as it improves its monitoring program.

Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For future contacts regarding this testimony, please contact Joseph Christoff at (202) 512-8979. David C. Maurer, Jeffrey D.
Phillips, Claude Adrien, W. William Russell IV, Lynn Cothern, Stephen M. Lord, and Richard Seldin made key contributions to this statement.
Cruise missiles and unmanned aerial vehicles (UAV) pose a growing threat to U.S. national security interests as accurate, inexpensive delivery systems for conventional, chemical, and biological weapons. GAO assessed (1) the tools the U.S. and foreign governments use to address proliferation risks posed by the sale of these items and (2) efforts to verify the end use of exported cruise missiles, UAVs, and related technology.

The growing threat to U.S. national security of cruise missile and UAV proliferation is challenging the tools the United States has traditionally used. Multilateral export control regimes have expanded their lists of controlled technologies to include cruise missile and UAV items, but key countries of concern are not members. U.S. export control authorities find it increasingly difficult to limit or track unlisted dual-use items that can be acquired without an export license. Moreover, a gap in U.S. export control authority enables American companies to export certain dual-use items to recipients that are not associated with missile projects or countries listed in the regulations, even if the exporter knows the items might be used to develop cruise missiles or UAVs. American companies have in fact legally exported dual-use items with no U.S. government review to a New Zealand resident who bought the items to build a cruise missile.

The U.S. government seldom uses its end-use monitoring programs to verify compliance with conditions placed on the use of cruise missile, UAV, or related technology exports. For example, State officials do not monitor exports to verify compliance with license conditions on missiles or other items, despite legal and regulatory requirements to do so. Defense has not used its end-use monitoring program, initiated in 2002, to check the compliance of users of more than 500 cruise missiles exported between fiscal years 1998 and 2002. Commerce conducted visits to assess the end use of items for about 1 percent of the 2,490 missile-related licenses we reviewed. Thus, the U.S. government cannot be confident that recipients are effectively safeguarding equipment in ways that protect U.S. national security and nonproliferation interests.
Pain, which affects millions of Americans, can be characterized in terms of intensity—mild to severe—and duration—acute or chronic. While the appropriate medical treatment of pain varies according to these two dimensions, opioid analgesics can provide pain relief for some patients. These prescription pain relievers can be made in either immediate-release or extended-release formulations. Immediate-release pain relievers work for shorter periods of time, while extended-release pain relievers are designed to provide a longer period of drug release so that they can be taken less frequently.

Prescription pain relievers are sometimes used in a manner other than as prescribed—that is, they are abused and misused. While federal agencies' definitions of abuse and misuse vary, they generally incorporate three types of inappropriate use. First, some individuals use prescription pain relievers with the intent to get high, whether or not they were prescribed the drugs. Second, some individuals use prescription pain relievers that they were not prescribed to relieve pain; for example, by borrowing a pill from a friend in order to treat a headache. Third, some individuals, while seeking pain relief, incorrectly use prescription pain relievers that were prescribed to them, such as by taking more than prescribed.

Prescription pain relievers have serious risks when they are abused and misused. Abuse and misuse of prescription pain relievers can lead to addiction and severe respiratory depression, which can cause death. Depending on the amount taken, even a single dose could cause death if taken by an individual who does not regularly use such pain relievers and whose body is not accustomed to their effects. Also, using alcohol or other drugs with prescription pain relievers can increase the risk of dangerous side effects, including death.

Federal agencies use both regulatory and programmatic approaches in their efforts to prevent prescription pain reliever abuse and misuse. Because of their potential for abuse, prescription pain relievers are regulated under the Controlled Substances Act. Prescribers, such as physicians, physician assistants, nurse practitioners, and dentists, must register with DEA to prescribe drugs regulated under the act, and prescribers serve a key role in reducing prescription drug abuse and misuse. However, federal agencies have noted gaps in prescriber education about issues related to prescription pain reliever abuse and misuse, including that most prescribers receive little training on the importance of appropriate prescribing and dispensing of prescription pain relievers, on how to recognize substance abuse in their patients, or on treating pain. A recent study on pain education in medical schools found that such education is limited, variable, and often fragmentary. Further, given the recent introduction of new pain relievers to the U.S. market and advances in pain management, prescribers who completed their medical training in prior years may not have received training in prescribing certain types of pain relievers, such as extended-release or long-acting formulations. While continuing education of current prescribers could help address this issue, according to an American Medical Association publication, as of September 2011 medical boards in only nine states had a continuing medical education (CME) requirement related to education on controlled substance prescribing or pain management for certain prescribers.
A representative of the American Pain Foundation told us that the organization frequently receives reports from patients that, in some communities, prescribers have stopped prescribing prescription pain relievers because of a lack of knowledge about how to safely prescribe them. Federal public education efforts seek to educate patients and the general public of all ages about the appropriate use, secure storage, and disposal of prescription drugs, as well as the risks associated with prescription drug abuse and misuse (see app. IV for descriptions of federal efforts to educate the general public about prescription pain reliever abuse and misuse). We have previously identified certain key practices that are important for developing educational outreach efforts, motivating a target audience, and alleviating challenges, such as prioritizing limited resources.

Multiple federal agencies play a role in preventing the abuse and misuse of prescription pain relievers. Within the Executive Office of the President, ONDCP establishes policies, priorities, and objectives for a national drug control program. ONDCP also oversees several programs related to curbing drug abuse and misuse, including an educational media campaign. In addition, Department of Health and Human Services (HHS) agencies, including FDA, HRSA, NIH, and SAMHSA, have various responsibilities and engage in activities related to preventing the abuse and misuse of prescription pain relievers. FDA is responsible for ensuring the safety and effectiveness of drugs. FDA can require drug manufacturers to take measures to ensure the safety of their products, such as by providing patient and prescriber education materials. FDA also educates patients and providers about appropriate use and potential risks of drugs, including prescription pain relievers, in order to reduce preventable harm from these drugs. HRSA operates the federal Poison Control Program, which provides funds for poison control centers that provide treatment recommendations for poisonings involving prescription drug abuse and misuse. This program also has a campaign that includes public education about the risks of poisoning from prescription pain relievers. NIH, primarily through its component NIDA, provides strategic support for and conducts research on drug abuse and addiction. NIH's role also includes translating and disseminating this research into materials for public consumption. SAMHSA seeks to direct substance abuse and mental health services to the people most in need and to promote the use of evidence-based practices in these areas in the general health care system. In particular, the agency seeks to educate the public and prescribers about issues related to substance abuse in an effort to prevent such abuse and reduce its prevalence.

Finally, within the Department of Justice, DEA is responsible for enforcing the Controlled Substances Act and related regulations. One of DEA's roles is to control the quantity of schedule I and II controlled substances produced or procured each year in the United States, which it does by establishing annual quotas for U.S. manufacturers. (See app. II for a description of DEA's process for setting quotas for controlled substances.) The agency also supports nonenforcement programs aimed at reducing the illicit use of controlled substances, including education about prescription drug abuse and misuse and diversion.
To monitor trends in the extent of prescription pain reliever abuse and misuse, federal agencies rely on data obtained from four nationally representative data sources. Three of these data sources measure adverse health consequences related to abuse and misuse, and the fourth is a national household survey of drug use. Although these data sources do not directly measure abuse and misuse, when used together, they provide a more complete view of the problem of prescription pain reliever abuse and misuse than any of the data sources individually. Therefore, we refer to national data from these four data sources as key measures of prescription pain reliever abuse and misuse. The data sources used by federal agencies are:

DAWN, a public health surveillance system operated by SAMHSA, collects information on emergency department visits in the United States. DAWN staff review emergency department medical records from a nationally representative sample of hospitals to identify and gather information on visits in which drugs were involved, including visits where drugs were a direct cause and visits where drugs were a contributing factor.

TEDS, compiled by SAMHSA, gathers data from substance abuse treatment facilities in the United States on the demographic characteristics and substance abuse problems of those aged 12 or older admitted for treatment.

NVSS, operated by CDC, contains vital statistics data, including mortality data, such as causes of death, obtained from death certificates filed for every death from every jurisdiction in the United States.

NSDUH, an annual household survey sponsored by SAMHSA, gathers self-reported information on the use of illicit drugs (including the "nonmedical use" of prescription drugs), alcohol, and tobacco in the civilian, noninstitutionalized population of the United States aged 12 years old or older.

See appendix III for more information about the data collection methodologies and limitations of these data sources.

Key measures of prescription pain reliever abuse and misuse increased from 2003 to 2009, though the increases were not consistent across all measures. The largest increases were in measures of adverse health consequences. Federal officials suggested that increasing availability of prescription pain relievers and increasing high-risk behaviors by those who abuse or misuse the drugs, such as combining prescription pain relievers with other drugs or alcohol, likely contributed to the rise in adverse health consequences, though data about the reasons for the increases are limited. All three measures of adverse health consequences that we examined increased substantially in the U.S. population during the period we reviewed (see fig. 1). The estimated number of emergency department visits annually related to prescription pain reliever abuse and misuse increased by 142 percent from 2004 to 2009, an estimated increase of 288,000 visits. Admissions to substance abuse treatment facilities annually for prescription pain reliever abuse and misuse increased by 131 percent, or 133,000 admissions, from 2003 to 2009. The annual number of deaths resulting from unintentional overdoses of prescription pain relievers increased by 83 percent, equivalent to more than 5,000 deaths, from 2003 to 2008.
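As a rough cross-check, the baseline annual counts implied by the percent changes and absolute increases above can be recovered as baseline = increase ÷ rate. This is a back-of-the-envelope derivation from the stated figures, not counts reported directly here:

$$\text{ED visits (2004)} \approx \frac{288{,}000}{1.42} \approx 203{,}000,\qquad \text{treatment admissions (2003)} \approx \frac{133{,}000}{1.31} \approx 102{,}000,\qquad \text{overdose deaths (2003)} \approx \frac{5{,}000}{0.83} \approx 6{,}000.$$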
While these measures of adverse health consequences increased substantially, according to NSDUH survey data, the percent increase in the estimated number of people nationwide who abused and misused prescription pain relievers—another key measure of prescription pain reliever abuse and misuse—was relatively slight during the period we reviewed. In 2003, an estimated 11.7 million people reported abusing or misusing prescription pain relievers at some point over the past year, and this number increased by 6 percent to 12.4 million people in 2009. Appendix V shows data from the key measures by age group for each year that we reviewed.

Although information about the reasons for the substantial increases in adverse health consequences is limited, agency officials suggested that increasing availability of prescription pain relievers, especially extended-release and long-acting pain relievers, and increasing high-risk behaviors by those who abuse or misuse the drugs were likely contributors to the increased adverse health consequences related to prescription pain reliever abuse and misuse from 2003 to 2009. Over this time period, the number of prescriptions dispensed from U.S. pharmacies for prescription pain relievers increased by 32 percent—from 195 million prescriptions in 2003 to 257 million prescriptions in 2009—which CDC, FDA, and NIH officials attributed to factors such as an increased focus on pain management. Officials from a number of agencies noted, however, that while most prescription pain relievers are used as prescribed, a fraction of all prescribed pain relievers are abused and misused; available data do not allow officials to determine what fraction of prescription pain relievers are abused and misused. From 2003 to 2009, prescriptions for extended-release and long-acting pain relievers increased from approximately 15 million to 23 million, a fraction of the total number of prescriptions dispensed. (Estimates of dispensed prescriptions are from FDA's analysis of SDI, Vector One®: National, extracted June 2010.) Officials from most of the agencies we interviewed said that they are most concerned about extended-release and long-acting pain relievers, but they noted that immediate-release pain relievers are also abused and misused. Officials from multiple agencies noted, however, that data on what drug formulations are being abused and misused are limited because most measures of abuse and misuse do not gather information on the particular drug formulation involved in a case of an adverse health consequence or self-reported abuse and misuse.

A second factor that officials from several agencies said likely contributed to the increases in adverse health consequences is that more individuals may be engaging in high-risk behaviors when they abuse or misuse prescription pain relievers, though data on the extent of high-risk behaviors are limited. One high-risk behavior officials pointed to was combining prescription pain relievers with other substances, such as another prescription pain reliever, alcohol, or other drugs. Taken together, the interactions of such substances can lead to increased risk of life-threatening conditions.
From 2004 to 2009, the number of emergency department visits that involved combining prescription pain relievers with other substances increased by an estimated 200,000 visits, while the number of emergency department visits involving a prescription pain reliever alone increased by an estimated 88,000 visits. However, officials from several agencies told us that their understanding of how drugs are used in combination is limited by the available data. For example, NSDUH, which reports estimates of abuse and misuse based on a nationwide survey, does not ask survey respondents which substances they use in combination. In addition, NVSS data on unintentional overdose deaths are limited by the amount of detail listed on death certificates. Not all substances involved in a death may be listed on a death certificate, especially when a toxicology report is not used to determine the cause of death. CDC officials said that whether a toxicology report is used to determine the cause of death varies by jurisdiction, and currently, the number of postmortem examinations, which may include toxicology reports, is declining.

Officials from multiple agencies said that another high-risk behavior that may be leading to increased adverse health consequences is inhaling or injecting the drugs, rather than taking them orally as prescribed. From 2003 to 2009, the percentage of admissions to substance abuse treatment facilities where the admitted individual reported usually abusing or misusing prescription pain relievers through inhaling the drugs increased from 9 percent to 16 percent of cases. The percentage of admissions where the admitted individual reported using the drugs orally decreased from 72 to 69 percent of cases, while the percentage of admissions involving use of the drugs in other ways was stable. NIH has reported that inhaling and injecting drugs is more dangerous than taking them orally as prescribed. Inhaling or injecting the drugs delivers them more quickly to the brain and can increase the risk of addiction and overdose.

FDA, NIH, and SAMHSA are using a variety of strategies to fill the gaps federal agencies have identified in prescriber education related to treating pain, prescribing opioids appropriately, and identifying substance abuse in their patients, but officials told us that more education is needed. Strategies that these agencies pursued in fiscal year 2011 include developing CME programs, requiring training and certification in order to prescribe certain drugs, organizing physician mentoring networks, and developing curriculum resources for future prescribers.

First, FDA, NIH, and SAMHSA are using voluntary CME programs to educate prescribers about issues related to prescription pain reliever abuse and misuse. CME programs are educational activities that serve to maintain, develop, and increase the knowledge, skills, and professional performance and relationships physicians use to provide services to patients. Many state medical boards require prescribers to complete a certain number of CME credits for license re-registration. FDA is requiring manufacturers to develop a CME or continuing education (CE) course for prescribers as part of a Risk Evaluation and Mitigation Strategy (REMS) for extended-release and long-acting prescription pain relievers. Although completion of the course, which is expected to be implemented in early 2012, will be voluntary, FDA is requiring manufacturers to propose performance goals for the percentage of prescribers who complete it.
NIH is undertaking a different approach to using CME programs to educate prescribers about identification of substance abuse in their patients, reaching out to prescribers at medical conferences across the country. In a live theater CME format, NIH uses a dramatic reading of a portion of the play Long Day's Journey into Night that focuses on a character's morphine addiction, an expert panel reaction, and a facilitated audience discussion to highlight issues like incorporating screening, brief intervention, and referral to treatment into primary care settings. Finally, SAMHSA developed a CME course on prescribing opioids for chronic pain and partners with local host organizations, such as local medical organizations and state agencies, to offer it across the United States. The course is targeted at physicians, dentists, and other prescribers and can be modified to reflect the needs of the local host organization.

Another strategy FDA uses is requiring prescribers of certain prescription pain relievers to be trained and certified in order to prescribe them. As of October 2011, all five marketed transmucosal immediate-release fentanyl products had REMS with prescriber training and certification components due to unique concerns associated with these products. In order to become certified to prescribe these drugs for outpatient use, prescribers must review written materials, successfully complete a knowledge assessment, and register with the manufacturer of the drug by completing a prescriber enrollment form, which includes a commitment to complete a patient-prescriber agreement with each new patient. Prescribers are required to become recertified every 2 years.

A third strategy NIH and SAMHSA are pursuing is providing funding to develop physician clinical support systems, which provide educational resources and free, nationwide mentoring services related to prescribing prescription pain relievers. As of October 2011, two physician clinical support systems had been funded: one to assist practicing physicians interested in implementing substance abuse screening in their practices and one related to the appropriate use of prescription pain relievers for the treatment of chronic pain. Each system links physicians to trained clinical advisors who can provide telephone or e-mail responses to specific questions and offer support using the educational resources. SAMHSA also funds two additional physician clinical support systems related to the use of methadone and buprenorphine in addiction treatment, which are outside the scope of our review.

A fourth strategy is developing curriculum resources for future prescribers. NIDA provided funding to medical schools to develop curriculum resources for training medical residents to screen, treat, and refer patients with substance use disorders. As of October 2011, the medical schools had developed 10 curriculum resources, 5 of which are specific to prescription drug abuse and misuse. In addition, NIDA is taking the lead on a project with participation from more than 10 institutes and centers within the NIH to establish the NIH Pain Consortium Centers of Excellence in Pain Education. These Centers of Excellence aim to develop curricula that will educate medical students about best practices in the treatment of pain by fiscal year 2014. SAMHSA is also facilitating the development of curricula for training medical residents.
Through its Screening, Brief Intervention, Referral and Treatment Medical Residency Program, medical residency programs are developing curricula and clinical training for identifying substance use disorders, including training about issues related to prescription drug abuse and misuse, and incorporating the curricula and clinical training into 16 residency programs. The curriculum resources for these programs are designed to be transferable to medical schools and residency programs nationwide.

Despite various ongoing strategies to educate current and future prescribers about issues related to prescription pain reliever abuse and misuse, officials from each of the federal agencies we spoke with told us that more prescriber education is necessary. ONDCP officials indicated that they—with technical assistance from DEA, FDA, and SAMHSA—are working to develop a legislative proposal to require that all prescribers who request DEA registration to prescribe controlled substances be trained on the appropriate and safe use, proper storage, and disposal of prescription pain relievers as a precondition of registration. Currently, in order to register with DEA to prescribe a controlled substance, prescribers must hold a valid state license. Officials from many of the agencies we spoke with expressed support for mandatory prescriber education, with some noting that this would ensure that all prescribers were starting from the same baseline of knowledge. Officials from one agency expressed support for promoting the education of all prescribers through other means, such as working with state medical boards. Officials from several agencies explained that more prescriber education is necessary because the majority of educational strategies that federal agencies are currently pursuing are voluntary and may not reach the majority of either current or future prescribers. Because training could help prescribers feel more comfortable prescribing these drugs, ONDCP officials also explained that mandatory prescriber education could improve access to prescription pain relievers for patients with a legitimate need for pain relief. Officials from ONDCP also noted that by mandating education for all prescribers, rather than only for those who prescribed extended-release and long-acting prescription pain relievers, they could avoid a possible situation in which some prescribers would be unable to prescribe certain pain relievers because they had chosen not to take the training.

Representatives of the American Medical Association told us that they, along with a number of other associations representing prescribers, favored the use of positive incentives—such as a reduction in the $551 fee prescribers pay to DEA when registering to prescribe controlled substances—to encourage prescribers to complete voluntary education about these issues, rather than mandating such education. The American Medical Association representatives noted that the contribution that poor prescribing practices or fraudulent activity on the part of prescribers makes to the supply of prescription pain relievers that are diverted for abuse and misuse is unknown. As a result, they told us that it is unclear whether mandatory prescriber education would have an effect on prescription pain reliever abuse and misuse.

All federal agencies used almost all of the key practices for developing consumer education efforts; the resulting efforts varied in size, scope, and duration. Agencies also varied in how they used key practices when developing these efforts.
All agencies established metrics to monitor the implementation and functional elements of their educational efforts, but only two agencies have established or are planning to establish metrics to assess the impact of their efforts on audiences' knowledge, attitudes, and behavior.

All federal agencies used almost all of the key practices for developing consumer education efforts when developing the efforts that we reviewed to educate the general public about prescription pain reliever abuse and misuse. In fiscal year 2011, five agencies operated nine educational efforts targeted at the general public, ranging from websites to brochures to a museum exhibit (see app. IV for full descriptions of the educational efforts). Our prior work outlines key practices that agencies should engage in when developing public education efforts: define key goals and objectives of the educational effort; analyze the situation, including identifying competing voices or timing considerations; identify stakeholders and clarify their roles; identify resources; research target audiences, including identifying audience characteristics and motivators; develop clear, consistent messages; identify credible messenger(s); design a mix of media, including method and frequency of delivery; and establish metrics to measure success (for further description of these key practices, see app. III). Our review of initiatives to educate the general public about prescription pain reliever abuse and misuse shows that all of the agencies used almost all of these practices when developing their initiatives (see fig. 2). For instance, FDA relied on seven of nine key practices when developing outreach materials for its Opioid Public Service Announcements. Other agencies used more key practices when developing their education efforts; ONDCP relied on all nine key practices for developing consumer education efforts when it developed the prescription drug content for the National Youth Anti-Drug Media Campaign.

Agencies varied in how they used the key practices for developing public education efforts. For instance, SAMHSA used the key practice of developing consistent, clear messages by convening Project Advisory Teams composed of external stakeholders and subject matter experts. The teams met several times during the development of two phases of Not Worth the Risk; Even if it's Legal and gave input to SAMHSA about effective messaging and distribution channels for the program's target audiences, which include teens, college students, and "student influencers" (e.g., parents, teachers, health care providers). FDA used a different approach for the same key practice of developing consistent, clear messages for its Opioid Public Service Announcements—which include information about appropriate use, storage, and disposal of medications—relying on internal discussions between staff in its Office of Communications and the Office of New Drugs.

Although all agencies used many of the same key practices to develop their educational efforts, the resulting initiatives are different in terms of size, scope, and duration, and agencies dedicated varying amounts of resources to developing their efforts. For instance, NIH's Heads Up: Real News About Drugs and Your Body provides classroom materials—including magazine articles, student worksheets, and lesson plans—to students and teachers about a range of topics related to drug abuse and misuse, including but not limited to prescription drugs, and has done so each school year since 2002.
By contrast, DEA began collecting and disposing of unused prescription drugs through its Take Back Initiative in 2010. The agency developed outreach materials to raise awareness of the event, and the materials—including brochures and billboards—are specific to prescription drugs. There is also significant variation in terms of the resources different agencies used for program development. For instance, the budget for the National Youth Anti-Drug Media Campaign in fiscal year 2011 was approximately $35 million, whereas Not Worth the Risk; Even if it's Legal cost about $80,000 for the last phase of the brochure series (figures reflect resources for program development and dissemination in a fiscal year).

While all agencies established metrics to monitor the implementation and functional elements of their educational initiatives, only two agencies have established or planned to establish metrics to assess the impact of their initiatives on audiences' knowledge, attitudes, and behavior with regard to prescription pain reliever abuse and misuse. The former, known as process metrics, monitor the operational elements of educational efforts, such as the quantity or volume of outreach efforts. The latter, known as outcome metrics, are used to assess the impact of the initiative on the desired health or behavior outcome. Our prior work and other guides for developing consumer education efforts note that establishing both process and outcome metrics is a critical element of program development.

All federal agencies followed the key practice of establishing process metrics for the public education efforts we reviewed. For instance, DEA tracks the amount of activity and use across the different features and content pages on its websites, Just Think Twice and Get Smart About Drugs, including the most popular search terms and the amount of time spent on the components of the websites. DEA also monitors the number of visitors to its museum exhibit about prescription drug abuse and misuse, Good Medicine, Bad Behavior, and records the number of group visits by category (e.g., schools, universities, or senior citizens).

ONDCP and NIH were the only agencies that followed the key practice of establishing outcome metrics for their education efforts. ONDCP measures outcomes from its National Youth Anti-Drug Media Campaign (the Campaign) on a weekly basis through ongoing tracking studies. The tracking studies survey 100 teens each week about awareness of the Campaign and their attitudes, beliefs, and intentions regarding drug use, including where and how teens interact with the Campaign's website and attitudes after interacting with the website. ONDCP also awarded a contract to evaluate the Campaign's contribution to preventing drug abuse among young people in the United States, in particular by assessing the Campaign's impact on knowledge, attitudes, beliefs, and behavioral intention about drug use. However, ONDCP indicated that, as of September 2011, the contract was being terminated because the Campaign has not been funded for fiscal year 2012. NIH plans to evaluate outcomes for its NIDA for Teens website by surveying students and teachers about students' knowledge acquisition and attitude change after exposure to NIDA for Teens, as well as teachers' opinions on the utility of the website.
Although officials from agencies that did not establish outcome metrics told us that they recognize the importance of evaluating public education efforts, they cited challenges measuring the impact of such efforts and lack of financial resources as reasons for not assessing program outcomes. For instance, one official explained that an outcome evaluation for his agency's drug education program would cost more than developing and implementing the program itself.

Beyond the key practices for developing public education efforts, our prior work notes that using existing evidence to inform public health communications, such as research on teen messaging or evaluations of related efforts, can also be helpful in analyzing the effectiveness of educational efforts in addition to establishing outcome metrics (GAO, Program Evaluation: Strategies for Assessing How Information Dissemination Contributes to Agency Goals, GAO-02-923 (Washington, D.C.: Sept. 30, 2002)). When developing their public education efforts, agencies incorporated evidence-based strategies when possible, but limited evidence exists about how to successfully educate the public about prescription pain reliever abuse and misuse. For instance, research shows that teens may mistakenly believe that prescription pain relievers are safer than illicit drugs. As a result, officials from several agencies told us that they seek to dispel this misperception in their educational efforts. In addition, NSDUH data also indicate that most people who abuse or misuse prescription pain relievers get the drugs from a friend or family member. Thus, DEA officials told us that they have sought to educate the public about proper drug storage and disposal in order to limit the amount of drugs that are available to be diverted from medicine cabinets for abuse and misuse.

However, officials from multiple agencies explained that because there are distinct challenges when designing educational efforts about prescription pain reliever abuse and misuse compared to other drug prevention efforts, more research is needed in order to understand how to craft effective messages, particularly for teens. Officials said that education about prescription pain reliever abuse and misuse requires a more nuanced approach because there are legitimate medical uses for these products. In addition, officials from several agencies noted that educational efforts should avoid inadvertently alerting people to the possibility of using these drugs to get high. The motivations for abusing and misusing prescription drugs can also be different than the motivations for using illicit drugs, such as self-medicating for pain relief, and understanding how to effectively target the variety of reasons people abuse and misuse prescription drugs is another area that requires more research, according to agency officials.

The Surgeon General, with support from other federal agencies, is currently developing a Call to Action on youth prescription drug abuse that will discuss available evidence to support prevention strategies, including educational efforts. An official from the Office of the Surgeon General told us that the Call to Action, which is anticipated for release in February 2012, will also identify gaps in existing research related to youth prescription drug abuse prevention strategies and call for further research to be conducted to fill these gaps.
There are several similarities among agencies’ efforts, target audiences, and mediums across the nine public education initiatives and nine prescriber education programs we identified. Officials said that these similarities in public education efforts are beneficial in addressing prescription drug abuse and misuse because having multiple, reinforcing messages about the same subject is valuable in public health communications and because federal agencies provide slightly different perspectives on the issues surrounding prescription drug abuse and misuse. Likewise, the prescriber education programs we identified, though similar, are different in content and focus. Although these similar programs have the potential to be duplicative if not effectively coordinated, federal agencies have recently begun to coordinate their educational efforts. Nevertheless, federal agencies have missed opportunities to pool resources—a key practice for effective coordination—among similar education efforts, forgoing additional benefits that coordination could have provided. Among all nine federal initiatives to educate the general public about prescription pain reliever abuse and misuse that we reviewed, there are several instances of agencies engaging in similar efforts (see table 1). Officials told us that it is beneficial to have similar education efforts about prescription pain reliever abuse and misuse because of the complex nature of this problem and the fact that agencies provide different but reinforcing messages about the issue. For example, three initiatives—Just Think Twice, NIDA for Teens, and the National Youth Anti-Drug Media Campaign—use the same medium to target teens with similar messages about prescription drug abuse and misuse. These efforts provide web-based information and interactive features to educate teens about prescription drug abuse and misuse. (See fig. 3 and fig. 4 for examples of web-based efforts to educate teens about prescription drug abuse and misuse.) Officials working on these efforts noted that they chose to focus on teens because drug abuse typically starts during teen years. NIH officials told us that, after alcohol, tobacco, and marijuana, prescription and over-the-counter medications account for most of the drugs commonly abused by 12th graders. Teens are also more vulnerable to the negative effects of drug use since their brains are still developing. There are also two initiatives—DEA’s Get Smart About Drugs and ONDCP’s National Youth Anti-Drug Media Campaign—that use the same medium to target parents with similar messages about prescription drug abuse and misuse. For instance, both use websites that have interactive features that show parents where teens commonly access prescription drugs in the home. Both sites also include tips for parents about how to talk to teens about drugs and about how to identify signs of abuse. Officials acknowledged that these educational efforts have similar goals and use similar strategies to reach the same audiences. However, officials said that these similarities are beneficial in addressing prescription drug abuse and misuse. Officials from NIH, FDA, DEA, and ONDCP noted that having multiple, reinforcing messages about the same subject is valuable in public health communications, particularly about an issue as complex as prescription drug abuse and misuse.
The National Council on Patient Information and Education also told us that repetition and frequent delivery of information supports message reinforcement. Second, federal agencies have their own constituencies and each approaches prescription drug abuse and misuse from a slightly different perspective. For instance, NIDA for Teens provides a science-based perspective and includes information about how prescription drugs affect the brain. Specifically, the “Mind Over Matter” series on the NIDA for Teens website explains how prescription drugs mimic neurotransmitters to alter the brain’s chemistry. DEA’s Just Think Twice, on the other hand, provides more information about the legal consequences of abusing drugs, such as losing federal student loans, and the culture of drug abuse, including images of drugs and true stories of youth overdose deaths. NIH officials told us that both perspectives are important as some members of the public may go to NIH’s NIDA for information about these issues, while others may go to DEA. Officials also said that they cross-reference each other’s information when appropriate. For instance, ONDCP links to publications from NIH’s NIDA on one of the National Youth Anti-Drug Media Campaign websites. In addition to the similarities among the nine targeted educational efforts we reviewed, agencies are engaged in additional efforts outside the scope of our review which, taken together, may present areas of potential duplication. These additional efforts include posting materials from retired initiatives online, planning future efforts, and providing factual information about prescription pain relievers. For instance, FDA and SAMHSA have brochures and posters for teens and the elderly with messages about prescription pain reliever abuse and misuse that are no longer actively disseminated, but are still available on their websites. HRSA and, contingent on available resources, DEA are also planning to launch new or update existing educational initiatives in the next fiscal year, targeted at the elderly and parents, respectively. ONDCP is also planning to work with agencies and external stakeholders to develop and implement national public education campaigns on prescription drug abuse and misuse and on drug storage and disposal by April 2013. ONDCP officials told us that, as of October 2011, they were still considering various options and working to identify resources for these campaigns. Finally, three federal agencies have prescription drug fact pages on their main websites and FDA oversees the dissemination of drug information to patients through tools such as medication guides that are provided with some prescription drugs. Given the number of agencies involved in educating the public about prescription pain reliever abuse and misuse and the number of efforts currently under way, these additional efforts represent areas where there may be the potential for duplicative programming, if such efforts are not effectively coordinated. There are also similar target audiences and mediums among the nine prescriber education programs we identified, although these programs differ in content and focus (see table 2). For example, two CME courses, SAMHSA’s Prescribing Opioids for Chronic Pain course and FDA’s requirement for prescriber education through its extended-release and long-acting opioid REMS, are both targeted at current prescribers.
Though these courses have some similar content about patient selection and monitoring, FDA officials noted that prescriber education through REMS will focus on extended-release and long-acting products, whereas the SAMHSA course includes information on both extended-release and immediate-release pain relievers. They also noted that the SAMHSA course is more focused on addiction and treatment than the REMS materials will be. As a result, prescribers are being educated about the full range of issues related to prescription pain reliever abuse and misuse, including treating pain, appropriate prescribing, and recognizing substance abuse in their patients. Federal agencies use three main mechanisms—two mechanisms that are overseen by ONDCP, the National Drug Control Strategy and the Prescription Drug Abuse Prevention Plan, and one mechanism within HHS, the HHS Behavioral Health Coordinating Committee’s Subcommittee on Prescription Drug Abuse—to coordinate their educational efforts. The agencies have also begun using key practices for coordination that we have identified in prior work on practices that help enhance and sustain collaboration. ONDCP releases the National Drug Control Strategy (the Strategy) on an annual basis and it outlines the administration’s goals and priorities for reducing the rate of drug abuse and misuse and the associated consequences. The Strategy, which outlines drug control policies and programs for illicit and prescription drugs, serves as a coordination mechanism for drug control agencies and incorporates several key practices for interagency collaboration. For instance, the Strategy defines and articulates a common outcome by establishing the administration’s goals for reducing drug abuse and misuse. The Strategy also provides a means for agencies to agree on roles and responsibilities by listing specific actions for agencies to take, including identifying lead and partnering agencies for each action item. For example, “Enhance Healthcare Providers’ Skills in Screening and Brief Intervention” is an action item in the 2010 Strategy and it specifies that SAMHSA is the lead agency, with NIH’s NIDA, HRSA, and the Indian Health Service listed as partnering agencies. Finally, the Strategy provides a means to monitor, evaluate, and report on results for collaborative efforts. Agencies developed objectives and 1- and 2-year milestones for the action items in the 2010 Strategy and they submit regular progress reports to ONDCP. The Prescription Drug Abuse Prevention Plan (the Plan) was recently released by ONDCP and complements the Strategy by outlining the administration’s approach to addressing prescription drug abuse and misuse, in particular. As a result, it serves as a second interagency coordination mechanism for agencies addressing prescription drug abuse and also reflects the key practices for collaboration that we previously identified. The Plan establishes mutually reinforcing or joint strategies by identifying four priority areas for federal efforts to reduce prescription drug abuse and misuse—education, monitoring, proper disposal, and enforcement—and it aligns agencies’ activities around these four areas. For instance, the Plan calls for federal agencies and private stakeholders to work together to develop evidence-based public education campaigns about appropriate use, secure storage, disposal, and abuse of prescription drugs. Like the Strategy, the Plan also provides a means to monitor, evaluate, and report on results. 
ONDCP asked agencies to submit implementation plans with objectives and 1- and 2-year milestones and to provide progress reports on a quarterly basis. The Plan also calls for the establishment of a Federal Council on Prescription Drug Abuse to coordinate implementation of the Plan. Finally, the HHS Behavioral Health Coordinating Committee’s Subcommittee on Prescription Drug Abuse provides a third coordination mechanism for HHS agencies and demonstrates use of the key practices for collaboration as well. For instance, the Subcommittee defines a common outcome for HHS agencies by identifying five goals related to prescription drug abuse and organizing its activities around these goals. The Subcommittee also provides a means to monitor and report on progress to HHS leadership. Agency officials report on their activities related to prescription drug abuse and misuse to the Subcommittee co-chairs, who then provide updates to HHS leadership. Agencies have begun using these coordination mechanisms, or augmenting existing coordination efforts, within recent years. In 2009, ONDCP began using a more collaborative process for developing the Strategy, convening a Demand Reduction Interagency Working Group. The working group brought together subject experts from drug control agencies to provide input on the development of the Strategy, and DEA, FDA, HRSA, NIH, and SAMHSA all participated. Officials from multiple agencies indicated that this process was more interactive than in years past. The 2010 Strategy also used a new approach by outlining specific action items and developing a system to monitor agency progress toward objectives. ONDCP also released the Plan in April 2011, and agency officials described the process for developing the Plan as collaborative. One official described the amount of brainstorming between ONDCP and agencies to develop the Plan as “unprecedented.” Finally, the Subcommittee on Prescription Drug Abuse was formed in the summer of 2010. Officials explained that the Subcommittee provides a more regular and formal means of coordination, whereas prior efforts to coordinate within HHS were more irregular and informal. For instance, the Subcommittee helped institutionalize relationships among officials who work on prescription drug abuse and misuse across HHS. Officials noted that although they were previously aware of subject experts at other agencies, they now work together on related tasks through the Subcommittee and therefore have formal working relationships that they can draw on to work through issues. Subcommittee members added that they use their meetings to share programming information. For instance, prior to convening the Subcommittee, NIH officials told us that they were not fully aware of all of the prescriber education efforts across HHS agencies. Now, officials have created a group through the Subcommittee to catalogue related prescriber education programs. While officials from each agency we spoke with said that these coordination mechanisms were working well, ONDCP and other agency officials indicated that they were aware of the potential for creating too many coordinating bodies.
Though the Strategy’s Demand Reduction Interagency Working Group, the Plan’s new Federal Council on Prescription Drug Abuse, and the HHS Subcommittee on Prescription Drug Abuse may have membership from many of the same agencies, officials said they felt that they had not reached the point of too much coordination, noting that current coordination efforts were effective in terms of facilitating information sharing and avoiding overlapping programming among agencies. Although agencies have increased their coordination efforts in recent years, they have missed opportunities to leverage resources—a key practice for effective coordination—among similar education efforts targeting teens, forgoing additional benefits that coordination could have provided. For at least four teen initiatives—Just Think Twice, NIDA for Teens, the National Youth Anti-Drug Media Campaign, and Not Worth the Risk; Even if it’s Legal—agencies obtained feedback from teens and other stakeholders about the features of and messages for educational efforts about prescription drug abuse and misuse, but did not share that feedback with one another. For instance, NIH formed a Teen Advisory Group to pretest its messages and also seeks input from local high school students, including groups such as Students Against Destructive Decisions. These focus groups revealed information that could be useful to other teen education efforts, addressing topics such as web and materials design, video content, language and terminology, and messaging. For instance, one focus group revealed that trying to imitate the layout of social networking sites (e.g., MySpace or Facebook) did not make sites more appealing to teen users. DEA, ONDCP, and SAMHSA also get feedback on their teen education efforts. DEA gets feedback from the Drug Abuse Resistance Education (D.A.R.E.) Youth Advisory Board on content for Just Think Twice and also gets feedback from DEA field staff who give presentations to teens about drug abuse. DEA officials told us that the feedback on the website they receive from field staff often varies depending on the part of the country in which field staff give school presentations, with field staff in San Diego reporting different successful approaches than those in Miami. ONDCP also pretests content and features with teens. One lesson derived from ONDCP’s pretesting efforts is that teens liked the option to view content posted by their peers on the website, such as photos or stories. Finally, SAMHSA gets input from professional and student groups through its Project Advisory Team for Not Worth the Risk; Even if it’s Legal. For instance, the Project Advisory Team advised SAMHSA on a number of issues related to addressing prescription drug abuse and misuse among teens, including the importance of acknowledging and validating common stressors teens face to establish credibility and to create an opportunity to address alternative coping skills. While each of these agencies obtained feedback on the messages for and features of similar initiatives, agencies did not share the results of their feedback sessions or pretesting efforts with officials from other agencies who work on similar programs. Officials said they did not share the feedback they received for two reasons. First, NIH, DEA, and SAMHSA officials said that they were never asked to do so by other agencies with similar education efforts.
Second, ONDCP officials said that they felt that the results of their pretesting would not be useful for other educational efforts. Nonetheless, one official acknowledged that sharing findings from pretesting efforts and other feedback sessions could have been useful when developing the content and messages for their educational effort. NIH officials said that there are two coordination mechanisms through which they could share information among agencies involved with educational efforts in the future. In addition to its Subcommittee on Prescription Drug Abuse, the HHS Behavioral Health Coordinating Committee also has a Communications Subcommittee, and NIH officials said that they can use the Communications Subcommittee to share information about the development of educational efforts among HHS agencies. NIH officials said that they also have the opportunity to share information with agencies outside of HHS through weekly phone calls that ONDCP facilitates with communications staff from DEA, NIH, ONDCP, and SAMHSA, among other agencies. Abuse and misuse of prescription pain relievers is a large and growing public health problem in the United States. Although DEA, FDA, NIH, ONDCP, and SAMHSA are engaged in multiple efforts to educate the public about prescription pain reliever abuse and misuse, there is limited evidence about how to craft effective messages about this issue. The agencies agree that education about prescription drug abuse and misuse requires a different approach than other drug prevention efforts, but there is a lack of proven strategies and messages on which agencies can model their own educational efforts to ensure that such efforts will have the desired outcome. In the absence of a strong evidence base, establishing outcome metrics is an especially important key practice to incorporate into the development of educational efforts because outcome metrics provide feedback on the effectiveness of agencies’ efforts at preventing prescription pain reliever abuse and misuse. However, seven of the nine public education efforts that we reviewed did not assess program outcomes. This leaves federal agencies with limited knowledge as to whether such efforts are effective. Given these challenges, there is much to be gained from continued and robust coordination among similar education efforts about prescription pain reliever abuse and misuse. In its role as a coordinating body for federal drug control efforts, ONDCP is uniquely situated to ensure that federal educational efforts are not duplicative and are effectively coordinated. DEA, NIH, SAMHSA, and ONDCP operate similar educational initiatives—including three websites and a brochure series—targeting teens. While agency officials told us that the similar educational efforts we reviewed are reinforcing, it is important that agencies continue to coordinate their efforts as additional planned educational efforts are implemented to avoid duplicative programming. Although agencies involved in educating the public have recently increased their coordination efforts, they have missed opportunities to share the results of teen and stakeholder feedback among similar efforts—a key practice for effective coordination. In developing their educational efforts, DEA, NIH, ONDCP, and SAMHSA obtained feedback from their target audience and other stakeholders that could be useful for other agencies to consider in relation to their own efforts.
Although each educational effort has unique features, comments from focus group participants and other stakeholders could have yielded lessons for other agencies if summaries of those comments had been made available for review. As additional public education efforts are developed, agencies will need to leverage resources, including sharing lessons learned from the development and implementation of existing educational efforts, to ensure that they make the best use of limited resources. To ensure that federal efforts to prevent the abuse and misuse of prescription pain relievers are an effective and efficient use of limited government resources, we recommend that the Director of ONDCP take the following three actions:
Establish outcome metrics and identify resources for conducting outcome evaluations for the national education campaigns about prescription drug abuse and safe storage and disposal proposed in the Prescription Drug Abuse Prevention Plan.
Develop and implement a plan to evaluate outcomes from the proposed national education campaigns.
Ensure that federal agencies undertaking similar educational efforts leverage available resources and use coordination mechanisms to share information on the development of their efforts.
We provided a draft of this report to ONDCP, the Department of Justice, and HHS for their review and comment. In written comments, reproduced in appendix VI, ONDCP did not explicitly agree or disagree with our recommendations, but noted that it will continue to work for improved coordination of prescription drug abuse educational efforts and evaluation of outcomes. ONDCP also stated that the prescription drug abuse educational efforts that we reviewed target different populations and address different messages, and suggested that we explain the differences among these efforts in our report. We revised our report to include an additional reference to our detailed descriptions of the various educational efforts we reviewed, which explain the scope, target audiences, and mediums used among the educational efforts. We also included additional information about the size and scope of ONDCP’s National Youth Anti-Drug Media Campaign. ONDCP, DEA, and HHS also provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to the Director of the Office of National Drug Control Policy, the Attorney General, and the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. To help address the abuse and misuse of prescription pain relievers, drug manufacturers are developing formulations of these drugs that are specifically designed to deter abuse. We are presenting information on different types of abuse-deterrent formulations of prescription pain relievers, whether they are being used in fiscal year 2011, and challenges related to these products.
This appendix is based on our review of scientific literature and Food and Drug Administration (FDA) and manufacturer documents, as well as interviews with FDA officials and representatives of Purdue Pharma L.P., the manufacturer of OxyContin. Manufacturers of prescription pain relievers have long sought to achieve a balance between creating drugs that are effective for therapeutic use and minimizing their potential for abuse. The scientific articles noted that, in general, abusers seek out drugs that can be smoked, snorted, or taken intravenously, thus providing a more rapid onset of the effects of the drug. Therefore, some manufacturers of prescription pain relievers have focused on making their products tamper resistant, so that the physical or chemical makeup or delivery system of the drug cannot be altered, with the goal of preventing users from accessing and abusing the active ingredient. There are five different types of abuse-deterrent formulations manufacturers have developed to reduce tampering and abuse of their products, though some drugs may incorporate multiple types. Some of these abuse-deterrent formulations are already being used in prescription pain relievers, others are being incorporated into pain relievers that are in the process of being developed, and others have been used in other types of products, but not prescription pain relievers. In general, the different classifications include:
Physical/Chemical Barriers – Such barriers impart physical or chemical properties to a drug so that it resists manipulation via chewing, grinding, and mixing with alcohol or other common solvents, thus making extraction of the active ingredient difficult. A reformulated version of the prescription pain reliever OxyContin that is currently marketed uses this type of barrier to deter abuse.
Agonist/Antagonist Combinations – Combinations of agonists and antagonists, in which the antagonist mitigates, blocks, or reverses the effect of the agonist (the opioid) if the drug is manipulated. Two prescription pain relievers that were marketed in fiscal year 2011 use this barrier to deter abuse: Talwin Nx and Embeda (see sidebar).
Aversion – A combination of substances designed to produce an unpleasant effect if a tampered form is ingested or a higher dosage than directed is used. For example, one formulation designed in the past added niacin to a prescription pain reliever to dissuade abusers because, in high doses, niacin causes headache, sweating, chills, flushing, and general discomfort. While at least one manufacturer has designed prescription pain relievers using the aversion method of abuse deterrence in the past, FDA officials told us that no such prescription pain relievers are currently marketed.
Delivery System – The method of delivery or drug release design can be used as an abuse deterrent. For example, a depot injection—an injection that releases its active ingredient over a sustained period—or a subcutaneous implant can be more difficult to tamper with. FDA officials told us that they were not aware of any prescription pain relievers currently marketed that were designed to be abuse-deterrent by way of a delivery system.
Prodrug – Prodrug compounds must undergo biotransformation to activate the active ingredients. For example, they may be formulated so that they are only activated if they are metabolized in the digestive system, so that the drug will not be activated if, for example, it is taken intravenously.
FDA officials told us that they were not aware of any prescription pain relievers currently marketed that were designed to be abuse-deterrent by way of a prodrug design. Manufacturers face a number of challenges related to abuse-deterrent formulations of prescription pain relievers. First, there are technical challenges in developing formulations of prescription pain relievers that deter abuse, but still have the intended effect of providing pain relief. For example, it took Purdue Pharma L.P. approximately 9 years to develop a reformulated version of OxyContin that both effectively provided pain relief and displayed abuse-deterrent properties. Another challenge for manufacturers relates to the extent to which they will be allowed to market their new products as reducing or deterring abuse. FDA officials told us that until postmarketing studies demonstrate a product’s effectiveness in reducing abuse in the general population, manufacturers cannot market their products accordingly, but they are allowed to make marketing claims based on the product’s abuse-deterrent features as demonstrated in clinical trials. For example, Embeda’s label includes information on the results of clinical trials testing its abuse-deterrent features, but also states that the abuse-deterrent characteristics of the product “have not been shown to reduce the abuse liability of Embeda.” Finally, a manufacturer indicated that insurers may be reluctant to provide coverage for abuse-deterrent formulations of drugs when less expensive, nondeterrent alternatives are available, and that this could minimize their usage and, ultimately, their impact on abuse in the general population. FDA faces a number of challenges related to approving and assessing the safety and effectiveness of abuse-deterrent formulations of prescription pain relievers. According to FDA officials, one of these challenges is balancing the abuse-deterrent properties of a drug with its safety in the general patient population. Another challenge for FDA is developing standards and methods for determining if the products are, in fact, abuse deterrent. FDA officials told us that the agency is currently developing guidance for manufacturers on the development of abuse-deterrent formulations and on the postmarket assessment of their performance. However, officials told us that standard guidance is difficult to develop because potential types of abuse-deterrent formulations are so varied that the criteria used to evaluate one drug may not be applicable for another. FDA indicated that it requires manufacturers of prescription pain relievers that want to make abuse-deterrent claims about their products to conduct postmarket epidemiological studies to assess the effectiveness of their drugs in deterring abuse in the general population. However, in developing methods for assessing the effectiveness of a particular drug in deterring abuse, FDA officials told us that they, like manufacturers, are challenged by limitations in available data. FDA officials said that consistent and clear definitions of abuse across data sources are lacking and that most data sources with information on prescription pain reliever abuse do not distinguish between products from different manufacturers, which can make it difficult to assess the effectiveness of a specific drug in deterring abuse. For example, data sources that measure abuse and its consequences may not distinguish between OxyContin and other drugs that contain oxycodone.
Further, FDA is challenged in determining what degree of decrease in some measurable outcomes of abuse would be sufficient to label a drug as being able to reduce actual abuse. Finally, there are inherent challenges related to abuse-deterrent formulations of prescription pain relievers and their overall impact on abuse. First, a drug that deters one type of abuse might not necessarily deter another type of abuse. For example, the new formulation of OxyContin is designed to deter abuse via injection or snorting (see sidebar), but is not a deterrent for those who abuse the product via oral ingestion of whole tablets. A related challenge is that while technology may deter the abuse of one particular prescription pain reliever, an abuser may instead seek another prescription pain reliever (either a different formulation of the same pain reliever or a different pain reliever altogether) that is not designed to be abuse deterrent. An abuser may even seek out another opioid, such as heroin. As part of its responsibilities related to enforcing the Controlled Substances Act, the Drug Enforcement Administration (DEA) sets limits, called quotas, on the quantity of schedule I and II controlled substances that may be produced in the United States in any given calendar year. Quotas are a component of the closed system of distribution that exists under the Controlled Substances Act. In this appendix, we present an overview of the closed system, as well as information on the three types of quotas: aggregate production quotas (APQ), bulk manufacturing quotas, and procurement quotas. The information in this appendix is based on our review of DEA documents and interviews with DEA officials. Under the Controlled Substances Act, DEA maintains a closed system requiring any person who manufactures, dispenses, imports, exports, or conducts research with controlled substances to register with DEA (unless exempt), periodically inventory all stocks of controlled substances, provide effective security controls, and maintain records to account for all controlled substances manufactured, imported, exported, received, distributed, or otherwise disposed of. DEA officials said that the closed system, including quotas, is designed to reduce the amount of pharmaceuticals that are diverted for illicit purposes, while also ensuring an adequate and uninterrupted supply of controlled substances for legitimate medical needs. They said that both legitimate and illegitimate users of prescription pain relievers often acquire the drugs from the same source—from doctors or other practitioners who prescribe or dispense them. For example, diversion of prescription pain relievers may occur through methods such as doctor shopping, thefts from medicine cabinets, improper prescribing, and forged prescriptions. According to DEA, quotas are a tool used at the beginning of the closed system to manage and prevent diversion of controlled substances, such as the substances used to make prescription pain relievers, during their legitimate scientific, medical, and industrial applications. While DEA is authorized to control the overall amount of controlled substances available, according to DEA officials, it is ultimately for practitioners and their regulating bodies to ensure that these substances are prescribed appropriately. 
While officials said that they do seek to account for known diversion when setting APQs, they said that establishing quotas based on known diversion for the purpose of reducing the availability of prescribed drugs will not appreciably affect diversion at the retail level and may prevent legitimate patients from having access to medication for legitimate medical needs. DEA officials said that the APQ is the first type of quota that DEA sets each year. The APQ specifies the maximum amount of each basic class of controlled substance listed in schedule I or II that can be produced for specified needs in the United States in a given year, thus limiting the amount of bulk raw materials available for use in the manufacture of prescription pain relievers. For example, methadone is a controlled substance that is used in the manufacture of drugs for addiction treatment as well as in the manufacture of the prescription pain relievers Dolophine and Methadose, and multiple generic equivalents. In 2010, DEA set the final APQ for methadone at 20,000,000 grams. Therefore, this is the maximum amount of methadone that could be available for manufacturing addiction treatment drugs and prescription pain relievers that use this substance, as well as for other authorized uses, in the United States in 2010. DEA officials said that they consider data from many sources when determining the APQ, including estimates of the legitimate medical need for each substance from FDA, estimates of retail consumption based on prescriptions dispensed from IMS Health, companies’ production history and forecasts, data from DEA’s own internal system for tracking controlled substances transactions, and past quota histories for each substance. DEA officials said that DEA scientists draw on their professional expertise and experience when considering all available data to recommend the appropriate APQ for a substance. DEA then publishes the proposed APQ for each substance for the following calendar year in the Federal Register, and, after receiving and reviewing comments, DEA publishes a final order determining the APQ for that year. DEA can revise the APQ midyear if legitimate changes in U.S. manufacturing requirements, such as increased sales or exports, new manufacturers entering the market, new product development, or product recalls, warrant a change. For example, in 2010, DEA revised the APQ for methadone, decreasing it from an initial APQ of 25,000,000 grams to the revised APQ of 20,000,000 grams. Officials said that when determining the APQ they also consider losses of controlled substances that occur through diversion activities by considering known and reported thefts and losses, case seizures, and information from national databases of drug evidence, such as analysis from DEA and other forensic laboratories and law enforcement entities. DEA can reduce the APQ based on the quantity of seized or diverted material. The APQ may also be decreased because of DEA enforcement actions that impact sales data, such as by shutting down rogue pain clinics, thus reducing the amount of controlled substances purchased by such entities. DEA officials said that because sales data are one factor considered in determining the APQ, these actions may ultimately lead to a reduction in the APQ. DEA said that in rare instances the APQ may also be increased as a result of diversion activities.
For example, if a large quantity of a controlled substance is stolen from a manufacturer, the APQ may need to be raised to ensure that sufficient quantities of that substance will be available to meet the nation’s ongoing legitimate medical needs. When determining the following year’s APQ, DEA considers such circumstances to ensure that the APQ remains at an appropriate level to meet legitimate need and may reduce the APQ in relation to the previous year’s to account for the known diversion. In addition to the APQ, DEA sets two types of quotas for individual companies: bulk manufacturing quotas and procurement quotas. Bulk manufacturing quotas limit the amount of a basic class of schedule I or II controlled substance that an individual company can extract or synthesize from plant material or other controlled substances. According to DEA officials, once the initial APQ for a substance for a calendar year is set by DEA, individual companies apply to DEA for bulk manufacturing quotas for specific controlled substances to produce the bulk raw materials that are used in prescription pain relievers for that same year. Officials said that separate quotas are issued for each DEA-registered facility that manufactures a controlled substance, even if the same company operates multiple manufacturing facilities. In 2010, five facilities received bulk manufacturing quotas for methadone, with quota levels ranging from 4 grams to 12,000,000 grams. The sum of the bulk manufacturing quotas for all companies for a particular controlled substance cannot exceed the APQ for that substance in a given year. DEA officials said that they use a variety of data sources—including internal DEA data, IMS Health data, and data provided by the company—to determine the bulk manufacturing quota for a company. DEA’s Office of Diversion Control also reviews the company for any pending administrative, civil, or criminal action. DEA officials said that DEA scientists draw on their professional expertise and experience when considering all available data to recommend an appropriate bulk manufacturing quota, which is then issued to the company by letter. Bulk manufacturing quotas can be revised through a process similar to that used in setting the initial quotas, in that a company submits an application to revise its quota and must include supporting documentation. DEA officials said that DEA does not generally initiate changes to bulk manufacturing quotas on its own. DEA also establishes procurement quotas, which limit the amount of a basic class of schedule I or II controlled substance that an individual company can procure from a manufacturer of bulk raw materials in order to manufacture individual dosage units of a medicine, such as a prescription pain reliever. Individual companies must apply to DEA for procurement quotas for each specific basic class of controlled substance, and DEA officials told us that separate quotas are issued for each facility that procures a controlled substance. For example, according to DEA data, 52 facilities received procurement quotas for methadone in 2010, with quota levels ranging from 1 gram to 9,000,000 grams. Sometimes an individual company may be engaged in both bulk manufacturing and procurement activities for the same controlled substance. In this case, the company will apply for both a bulk manufacturing quota and a procurement quota.
DEA officials said they use the same process and data sources, as described above, to determine appropriate levels for procurement quotas as for bulk manufacturing quotas. Officials said that DEA does not always set a company’s bulk manufacturing quota or procurement quota at the level the company requested. For example, if a registrant is suspected of unlawfully diverting controlled substances, DEA will take this factor into consideration when determining whether to grant or deny the quota request. In addition, DEA may set the quota lower than requested if a company has set its quota request based on projected sales figures, which can inflate the quantity of quota requested, rather than on actual sales figures. DEA officials said that the agency uses actual sales and inventory figures in its evaluation of bulk manufacturing or procurement quota applications, and it grants quotas in line with legitimate medical need. In the past, we have reported that DEA cited difficulties in determining an appropriate level for quotas to ensure that adequate quantities are available for legitimate medical need, as there are no direct measures available to establish such need. DEA officials said that, based on the available prescription and sales data, there is no method to calculate which prescriptions are issued for a legitimate medical purpose by a practitioner acting in the usual course of professional practice and which are not. They noted that if DEA were to reduce a quota level by some percentage to account for estimated illegitimate prescriptions or to otherwise reduce a quota by an amount estimating how much of the substance is abused and misused, the action would only reduce the total amount of substance available for dispensing, and would not affect to whom or in what quantities the drugs are prescribed or dispensed. Therefore, DEA officials said that a reduction in the supply of a drug based upon estimated illegitimate prescriptions or abuse and misuse could result in a shortage of the substance for legitimate purposes, while not affecting illicit demand for the substance at all. As a result, officials said that the agency does not use the quota process as a tool to reduce demand or to help prevent abuse and misuse of prescription pain relievers. This report (1) describes recent national trends in prescription pain reliever abuse and misuse, (2) describes how federal agencies are educating prescribers about prescription pain reliever abuse and misuse, (3) assesses the extent to which federal agencies follow key practices for developing public education efforts about prescription pain reliever abuse and misuse, and (4) identifies educational efforts that use similar strategies and assesses how agencies coordinate those efforts. To conduct this work, we interviewed officials and reviewed documents, as described below for each objective. In addition, to gain context on the challenge of addressing the problem of abuse and misuse of prescription pain relievers while ensuring access to these pain relievers for legitimate medical use, we interviewed officials from the American Pain Foundation and Purdue Pharma L.P., the manufacturer of the prescription pain reliever OxyContin.
To describe recent national trends in prescription pain reliever abuse and misuse, we interviewed officials from the Centers for Disease Control and Prevention (CDC), DEA, FDA, the National Institutes of Health’s (NIH) National Institute on Drug Abuse (NIDA), the Office of National Drug Control Policy (ONDCP), and the Substance Abuse and Mental Health Services Administration (SAMHSA). We also conducted a literature review to identify relevant data sources and explanations for trends in prescription pain reliever abuse and misuse and analyzed related data from several data sources representative of the U.S. population aged 12 years and older. We included data in our review from the Drug Abuse Warning Network (DAWN), the National Vital Statistics System (NVSS), the Treatment Episode Data Set (TEDS), and the National Survey on Drug Use and Health (NSDUH). We selected these four data sources because they are the data sources that the agencies we interviewed use for monitoring trends in prescription pain reliever abuse and misuse, and because they are nationally representative. We analyzed data for calendar years 2003-2009, the most recent years for which data from at least three data sources were available. DAWN, a public health surveillance system operated by SAMHSA, provides annual national estimates of drug-related emergency department visits, including visits involving the abuse or misuse of prescription pain relievers. These national estimates are produced from data DAWN collects from a national sample of general, nonfederal hospitals operating 24-hour emergency departments. For each sample hospital, a trained DAWN reporter conducts a retrospective review of a random sample of emergency department medical records to identify emergency department visits that involved recent drug use. The number of visits may not directly represent the number of individuals who have visited emergency departments in a given year, since some patients may have more than one visit in a year. Emergency department medical records may vary in specificity and detail. For example, prescription pain reliever abuse and misuse may be overreported if the medical record is unclear about whether an individual was abusing or misusing a prescription pain reliever, or taking it as prescribed while abusing or misusing another drug. Conversely, prescription pain reliever abuse and misuse may be underreported if the abuse or misuse of a regularly prescribed prescription pain reliever is not recognized or documented by the clinician. Because of changes to the DAWN methodology for 2004, we were not able to look at trends in DAWN data prior to that year. NVSS, operated by CDC, receives and compiles data from all death certificates filed in the United States each year, including deaths involving prescription pain relievers. The cause-of-death section of the death certificate is completed by local medical examiners, coroners, or attending physicians, and the information is then coded by the states, or in some cases by CDC, and submitted to CDC, where it is further processed and coded, if necessary. NVSS data on overdoses include both the mechanism of injury leading to death (such as poisoning by certain substances) and the manner or intent, including unintentional, suicide, homicide, undetermined, and legal intervention or war.
Although CDC sometimes reports on all manners of overdose deaths combined, we focused only on unintentional deaths because they match most closely with our definition of abuse and misuse. CDC officials said that some jurisdictions may undercount unintentional overdose deaths involving prescription pain relievers because of inconsistent use of toxicological lab tests, which may result in listing a death as a drug overdose death with no drugs specified on the death certificate, and because of other inconsistencies among jurisdictions, such as how they determine whether deaths are unintentional. NVSS data for 2009 were not published in time for inclusion in this report. TEDS, compiled by SAMHSA, gathers data on admissions to substance abuse treatment facilities nationwide, including data about the substances being abused by the person being admitted to treatment, such as prescription pain relievers. TEDS does not include all admissions to substance abuse treatment. It includes admissions at facilities that are licensed or certified by the state substance abuse agency to provide substance abuse treatment (or are administratively tracked for other reasons). In general, facilities reporting TEDS data are those that receive state alcohol or drug agency funds (including federal block grant funds) for the provision of alcohol or drug treatment services. Data about admissions are initially gathered by the facilities themselves and then collected by states and transmitted to a national data center. The number of admissions does not directly represent the number of individuals who have been admitted to treatment in a given year, because an individual admitted to treatment twice within a calendar year would be counted as two admissions. While treatment facilities included in TEDS account for a significant portion of treatment admissions nationwide, SAMHSA officials told us that no nationwide estimates are available of admissions to private, for-profit facilities or of the number of individuals being treated for substance abuse by physicians who have been approved to independently treat opioid addiction in an office-based setting. Therefore, SAMHSA officials told us that TEDS data underreport the number of individuals seeking treatment for prescription pain reliever abuse and misuse in the United States, especially among populations that have the resources to seek treatment from private facilities or physicians. In addition, the facilities and populations included in the data each state reports to TEDS are affected by state regulations and funding priorities. For example, some states report data from hospital- and prison-based treatment facilities, while others do not. Finally, some states may target certain populations, such as teenagers, with their limited funds for addiction treatment, meaning that these populations may be more heavily represented in the data from those states. NSDUH, an annual survey sponsored by SAMHSA, provides annual national estimates about the use of illicit drugs, alcohol, and tobacco in the civilian, noninstitutionalized population of the United States aged 12 years old or older, including estimates about the abuse and misuse of prescription pain relievers. These national estimates are produced from data NSDUH collects through a national household survey, which involves in-person interviews with sampled respondents.
SAMHSA officials reported that NSDUH may underestimate the extent of drug use, including prescription pain reliever abuse and misuse, both because surveyed individuals may underreport their drug use and because the sample may not include some individuals at high risk for drug use. We have reported on these limitations in the past. While NSDUH incorporates strategies intended to increase respondents’ cooperation and willingness to report honestly and accurately, such as use of computer-assisted interviewing methods, it is not possible to know the extent of underreporting within NSDUH data. However, SAMHSA officials told us that when looking at trended data, underreporting is not a problem because it is assumed to be constant over time. To assess the reliability of these data for our purposes, we reviewed related documentation and conducted interviews with knowledgeable agency officials from CDC and SAMHSA to learn about data collection, quality control, and any limitations of these data sources. We also conducted electronic and manual data testing to ensure the quality of the data. We determined that all data we assessed were sufficiently reliable to provide overall trends for the purposes of our review. To describe how federal agencies are educating prescribers about prescription pain reliever abuse and misuse, we reviewed the 2010 National Drug Control Strategy and interviewed officials involved with federal prevention efforts to identify strategies used to educate prescribers during fiscal year 2011. We then interviewed officials from FDA, NIH, and SAMHSA and reviewed agency websites and documents to describe educational strategies used by these agencies. Because they are involved in federal prevention efforts, we also interviewed officials from DEA, the Health Resources and Services Administration (HRSA), ONDCP, and the American Medical Association about gaps in current prescriber education efforts and efforts to fill these gaps through mandatory prescriber education. We excluded agencies that support their own health care systems, such as the Bureau of Prisons, Department of Defense, Indian Health Service, and Department of Veterans Affairs, from the scope of our review as they serve special populations, rather than the general public. We also excluded educational efforts related to drug abuse treatment, including education about the use of the prescription pain relievers methadone or buprenorphine in the treatment of opioid addiction. To assess the extent to which federal agencies follow key practices for developing public education efforts about prescription pain reliever abuse and misuse, we reviewed the 2010 National Drug Control Strategy and interviewed officials involved with federal prevention efforts to identify efforts to educate the general public during fiscal year 2011. We then interviewed officials from DEA, FDA, NIH, ONDCP, and SAMHSA and reviewed agency websites and documents to gather evidence about how agencies developed public education efforts and then compared the development of these educational efforts against key practices for developing consumer education efforts from our prior work, Digital Television Transition: Increased Federal Planning and Risk Management Could Further Facilitate the DTV Transition (see table 3).
We also consulted the Department of Health and Human Services (HHS) publication, Making Health Communications Programs Work, for additional information about best practices for developing public education initiatives. We also spoke with the National Council on Patient Information and Education to gain information about best practices for public health education. We identified nine efforts in fiscal year 2011 to educate the general public about prescription pain reliever abuse and misuse (see table 4). To determine recent trends in prescription pain reliever abuse and misuse, we analyzed trends from 2003 to 2009 in four key measures used to monitor prescription pain reliever abuse and misuse. These measures include emergency department visits (see table 5), admissions to substance abuse treatment facilities (see table 6), and unintentional overdose deaths (see table 7) involving prescription pain relievers, as well as the number of individuals who reported abusing or misusing prescription pain relievers in the past year (see table 8). In addition to the contact named above, Thomas Conahan (Assistant Director), Katherine L. Amoroso, Emily Binek, George Bogart, Cathleen Hamann, Regina Lohr, and Leslie Powell made key contributions to this report.
Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-11-699. Washington, D.C.: September 6, 2011.
Prescription Drug Control: DEA Has Enhanced Efforts to Combat Diversion, but Could Better Assess and Report Program Results. GAO-11-744. Washington, D.C.: August 26, 2011.
Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-957. Washington, D.C.: September 9, 2009.
Methadone-Associated Overdose Deaths: Factors Contributing to Increased Deaths and Efforts to Prevent Them. GAO-09-341. Washington, D.C.: March 26, 2009.
Anabolic Steroid Abuse: Federal Efforts to Prevent and Reduce Anabolic Steroid Abuse among Teenagers. GAO-08-15. Washington, D.C.: October 31, 2007.
ONDCP Media Campaign: Contractor’s National Evaluation Did Not Find That the Youth Anti-Drug Media Campaign Was Effective in Reducing Youth Drug Use. GAO-06-818. Washington, D.C.: August 25, 2006.
Internet Pharmacies: Some Pose Safety Risks for Consumers. GAO-04-820. Washington, D.C.: June 17, 2004.
Prescription Drugs: OxyContin Abuse and Diversion and Efforts to Address the Problem. GAO-04-110. Washington, D.C.: December 23, 2003.
Prescription Drugs: State Monitoring Programs Provide Useful Tool to Reduce Diversion. GAO-02-634. Washington, D.C.: May 17, 2002.
The Centers for Disease Control and Prevention has declared that the United States is in the midst of an epidemic of prescription drug overdose deaths, with deaths associated with prescription pain relievers of particular concern. To address this issue, federal agencies are raising awareness by educating prescribers and the general public. In response to your request, GAO (1) described recent national trends in prescription pain reliever abuse and misuse, (2) described how federal agencies are educating prescribers, (3) assessed the extent to which federal agencies follow key practices for developing public education efforts, and (4) identified educational efforts that use similar strategies and assessed how agencies coordinate those efforts. GAO interviewed officials and reviewed documents and websites from seven agencies involved in federal drug control efforts and analyzed the most recent data from several data sources related to prescription pain reliever abuse and misuse. GAO also assessed the development of public education efforts and federal coordination efforts against key practices from prior GAO work. Key measures of prescription pain reliever abuse and misuse increased from 2003 to 2009. The largest increases were in measures of adverse health consequences such as emergency department visits, substance abuse treatment admissions, and unintentional overdose deaths, though increases were not consistent across all measures. Federal officials suggested that increasing availability of prescription pain relievers and high-risk behaviors by those who abuse or misuse the drugs, such as combining prescription pain relievers with other drugs or alcohol, likely contributed to the rise in adverse health consequences, though data about the reasons for the increases are limited. The Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Substance Abuse and Mental Health Services Administration (SAMHSA) use a variety of strategies to educate prescribers about issues related to prescription pain reliever abuse and misuse, but officials told us that more education is needed. The strategies used include developing continuing medical education programs, requiring training and certification in order to prescribe certain drugs, and developing curriculum resources for future prescribers. The Office of National Drug Control Policy (ONDCP) is working to develop a legislative proposal to require education for prescribers registering with the Drug Enforcement Administration (DEA) to prescribe controlled substances. Officials from some agencies said such a requirement would ensure all prescribers were starting from the same baseline of knowledge. In their efforts to educate the public about prescription pain reliever abuse and misuse, DEA, FDA, NIH, ONDCP, and SAMHSA used almost all of the key practices for developing their consumer education efforts. Agencies varied in how they used the key practices when developing these efforts, which varied in size, scope, and duration. All agencies established metrics to monitor the implementation and functional elements of their educational efforts, but only two agencies have established or are planning to establish metrics to assess the impact of their efforts on audiences’ knowledge, attitudes, and behavior. Without outcome evaluations, federal agencies have limited knowledge of how effective their efforts are in achieving their goals—in this case, reducing prescription pain reliever abuse and misuse. 
Among federal initiatives to educate prescribers and the public about prescription pain reliever abuse and misuse, GAO found several instances of agencies engaging in similar efforts directed at similar target audiences and using similar media. Officials said that these similarities in public education efforts are beneficial in addressing prescription drug abuse and misuse because multiple, reinforcing messages about the same subject are valuable in public health communications and because federal agencies provide slightly different perspectives on the issues surrounding prescription drug abuse and misuse. Similarly, the prescriber education programs GAO identified, though alike in approach, differ in content and focus. Though these similar programs have the potential to be duplicative if not effectively coordinated, federal agencies have recently begun to coordinate their educational efforts. Nevertheless, federal agencies have missed opportunities to share lessons learned and pool resources among similar education efforts.

GAO recommends that the Director of ONDCP establish outcome metrics and implement a plan to evaluate proposed educational efforts, and ensure that agencies share lessons learned among similar efforts. ONDCP did not explicitly agree or disagree with GAO's recommendations, but noted that it will continue to work toward improved coordination of educational efforts and evaluation of outcomes.
According to one study, in 2011, 54 percent of wage and salary workers aged 21-64 worked for an employer that sponsored a pension plan, such as a defined benefit (DB) plan or a DC plan, but only about 45 percent of wage and salary workers aged 21-64 actually participated in a plan. DB plans provide periodic benefits in retirement that are generally based on employees' salaries and years of service. Employers may also choose to sponsor DC plans, under which both employers and employees can make contributions to the plan; distributions in retirement are, in turn, based on contributions and investment returns in these accounts. Private sector employer-sponsored DB and DC plans are generally subject to the Employee Retirement Income Security Act of 1974 (ERISA), as amended, which establishes standards for private sector pension plans and sets forth protections for participants in these plans. The Department of Labor's Employee Benefits Security Administration (EBSA) generally administers and enforces the Title I provisions of ERISA. These employer-sponsored plans must also meet certain requirements in the Internal Revenue Code (IRC), which are enforced by the Internal Revenue Service (IRS). Individuals can also save for retirement through IRAs, which allow individuals to make contributions for retirement regardless of whether they are covered by an employer-sponsored plan. The IRS has primary responsibility for ensuring that IRAs meet IRC requirements necessary to qualify for preferential tax treatment.

Employers are continuing to shift away from sponsoring DB plans toward sponsoring DC plans. Data from the Department of Labor show that over the past few decades, DC plans have become the predominant plan type offered by private sector employers; in 2010, over 90 percent of all employer-sponsored plans were DC plans. IRAs have also grown in importance in recent years and are a key retirement savings vehicle for many individuals. According to data from the Investment Company Institute, in the first quarter of 2013, IRA assets represented a larger portion of total U.S. retirement assets than 401(k) plans, the main type of DC plan. Specifically, IRA assets totaled almost $5.7 trillion, or about 27 percent of U.S. retirement assets, while 401(k) assets accounted for about $3.8 trillion, or 18 percent. Rollovers from 401(k) plans and other employer-sponsored plans are the predominant source of contributions to IRAs: approximately 95 percent of money contributed to traditional IRAs in 2008 was attributable to rollovers, primarily from employer-sponsored plans. The greater reliance on DC plans and IRAs in the current retirement landscape indicates that tax incentives are increasingly relevant for promoting retirement saving.

Tax preferences for pension plans are structured to encourage individuals to save for retirement. Contributions to DC plans that fall within certain limits, as well as investment earnings on assets, are not taxed until distributions are paid to participants. There are also tax incentives for contributions to IRAs, of which there are two main types: traditional and Roth. A traditional IRA allows individuals to make tax-deductible contributions to their accounts, and distributions are generally subject to income tax. For traditional IRAs, deductions for contributions are subject to limits based on income and pension coverage.
Distributions made prior to age 59 1/2, other than under specific exceptions, are generally subject to an additional 10 percent tax. A Roth IRA allows individuals to make after-tax contributions to their accounts, and distributions after age 59 1/2 from accounts held at least 5 years are generally not subject to income tax. For Roth IRAs, distributions prior to age 59 1/2 are taxable on the portion attributable to earnings on contributions, with an additional 10 percent tax on distributions other than for specified purposes.

To further encourage low- and middle-income individuals and families to save for retirement, the Economic Growth and Tax Relief Reconciliation Act of 2001 authorized a nonrefundable tax credit (the Saver's Credit) of up to $1,000 against federal income tax. Eligibility is based on a worker's adjusted gross income (AGI) and contributions made to qualified pension plans and IRAs. The credit rate phases out as AGI increases, and the rate is applied to qualified contributions of up to $2,000 for individuals and $4,000 for households. The total Saver's Credit amount is equal to the amount of contributions multiplied by the credit rate (see table 1). Because the credit is nonrefundable, the ability to receive the full amount is limited not only by income eligibility but also by whether the taxpayer's tax liability is large enough. In fiscal year 2012, the Saver's Credit cost the federal government about $1.1 billion in revenue forgone, and revenue losses are estimated to amount to $6.2 billion for fiscal years 2013-2017, according to the Department of the Treasury.

To expand retirement savings incentives for working families, the President's fiscal year 2010 and 2011 budgets proposed modifying the Saver's Credit by (1) making the credit refundable, (2) eliminating the phase-out of the credit rate so that all eligible households would receive a 50 percent credit rate, and (3) extending the AGI limit to $85,000 for married couples filing their federal income taxes jointly. The estimated cost of this expansion was $323 million for fiscal year 2011 and $29.8 billion for fiscal years 2011-2020, according to the President's 2011 budget proposal; the Joint Committee on Taxation estimated that the Administration's 2011 proposal would have cost about $27.5 billion for the same period. These modifications were not adopted and were not included in the President's budget proposals for fiscal years 2012 through 2014. However, bills introduced during the current and immediately preceding legislative sessions include provisions for similar modifications to the Saver's Credit, including making the credit refundable and increasing the AGI range for taxpayers eligible for the 50 percent credit rate.
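The nonrefundability limit described above can be illustrated with a minimal sketch. The $2,000 per-individual contribution cap and the cap at tax liability come from the report; the function interface and the example figures are invented for illustration, and joint filers' caps would double.

```python
def savers_credit(contribution, credit_rate, tax_liability, refundable=False):
    """Illustrative Saver's Credit calculation for one taxpayer.

    contribution:  qualified retirement contributions for the year
    credit_rate:   0.50, 0.20, or 0.10, depending on AGI and filing status
    tax_liability: federal income tax owed before the credit
    refundable:    False models the existing credit; True models the
                   proposed modification
    """
    qualified = min(contribution, 2_000)   # per-individual contribution cap
    credit = qualified * credit_rate
    if not refundable:
        # A nonrefundable credit cannot exceed the tax otherwise owed.
        credit = min(credit, tax_liability)
    return credit

# A low-income filer with little tax liability loses most of the
# nonrefundable credit but would keep it under the refundable design.
print(savers_credit(2_000, 0.50, tax_liability=120))                    # 120.0
print(savers_credit(2_000, 0.50, tax_liability=120, refundable=True))  # 1000.0
```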
Automatic enrollment has been advocated as a way to encourage greater participation in 401(k) plans, and our past work has shown that automatic enrollment policies considerably increased 401(k) participation rates for plans that adopted them. Automatic IRAs would similarly extend the benefits of payroll-deduction savings and automatic enrollment. Legislative proposals to establish automatic IRAs have been introduced in both the House of Representatives and the Senate in recent legislative sessions, and the President's budgets for fiscal years 2010 through 2014 included a proposal to establish automatic IRAs.

Under the 2012 automatic IRA legislative proposal, certain employers that do not maintain a qualified pension plan would be required to make an automatic IRA arrangement available to eligible employees. Typically, under this proposal, 3 percent of the employee's salary would be automatically contributed to an IRA through payroll deduction unless the employee elected to terminate participation. Investment options for automatic IRAs would be limited to certain types of funds, such as principal preservation and target-date or life cycle funds, and target-date funds would be the default investment option. If the employer elected, employees would also have the option of using their contributions to purchase a retirement bond, which would provide a low-cost investment option in which small account balances are pooled until they are large enough to be profitable in the private market. The proposal also includes a tax credit to help small employers recoup the start-up costs of establishing and maintaining automatic IRAs, such as setting up automatic payroll deduction for employees. Automatic IRAs would also introduce costs to the federal government through the loss of federal income tax revenue: according to the President's 2014 budget proposal, revenue losses for automatic IRAs are estimated at $1.1 billion for fiscal year 2015 and $17.6 billion for fiscal years 2014 through 2023, if the proposal were enacted.

We found that households that do not save for retirement had lower AGI than those that do, regardless of age group or tax filing status (see fig. 1). Based on our analysis of the SCF data, we estimate that in 2010 approximately 43 percent of households working in the private sector did not have a DC plan or IRA. We also estimate that the median AGI of households that did not have a DC plan or IRA was $32,000, compared to $75,000 for those that did, a difference of about $43,000. These trends held across age groups and tax filing statuses in 2010. In addition, we estimate that 56 percent of single households did not have a DC plan or IRA, compared with 36 percent of married households, and married households' AGI was consistently higher than single households' AGI.

Households without DC plans or IRAs also had lower median marginal tax rates than households with them. A marginal tax rate is the rate of tax paid on the next dollar of income that a taxpayer earns. Generally, households with higher earnings have higher marginal tax rates, and tax incentives are worth more to these households. Based on our analysis of the 2010 SCF data, we estimate the median marginal tax rate to be 25 percent for households with DC plans or IRAs and 15 percent for households without such savings vehicles. Across age groups and tax filing statuses, more households with higher median marginal tax rates took advantage of tax incentives for retirement saving than households with lower median marginal tax rates (see fig. 2). For example, married households age 45 to 54 without a DC plan or IRA had a median marginal tax rate of 15 percent, while those that did save had a median marginal tax rate of 25 percent. Single households age 45 to 54 without a DC plan or IRA had a median marginal tax rate of 15 percent, compared to 22 percent for those that did save.
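To make the marginal-rate comparison concrete, the following is a small sketch using an invented bracket schedule (actual schedules in our analysis come from TAXSIM, discussed in appendix I); only the 15 and 25 percent rates are drawn from the estimates above.

```python
# Hypothetical progressive schedule: (upper bound of bracket, rate).
BRACKETS = [(20_000, 0.10), (60_000, 0.15), (120_000, 0.25), (float("inf"), 0.28)]

def tax(income):
    """Tax owed under the hypothetical schedule above."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

def marginal_rate(income, step=1.0):
    """Tax paid on the next dollar of income."""
    return (tax(income + step) - tax(income)) / step

# A deductible $2,000 contribution saves more tax at a higher marginal rate,
# which is why tax incentives are worth more to higher-earning households.
for agi in (32_000, 75_000):
    saving = tax(agi) - tax(agi - 2_000)
    print(f"AGI {agi}: marginal rate {marginal_rate(agi):.0%}, "
          f"tax saved by a $2,000 deduction: ${saving:,.0f}")
```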
As mentioned previously, DB plans typically provide a monthly benefit once an individual reaches retirement age and could be a source of retirement income for households without DC plans or IRAs. However, the prevalence of DB plans has declined over the past few decades. Consistent with overall trends, we found that few households without DC plans or IRAs have a DB plan (see fig. 3): only 15 percent of married households and 11 percent of single households without a DC plan or IRA had a DB plan. Although most households without a DC plan or IRA also do not have a DB plan, households in the first earnings quartile fare worst. For example, an estimated 5 percent of married households in the first earnings quartile had a DB plan, compared to an estimated 27 percent and 31 percent of married households in the third and fourth earnings quartiles, respectively. The shift away from DB plans and the limited number of households that have them highlight the importance of other retirement savings vehicles, such as DC plans and IRAs.

We also analyzed the net worth (the difference between gross assets and liabilities) of single and married households without DC plans or IRAs by income quartile. Our evidence suggests that, in addition to not having a DC plan or IRA, these households may have limited additional assets to draw upon in their retirement years. For example, single households in the first income quartile without a DB plan had an estimated average net worth of $17,164, and married households in the first income quartile without a DB plan had an estimated average net worth of $70,485.

The Saver's Credit could increase retirement income for a sizable portion of the population, especially if modifications, such as making it refundable and expanding eligibility, are included in its design. To analyze the effect of the credit on retirement income, we modeled two different Saver's Credit scenarios: the existing credit and a modified refundable credit. (See the "Saver's Credit Modeling Scenarios" text box and app. II for detailed descriptions of these scenarios and the underlying assumptions.) Specifically, we projected the effects of the two different Saver's Credits for a cohort of individuals born in 1995 (see fig. 4). We assumed that all eligible taxpayers claimed the credit and found that, under the existing Saver's Credit, just over a third of all individuals would be projected to receive the credit at some point during their working years. In contrast, we projected that about half of all individuals would receive the credit at some point during their working years if it were refundable.

According to experts, the nonrefundable nature of the existing credit may limit its utility for low-income households; specifically, many low-income households do not receive the full amount of the credit because their tax liability is not high enough. Our estimates show that making the credit refundable could substantially increase the percentage of individuals who would receive it. However, such modifications would also increase costs to the federal government through the loss of tax revenue and increased outlays. Under both scenarios, the lowest two earnings quartiles have the largest percentage of individuals who could receive the credit; for example, under the existing Saver's Credit, about 45 percent of individuals in each of the lowest two earnings quartiles could receive it.
Alternatively, a refundable Saver's Credit could result in more middle-income workers receiving the credit, with an estimated 60 percent of individuals in the second earnings quartile potentially receiving it. Because we present our results across quartiles based on lifetime earnings, these results reflect individuals who could have received the credit while they were lower earners but subsequently became higher earners.

Households could experience larger retirement annuity increases from a refundable Saver's Credit than from the existing credit, according to our projections (see fig. 5). For the purposes of this report, household retirement annuities include all annual retirement income received from DC or DB plans and represent income that the household receives in retirement. DB plans typically provide benefits in the form of periodic payments, which can be summed into one annual payment. DC account balances represent a worker's DC or IRA savings from past jobs, and we assumed that at retirement, workers used their entire DC account balance to purchase an annuity. We estimate that the median increase in households' retirement annuity under the existing credit could be $103 per year. In contrast, under a refundable Saver's Credit, the median increase could be $655 per year, according to our simulations.

For both Saver's Credit scenarios, low- and middle-earning households could receive the largest benefit from the credit. For households in the lowest earnings quartile, we projected that the median increase in annuity under the existing Saver's Credit would be $155 per year, while the median increase under a refundable credit would be $876 per year. In addition, about 21 percent of households in the lowest earnings quartile could see an additional $1,000 to $1,999 per year in their retirement annuities under a refundable credit; under the existing credit, only 6 percent of households could receive an increase of this size. Our estimates of the percent change in median household retirement annuities show similar trends. The refundable Saver's Credit results in a larger percent increase in the median household retirement annuity at all earnings levels, and in both scenarios the projected percent increase for the lowest earnings quartile was larger than the increase for other quartiles. According to our projections, the percent increase in the median retirement annuity for low-earning households would be 4 percent under the existing Saver's Credit and 15 percent under a refundable credit.

Automatic IRAs provide a new opportunity to increase the number of households saving for retirement at all earnings levels. We project that 7 percent of all households had no retirement annuities from DB or DC plans but could receive annuity income from automatic IRAs. (See the "Automatic IRA and Saver's Credit Modeling Scenarios" text box and app. II for detailed descriptions of the scenarios we modeled and their underlying assumptions.) Based on our projections, more households in the lowest earnings quartile would benefit than in any other quartile; households in this quartile without a DB or DC plan could receive annuity income from automatic IRAs, according to our analysis. We analyzed H.R. 4049, the Automatic IRA Act of 2012, because it was the most recent legislative proposal at the time we conducted our analysis. On May 16, 2013, H.R. 2035, the Automatic IRA Act of 2013, was introduced in the House of Representatives.
The differences between H.R. 2035 and H.R. 4049 are minimal, and we determined that they would not have a significant effect on our simulation results.

Under automatic IRAs, 36 percent of households could see modest increases in their retirement annuities, according to our projections. These households include workers who have not had access to a DB or DC plan as well as workers who had access to DB or DC plans at some jobs but not at others. Specifically, we projected that the median dollar increase in a household's annuity would be $1,046 (see fig. 6). Although households in the lowest earnings quartile saw the smallest median dollar increase in annuity ($479), the percent change in the median annuity for this quartile was 66 percent, and we estimate that 30 percent of households in this quartile saw an increase in their annuity of $1,000 or more. In comparison, the percent change in the median retirement annuity for households in the second earnings quartile was 16 percent, and the median dollar increase was $1,043.

Almost 75 percent of households could experience an increase in their retirement annuity if the Saver's Credit were made refundable at the same time automatic IRAs were implemented (see fig. 7). Further, according to our projections, making the Saver's Credit refundable in addition to implementing automatic IRAs could double the number of households experiencing an increase in retirement income; under this scenario, many more households could benefit than under automatic IRAs alone. Retirement annuities could increase if the household participated in automatic IRAs, was eligible to receive the refundable Saver's Credit but not the existing Saver's Credit, received a larger credit under the refundable Saver's Credit, or some combination of these events. Further, under the proposals we used for our modeling, the refundable Saver's Credit would be available for certain households with AGI of up to $85,000 a year, resulting in all lower- and many middle-income households seeing an increase in their retirement savings. Households in the two lowest earnings quartiles could be most likely to benefit, with about 80 percent of those households seeing an increase in their retirement annuities, according to our projections.

Our projections show that households in the lowest earnings quartile could gain the most from the combination of automatic IRAs and a refundable Saver's Credit. For these households, the percent increase in the median retirement annuity could be 21 percent, compared to a potential 2 percent increase for households in the highest quartile (see fig. 8). Furthermore, 48 percent of households in the lowest earnings quartile could see an increase in their retirement annuities of $1,000 or more.

In general, under our projections, lower-earning households were more reliant on Social Security for income in retirement than higher-earning households (see fig. 9). This may be because higher-earning households have the ability to save more for retirement, generating larger retirement savings at retirement. Further, Social Security benefits are progressive and replace a larger portion of lifetime earnings for people with low earnings than for people with high earnings. Thus, higher-earning households may receive a greater share of retirement income from annuitizing their retirement savings than from their Social Security benefits.
The combined effects of automatic IRAs and a refundable Saver's Credit could decrease lower- and middle-income households' reliance on Social Security in retirement. Under our projections of automatic IRAs alone, the two lowest earnings quartiles relied on Social Security for 80 and 62 percent of their total household income, respectively. However, the percentage of income derived from a household's retirement annuity increased when we added a refundable Saver's Credit: the two lowest earnings quartiles then relied on retirement annuities for 31 and 46 percent, respectively, of total household income.

Concerns about retirement security have grown as private sector employers have shifted to predominantly sponsoring DC plans. First, despite existing tax incentives aimed at fostering plan formation and coverage, less than half of workers aged 21-64 were covered by an employer-sponsored plan in 2011, and the participation rate has barely changed over the last several decades. Second, despite significant tax incentives aimed at increasing retirement savings, many households enter retirement without adequate financial resources. Recent proposals, such as those that would establish the automatic IRA, could represent a significant step toward increasing participation to over 50 percent within the current framework of a voluntary system. Further, redesigning the Saver's Credit could improve the retirement income and savings imbalance between lower and higher earners by giving lower- and middle-earning households an additional incentive to accumulate more in their retirement accounts or to start saving for retirement. By itself, increasing coverage through automatic IRAs will likely result in, at best, moderate increases in retirement income; however, even these increases could bolster the financial prospects of many future retirees.

These options do, of course, pose important trade-offs for individuals, employers, and the government that will need to be carefully weighed. Individuals, especially those in lower-income households, will still have to make difficult choices between spending now and saving for later. In addition, the government would forgo tax revenue in the near term because contributions to automatic IRAs would be tax-deferred. Moreover, the expansion of the Saver's Credit would result in both a loss of revenue and increased federal spending. Such costs will have to be considered along with the proposals' potential effects on labor force participation and dependence on other government programs, such as Social Security, in retirement.

We provided a draft of this report to the Department of Labor and the Department of the Treasury for review and comment. While neither agency provided official comments, each provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.
To analyze the extent to which workers with lower earnings can benefit from tax incentives for retirement savings and automatic IRAs, we examined (1) the earnings and tax rates of households that do not have DC plans or IRAs, (2) the effects of the Saver's Credit on retirement income, and (3) how automatic IRAs could affect retirement income, especially for low- and middle-income workers. This appendix and appendix II provide a detailed account of the information and methods we used to address these objectives. Section 1 describes the key information sources we used. Section 2 describes the empirical methods we used to answer objective 1 and presents information on standard errors and confidence intervals for our estimates.

To answer our objectives, we obtained information from a variety of sources, including the Survey of Consumer Finances (SCF); the Policy Simulation Group's (PSG) microsimulation models; relevant literature; interviews with a range of experts in the area of retirement security; presidential and legislative proposals to modify the Saver's Credit and create automatic IRAs; and relevant federal laws and regulations.

To answer the first objective, we used data from the 2010 SCF to analyze the incomes and tax rates of households that did not take advantage of the existing tax incentives for retirement savings. The SCF is a triennial, nationally representative survey from the Board of Governors of the Federal Reserve System. The 2010 SCF surveyed 6,482 households about their pensions, incomes, labor force participation, asset holdings and debts, use of financial services, and demographic characteristics. The SCF is conducted using a dual-frame sample design: one part is a standard, multistage area-probability design, while the second part is a special oversample of relatively wealthy households. This design captures financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for nonresponse and combined using weights to make estimates from the survey data representative of households overall. The SCF excludes people on the Forbes Magazine list of the 400 wealthiest people in the United States, and the 2010 SCF dropped 10 observations from the public data set whose net worth was at least equal to the minimum level needed to qualify for that list. For the purposes of this report, a household refers to the primary economic unit within a household, which the SCF refers to as a family.

To estimate age, marital status, net worth, tax filing status, and whether the household had a member with a DC or DB plan, we relied on variable definitions used in Federal Reserve publications based on the SCF. To estimate AGI and tax rates, we used TAXSIM, an application provided by the National Bureau of Economic Research that estimates tax information for households using survey data; to prepare the data for TAXSIM, we relied on a program provided by the Federal Reserve. Following the SCF documentation, we estimated standard errors that incorporate both the multiple implicates and the replicate weights. The estimated populations we used for our analysis were derived with the first implicate.
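To make the variance estimation concrete, here is a simplified sketch of one standard way to combine imputation (implicate) and sampling (replicate-weight) variability, loosely following Rubin's rules; the SCF's published formulas differ in detail, and the data, weights, and counts below are placeholders rather than actual SCF inputs.

```python
import numpy as np

def weighted_mean(values, weights):
    return np.average(values, weights=weights)

def combined_estimate(values_by_implicate, main_weight, replicate_weights):
    """Combine imputation and sampling variability for a weighted estimate.

    values_by_implicate: list of arrays, one per implicate (the SCF has 5)
    main_weight:         array of main sampling weights
    replicate_weights:   2-D array, one row per bootstrap replicate
    """
    # Point estimate: average the per-implicate estimates.
    per_implicate = [weighted_mean(v, main_weight) for v in values_by_implicate]
    point = np.mean(per_implicate)

    # Imputation variance: spread of estimates across implicates.
    m = len(per_implicate)
    imput_var = np.var(per_implicate, ddof=1) * (1 + 1 / m)

    # Sampling variance: spread across replicate-weight estimates,
    # computed here on the first implicate only.
    reps = [weighted_mean(values_by_implicate[0], w) for w in replicate_weights]
    samp_var = np.var(reps, ddof=1)

    se = np.sqrt(samp_var + imput_var)
    return point, se, (point - 1.96 * se, point + 1.96 * se)

# Placeholder data: 5 implicates, 400 households, 100 replicate weights.
rng = np.random.default_rng(0)
vals = [rng.normal(50_000, 20_000, size=400) for _ in range(5)]
reps = rng.uniform(0.5, 1.5, size=(100, 400))
print(combined_estimate(vals, np.ones(400), reps))
```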
The SCF and other surveys based on self-reported data are subject to several other sources of nonsampling error, including the inability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; respondents' inability or unwillingness to provide correct information; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known.

To answer objectives 2 and 3, we used PSG's SSASIM, GEMINI, and PENSIM simulation models. GEMINI simulates Social Security benefits and taxes for large representative samples of people born in the same year. It simulates all types of Social Security benefits, including retired workers', spouses', survivors', and disability benefits, and can be used to model a variety of changes to Social Security. GEMINI uses inputs from SSASIM, which has been used in numerous prior GAO reports, and PENSIM, which was developed for the Department of Labor. GEMINI relies on SSASIM for economic and demographic projections and on PENSIM for simulated life histories of large representative samples of people born in the same year and their spouses. Life histories include educational attainment, labor force participation, earnings, job mobility, marriage, disability, childbirth, retirement, and death. Life histories are validated against data from the Survey of Income and Program Participation, the Current Population Survey, Modeling Income in the Near Term (MINT), and the Panel Study of Income Dynamics. Additionally, any projected statistics (such as life expectancy, employment patterns, and marital status at age 60) are, where possible, consistent with intermediate cost projections from the Social Security Administration's Office of the Chief Actuary (OCACT). At their best, such models can provide only very rough estimates of future incomes; however, these estimates may be useful for comparing future incomes across alternative policy scenarios and over time.

GEMINI can be operated as a free-standing model or as an SSASIM add-on. When operating as an add-on, GEMINI is started automatically by SSASIM for one of two purposes: it can enable the SSASIM macro model to operate in the Overlapping Cohorts (OLC) mode, or it can enable the SSASIM micro model to operate in the Representative Cohort Sample (RCS) mode. The SSASIM OLC mode requests GEMINI to produce samples for each cohort born after 1934 in order to build up aggregate payroll tax revenues and Old-Age, Survivors, and Disability Insurance (OASDI) benefit expenditures for each calendar year, which SSASIM uses to calculate standard trust fund financial statistics. In either mode, GEMINI operates with the same logic, but typically with smaller cohort sample sizes in OLC mode than in the RCS or stand-alone mode.

PENSIM simulates the timing of each life event by using data from various longitudinal data sets to estimate a waiting-time model (often called a hazard function model) using standard survival analysis methods. PENSIM incorporates many such estimated waiting-time models into a single dynamic simulation model, which can be used to simulate a synthetic sample of complete life histories. PENSIM employs continuous-time, discrete-event simulation techniques, such that life events do not have to occur at discrete intervals, such as annually on a person's birthday.
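To give a sense of how a waiting-time model drives a continuous-time, discrete-event simulation, here is a minimal sketch; it is not PENSIM's code, the constant hazard rates are invented for illustration, and real models estimate time-varying hazards from longitudinal data.

```python
import heapq
import random

# Invented constant hazards (expected events per year) for two event types.
HAZARDS = {"job_change": 0.20, "marriage": 0.05}

def draw_waiting_time(rate, rng):
    """Sample a waiting time from a constant-hazard (exponential) model.

    With a constant hazard, survival is S(t) = exp(-rate * t), so a draw
    from the exponential distribution gives the time until the next event.
    """
    return rng.expovariate(rate)

def simulate_life(start_age=18.0, end_age=67.0, seed=0):
    """Schedule competing events on a continuous timeline until age 67."""
    rng = random.Random(seed)
    queue = [(start_age + draw_waiting_time(r, rng), name)
             for name, r in HAZARDS.items()]
    heapq.heapify(queue)
    history = []
    while queue:
        age, event = heapq.heappop(queue)
        if age >= end_age:
            continue  # event falls after the horizon; drop it
        history.append((round(age, 2), event))
        # Re-draw the next occurrence of this event type.
        heapq.heappush(queue, (age + draw_waiting_time(HAZARDS[event], rng),
                               event))
    return history

print(simulate_life())
```

Because event times are continuous, a job change can occur at, say, age 23.47 rather than only on a birthday, which is the point of the discrete-event design described above.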
PENSIM also uses macro-demographic and macroeconomic variables generated by SSASIM. PENSIM imputes pension characteristics using a model estimated with 1996-1998 establishment data from the Bureau of Labor Statistics Employee Benefits Survey (now known as the National Compensation Survey). Pension offerings are calibrated to historical trends in pension offerings from 1975 to 2005, including plan mix, types of plans, and employer matching. Further, PENSIM incorporates data from the 1996-1998 Employee Benefits Survey to impute access to, and participation rates in, DC plans in which the employer makes no contribution, which the Bureau of Labor Statistics does not report as pension plans in the National Compensation Survey. The inclusion of these "zero-matching" plans enhances PENSIM's ability to accurately reflect the universe of pension plans offered by employers. The baseline PENSIM assumption, which we adopted in our analysis, is that 2005 pension offerings, including the imputed zero-matching plans, are projected forward in time. PENSIM also simulates federal income taxes.

PSG has conducted validation checks of PENSIM's simulated life histories against both historical life history statistics and other projections. Life history statistics have been validated against data from the Survey of Income and Program Participation, the Current Population Survey, MINT, the Panel Study of Income Dynamics, and the Social Security Administration's Trustees Report. PSG reports that PENSIM life histories have produced annual population, taxable earnings, and disability benefit figures for the years 2000 to 2080 similar to those produced by the Congressional Budget Office's long-term Social Security model and those shown in the Social Security Administration's 2004 Trustees Report. According to PSG, PENSIM also generates simulated DC plan participation rates and account balances similar to those observed in a variety of data sets; for example, measures of central tendency in the simulated distribution of DC account balances among employed individuals are similar to those produced by an analysis of the Employee Benefit Research Institute-Investment Company Institute 401(k) database and of the 2004 SCF. We performed no independent validation checks of PENSIM's life histories or pension characteristics.

In 2006, the Employee Benefits Security Administration (EBSA) submitted PENSIM to a peer review by three economists, whose overall reviews ranged from highly favorable to highly critical. While the economist who gave PENSIM a favorable review expressed a "high degree of confidence" in the model, the one who criticized it focused on PENSIM's reduced-form modeling. This means that the model is grounded in previously observed statistical relationships among individuals' characteristics, circumstances, and behaviors, rather than in an underlying theory of the determinants of behavior, such as the common economic theory that individuals make rational choices as their preferences dictate and thereby maximize their own welfare. The reduced-form approach is commonly used in pension microsimulation models, and the feasibility of building such a model on a non-reduced-form approach may be questionable given the current state of economic research. The third economist raised questions about specific modeling assumptions and possible overlooked indirect effects.
We conducted a data reliability assessment of the PSG models and of selected variables from the SCF by conducting electronic data tests for completeness and accuracy, reviewing documentation on the data sets, and interviewing knowledgeable officials about how the data are collected and maintained and their appropriate uses. When we learned that particular fields were not sufficiently reliable, we did not use them in our analysis. For the purposes of our analysis, we found the variables that we ultimately reported on to be sufficiently reliable.

We conducted an extensive literature review. To identify existing studies, we searched various databases, such as ECO, ArticleFirst, WorldCat, Social SciSearch, Harvard Business Review, EconLit, ProQuest, PolicyFile, and CQ.com. From these sources, we reviewed article abstracts, when available, to determine which articles contained information germane to our report and reviewed those articles. In addition, we collected articles posted on the websites of organizations such as Brookings, the Heritage Foundation, and AARP. We performed these searches and identified articles from June 2012 through October 2012.

We also interviewed a range of experts. To ensure we obtained a balanced perspective, we interviewed experts with differing viewpoints and from different types of organizations, including government, research organizations, advocacy groups, and the private sector. We also conducted interviews with several experts in government and the private sector on technical issues related to our analysis. Specifically, we interviewed agency officials at the Departments of the Treasury and Labor; researchers from the Urban Institute, the Heritage Foundation, and the Economic Policy Institute; experts and advocates from the Pension Rights Center, Demos, the Aspen Institute, the American Society of Pension Professionals and Actuaries, and AARP; and private sector professionals from PAi, Prudential Financial, Inc., and Putnam Investments. We consulted with officials at the Social Security Administration (SSA) and an expert from PSG on technical issues.

To analyze the earnings and tax rates of households that do not take advantage of the tax incentives for retirement saving, we used the 2010 SCF. We limited our sample to households where the household head was under age 65 and either the respondent or the spouse worked in the private sector. Our estimates include only the retirement benefits and savings of the survey respondent and a spouse or partner, not those of additional family members; as a result, our estimates may understate retirement assets held by a household. We estimated whether the household had a DB plan, as well as AGI and marginal tax rates for households with and without DC plans or IRAs. We also estimated net worth using variable definitions outlined by the Federal Reserve. We estimated AGI and marginal tax rates by inputting the SCF data into the National Bureau of Economic Research's (NBER) TAXSIM model, a microsimulation model of U.S. federal and state income tax systems. TAXSIM calculates estimated tax liabilities under U.S. federal and state income tax laws from actual tax returns that have been prepared for public use by the Statistics of Income Division of the IRS. The SCF is a probability sample based on random selection, so the 2010 SCF sample is only one of a large number of samples that might have been drawn.
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report includes the true value in the study population. Tables 2 through 6 show the confidence intervals for our estimates of AGI, marginal tax rates, and retirement plan ownership.

This appendix describes the modeling scenarios and assumptions for objectives 2 and 3. It also presents the results of our sensitivity analyses and describes cohort summary statistics resulting from our simulations. To analyze the effects of the Saver's Credit and automatic IRAs on retirement income, we used the PSG models. We started with a 2 percent sample of a 1995 birth cohort, totaling 118,142 people at birth, and projected income in retirement when the cohort is age 67. Our simulations included the following key assumptions and features:

- Individuals who died before retiring or before age 67, retired after age 67, immigrated into the cohort after age 25, emigrated before age 67, or became permanently disabled before age 62 were omitted from the sample. We did not include individuals who became permanently disabled before age 62 because we could not account for missed career growth and opportunities to participate in employer-sponsored pension plans.

- Retirement occurred as early as age 62, and anyone who became disabled at age 62 or older was considered retired.

- Workers could be covered by DB plans. We relied on PENSIM's defaults to determine DB plan coverage and benefit amounts.

- Rates of return were fixed. The annual nonstochastic nominal rate of return was 9.2 percent for stocks and 5.7 percent for government bonds. Different rates of return would produce different DC and automatic IRA account balances at retirement and, as a result, a different household retirement annuity. Because our projections did not stochastically model stock returns, assuming a rate of return equal to the historical return on stocks did not capture the risks associated with stock returns. Further, the nominal rate of return for stocks is based in part on a long-term equity risk premium calculated in 2000.

- PENSIM assigned each individual one of four "lifetime asset allocation styles," and DC assets were invested accordingly: (1) all assets in a diversified-equity fund; (2) all assets in a government bond fund; (3) 15 percent of assets in a collection of individual stocks and the remaining 85 percent in an age-specific mixture of a diversified-equity fund and a government bond fund; or (4) 15 percent of assets in a collection of individual stocks and the remaining 85 percent in a target-date fund.

- Account fees were 75 basis points for target-date funds; 100 basis points for diversified-equity funds; 45 basis points for money-market, equity-index, and government bond funds; and 0 basis points for stable-value and guaranteed-return funds. We considered these our "standard fees" scenario.
- When workers changed jobs, they either rolled over their DC account balances or cashed them out, depending on the relative size of the balances. We relied on PENSIM's defaults to determine whether workers rolled over their account balances. PENSIM does not allow for hardship withdrawals.

- Workers accumulated DC and IRA savings from past jobs in one rollover account, which continued to receive investment returns. At retirement, benefits were consolidated into one account.

- At retirement, workers used their entire DC account balance to purchase a single-life annuity that was not adjusted for inflation. DB benefits were also provided in the form of an annuity, and DB and DC annuities were combined into one retirement annuity.

- Annuity prices were based on projected mortality rates for the 1995 birth cohort and on loading factors that ensured that the cost of providing annuities in PENSIM equaled the revenue generated by selling them at those prices (see the sketch following this list). We assumed that the annuity provider had no administrative or marketing costs, had no costs in acquiring the capital it needed to hold in reserve, and earned no profits.

- Eligible individuals received Social Security benefits in retirement. According to the 2012 projections of the Social Security Trustees for the next 75 years (2012-2087), revenues will not be adequate to pay full benefits as defined by the current benefit formula. Estimates of future Social Security benefits should therefore reflect that actuarial deficit and account for the fact that some combination of benefit reductions and revenue increases will be necessary to restore long-term solvency. Our tax-increase-only benchmark simulated "promised benefits," or those benefits promised by the current benefit formula. We also developed an alternative benefit-reduction-only benchmark, which simulated "funded benefits," or those benefits for which currently scheduled revenues are projected to be sufficient (see the sensitivity analysis below). The tax-increase-only benchmark raised payroll taxes once and immediately by the amount of Social Security's actuarial deficit as a percentage of payroll. It resulted in the smallest ultimate tax rate of those we considered and spread the tax burden most evenly across generations, which was the primary basis for our selection. The later taxes are increased, the higher the ultimate tax rate needed to achieve solvency and, in turn, the higher the tax burden on later taxpayers and the lower the burden on earlier taxpayers. Still, any policy scenario that achieves 75-year solvency only by increasing revenues would have the same effect on the adequacy of future benefits, in that promised benefits would not be reduced; nevertheless, alternative approaches to increasing revenues could have very different effects on individual equity. All estimates related to this benchmark were simulated using the SSASIM OLC mode. Starting in 2013, we increased the tax rate by 2.61 percent in order to achieve 75-year solvency; as reported in the Social Security Administration's 2012 Trustees Report, immediately raising taxes by 2.61 percent would achieve solvency.
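The annuity purchase assumed above can be illustrated with a simple actuarial calculation. The sketch below prices a single-life, non-inflation-adjusted annuity from survival probabilities and a discount rate; the mortality figures and interest rate are placeholders, and the sketch omits the loading factors used in PENSIM.

```python
def annuity_payment(balance, survival_probs, interest_rate):
    """Annual payment from spending `balance` on a single-life annuity.

    survival_probs[t-1] is the probability of surviving t more years past
    retirement. The price of $1 per year is the expected present value of
    the payment stream, so the payment equals the balance divided by that
    price (no loading factors, administrative costs, or profit, consistent
    with the simplifying assumptions described in the text).
    """
    price_per_dollar = sum(
        p / (1 + interest_rate) ** t
        for t, p in enumerate(survival_probs, start=1)
    )
    return balance / price_per_dollar

# Placeholder mortality: survival declines 4 percent per year for 30 years,
# discounted at the 5.7 percent government bond return assumed above.
survival = [0.96 ** t for t in range(1, 31)]
print(round(annuity_payment(100_000, survival, 0.057), 2))
```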
In our policy scenarios, we varied other assumptions to see how these variations affect retirement income at age 67. We did not account for any behavioral responses that changes in policy may have created; in reality, some individuals might contribute more to their pension plans, or start saving in a pension plan or IRA, in response to policy changes. The policy scenarios we analyzed are described below.

No Saver's Credit. In addition to the assumptions described above, this scenario assumed that no Saver's Credit was available. It served as the baseline against which we compared the results for the existing and refundable Saver's Credit scenarios.

Existing Saver's Credit. This scenario was drawn from the existing design of the Saver's Credit as established by the Economic Growth and Tax Relief Reconciliation Act of 2001. Under the credit's current design, taxpayers may receive a nonrefundable credit of up to $1,000 ($2,000 if married filing jointly) for contributing to a pension plan or IRA. The credit amount is equal to the credit rate multiplied by the amount of the qualified contribution (up to $2,000 per individual). Taxpayers receive a credit rate of 50, 20, or 10 percent depending on AGI and filing status. Certain students, individuals under age 18, and those claimed as dependents on another taxpayer's return are not eligible for the credit. If a worker receives a pre-retirement distribution from a pension plan, any credit received in that year and the subsequent 2 years is reduced by the amount of the distribution. We used information from the Internal Revenue Service to determine current AGI limits. The 2013 AGI limits are $59,000 for taxpayers married filing jointly, $44,250 for heads of household, and $29,500 for taxpayers filing as single, married filing separately, or widow(er). Limits in subsequent years were indexed to inflation. PENSIM assumed the credit was deposited directly into the taxpayer's DC account; this is not a requirement under law, and in reality the credit would be part of the household's tax refund. We assumed a 100 percent utilization rate, meaning that all eligible taxpayers received the credit, to show the maximum potential accumulation of retirement income from the Saver's Credit. This rate also reflects PENSIM's default values for 2012 and beyond. We did not assess behavioral responses related to utilization of the credit.

Refundable Saver's Credit. This scenario had the same eligibility requirements and 2013 AGI limits as the existing Saver's Credit scenario. Starting in 2014, the following changes were implemented, drawn from our understanding of modifications included in the President's fiscal year 2011 budget proposal: (1) the credit became fully refundable, (2) all taxpayers received a 50 percent credit rate (there was no phase-out), and (3) AGI limits were increased. The 2014 AGI limits were $85,000 for taxpayers married filing jointly, $63,750 for heads of household, and $42,500 for taxpayers filing as single, married filing separately, or widow(er). Limits in subsequent years were indexed to inflation. Individuals could receive a refundable credit of up to $1,000 if filing singly or $2,000 if filing jointly. PENSIM assumed the credit was deposited directly into the taxpayer's DC account; this change was not proposed in the President's budget. We assumed a 100 percent utilization rate to show the maximum potential accumulation of retirement income from the credit, and we did not assess behavioral responses related to utilization.
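Because the refundable scenario eliminates the phase-out, its credit amount can be computed directly. The sketch below encodes the 2014 AGI limits and flat 50 percent rate from the scenario description; the function and status names are invented for illustration, and the eligibility exclusions (certain students, individuals under 18, and dependents) are omitted.

```python
# 2014 AGI limits for the modeled refundable scenario, by filing status.
AGI_LIMITS_2014 = {
    "married_joint": 85_000,
    "head_of_household": 63_750,
    "single": 42_500,  # also married filing separately and widow(er)
}

def refundable_savers_credit(agi, filing_status, contribution):
    """Credit under the modeled refundable scenario: flat 50 percent rate,
    no phase-out, capped at $1,000 ($2,000 for joint filers)."""
    if agi > AGI_LIMITS_2014[filing_status]:
        return 0.0
    cap = 2_000 if filing_status == "married_joint" else 1_000
    return min(0.50 * contribution, cap)

print(refundable_savers_credit(40_000, "single", 2_000))         # 1000.0
print(refundable_savers_credit(80_000, "married_joint", 4_000))  # 2000.0
```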
Automatic IRAs. This scenario was drawn from our understanding of H.R. 4049, the Automatic IRA Act of 2012. For our scenario, we assumed that private sector employers began offering automatic IRAs in 2014. Employers that employed more than 10 employees and did not offer a DB or DC plan were required to offer automatic IRAs to employees ages 18 and over after they had been employed for 90 days. Contributions were made to traditional IRAs. If the employee did not select a contribution rate or investment fund, 3 percent of the employee's salary was contributed to a target-date fund (the eligibility and default rules are sketched below). The aggregate participation rate was centered around a target participation rate of 69 percent in the year 2035, when the cohort was age 40 and in the midst of its prime working years. We selected this rate by estimating a hypothetical average participation rate using parameters from the economic literature and SCF data on households that did not currently participate in a DC plan or IRA. Our model considered the effects of automatic enrollment in 401(k) plans, the effects of an employer match (or lack thereof), income, and age. Individuals could choose to terminate their participation in the automatic IRA after being enrolled. The Saver's Credit provisions matched the existing Saver's Credit scenario described above, and we assumed that the utilization rate for the credit was 100 percent to model the full savings potential of the credit.

Automatic IRAs and a refundable Saver's Credit. In addition to modeling automatic IRAs as described above, we modeled the Saver's Credit with the same provisions as in the refundable Saver's Credit scenario, also described above: the credit was fully refundable, there was no credit rate phase-out, and AGI limits were extended starting in 2014. Individuals could receive a refundable credit of up to $1,000 if filing singly or $2,000 if filing jointly. We assumed that the utilization rate for the refundable Saver's Credit was 100 percent.
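The eligibility and default-enrollment logic in the automatic IRA scenario can be summarized in a short sketch. The data classes and field names are invented for illustration and simplify the bill's actual terms.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employer:
    num_employees: int
    offers_db_or_dc_plan: bool

@dataclass
class Employee:
    age: int
    tenure_days: int
    salary: float
    opted_out: bool = False
    chosen_rate: Optional[float] = None  # None means no election was made

def must_offer_auto_ira(employer: Employer) -> bool:
    """Employers with more than 10 employees and no DB or DC plan must offer one."""
    return employer.num_employees > 10 and not employer.offers_db_or_dc_plan

def annual_contribution(employer: Employer, employee: Employee) -> float:
    """Payroll-deduction contribution under the modeled scenario.

    Eligible employees are 18 or older with at least 90 days of tenure;
    absent an election, 3 percent of salary goes to a target-date fund.
    Workers may opt out at any time.
    """
    if not must_offer_auto_ira(employer):
        return 0.0
    if employee.age < 18 or employee.tenure_days < 90 or employee.opted_out:
        return 0.0
    rate = employee.chosen_rate if employee.chosen_rate is not None else 0.03
    return employee.salary * rate

shop = Employer(num_employees=25, offers_db_or_dc_plan=False)
worker = Employee(age=30, tenure_days=200, salary=40_000)
print(annual_contribution(shop, worker))  # 1200.0 (3 percent default)
```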
Figures 10 through 13 show the results of our modeling scenarios presented as means instead of medians; in general, for dollar changes, the means were higher than the medians. In addition, we modeled alternative scenarios to examine how sensitive our results were to our assumptions. Specifically, we ran the PSG models using the following alternative assumptions.

Alternative assumptions for investment account fees. To test whether investment account fees higher than our standard fees assumption would change the results, we established a "high fees" assumption: account fees for target-date funds were 200 basis points; fees for diversified-equity funds were 225 basis points; fees for money-market, equity-index, and government bond funds remained at 45 basis points; and fees for stable-value and guaranteed-return funds remained at 0 basis points.

Alternative assumptions for Social Security benefits. Our benefit-reduction-only benchmark simulated "funded benefits," or those benefits for which currently scheduled revenues are projected to be sufficient. Under this scenario, the benefit reduction did not begin until 2018, to give workers and Social Security beneficiaries time to prepare. In 2018, benefits were reduced by 18.2 percent for new and existing beneficiaries. SSA actuaries scored this benchmark and determined it would achieve 75-year solvency.

Alternative assumptions for the Saver's Credit utilization rate. Research on the Saver's Credit utilization rate during the first few years the credit was available indicates that between 60 and 67 percent of eligible taxpayers claimed it. We modeled a utilization rate of 60 percent to reflect the lower bound of this range. This scenario demonstrates the effects of the Saver's Credit if utilization is as low as the initial rate at which taxpayers claimed the credit.

Alternative assumptions for the automatic IRA participation rate. We modeled an aggregate automatic IRA participation rate of 48 percent to demonstrate the effects of automatic IRAs should overall participation be lower than expected. This alternative participation rate is similar to one study's estimate of the participation rate for new employees in retirement plans where participation is voluntary.

The results for all of our modeling scenarios, including those using our alternative assumptions, are presented in tables 7 through 11 below. Lifetime summary statistics of the simulated 1995 cohort's workforce and demographic variables give some insight into the PSG model's projections of income in retirement (see tables 12 and 13). By restricting the sample to retirees who did not immigrate into the cohort after age 25, did not emigrate or die before age 67, and did not become permanently disabled before age 62, we reduced the full sample of 118,142 individuals to 60,813 individuals.

In addition to the contact named above, Michael Collins, Assistant Director; Andrea Dawson, Jennifer Gregory, and Amrita Sen made key contributions. Also contributing to this report were Alicia Atkinson, Benjamin Bolitzer, Alicia Cackley, Colleen Candrl, David Chrisinger, Laura Henry, Sharon Hermes, Gene Kuehneman, Kathy Leslie, Thomas McCool, Mimi Nguyen, Marylynn Sergent, Kenneth Stockbridge, Nyree Ryder Tee, Roger Thomas, Frank Todisco, and Walter Vance.
Participants in DC plans and IRAs may receive tax incentives for their contributions, and lower-earning households may qualify for the Saver's Credit, an additional tax incentive. However, less than half of the workforce participates in an employer-sponsored plan, and upper-income workers have been more likely to take advantage of the associated tax incentives. In recent years, proposals have been put forth to modify the Saver's Credit and to create automatic IRAs, under which employers that do not sponsor a plan would generally be required to offer their employees the opportunity to save in an IRA through payroll deduction. These proposals would have fiscal impacts for the federal government.

GAO was asked to review tax incentives for contributions to DC plans and automatic IRAs. GAO examined (1) the earnings and tax rates of households that do not have DC plans or IRAs, (2) the effects of the Saver's Credit on retirement income, and (3) the effects of automatic IRAs on retirement income, especially for low- and middle-income workers. GAO examined the characteristics of households that do not take advantage of these tax incentives using data from the 2010 Survey of Consumer Finances, simulated the effects of the Saver's Credit and automatic IRAs, and reviewed related proposals. GAO is making no recommendations. GAO received technical comments on a draft of this report from the Department of Labor and the Department of the Treasury and incorporated them as appropriate.

Households without employer-sponsored defined contribution (DC) pension plans or individual retirement accounts (IRA) had lower incomes and tax rates than households with those plans, and they are also likely to have limited additional resources to draw upon in retirement, according to GAO estimates. The median adjusted gross income for households without DC plans or IRAs was $32,000, compared to $75,000 for those that had them. The median marginal tax rate for households without DC plans or IRAs was 15 percent, compared to 25 percent for households with those savings vehicles. A defined benefit (DB) pension plan could provide a monthly benefit during retirement for those without a DC plan or IRA; however, in 2010 only 15 percent of married households and 11 percent of single households without a DC plan or IRA had a DB plan.

The existing Saver's Credit tax incentive could result in small increases in a household's retirement annuity, that is, the household's annual retirement income received from DC or DB plans. GAO estimates that, on account of this credit, the median annuity increase for households in the lowest earnings quartile ($929-$34,377) would be $155 per year. If, however, the Saver's Credit were refundable (i.e., could generate a tax refund in excess of tax paid), it could result in larger increases in households' annuities across all earnings levels, and the median increase for households in the lowest earnings quartile would be $876 per year.

Implementing automatic IRAs, in which employees would be enrolled unless they opted out, could expand retirement coverage and modestly increase retirement annuities for households at all earnings levels. Specifically, 7 percent of all households could receive retirement annuities from automatic IRAs even though these households had no DB or DC plans, according to GAO's projections. Workers with DB or DC plans could also benefit from automatic IRAs at points in their careers when their jobs do not offer such plans.
Moreover, low-income workers could see a sizable increase in their annuities under automatic IRAs and the existing Saver's Credit--the projected median dollar increase for these households' annual retirement annuity would be $479.
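The difference between the $155 and $876 median increases above stems from refundability: a nonrefundable credit can offset only the tax a household owes, while a refundable credit is paid in full even when it exceeds tax owed. A minimal sketch of that distinction, with invented dollar amounts rather than the actual Saver's Credit rate schedule:

    # Sketch of nonrefundable vs. refundable credit mechanics; the amounts
    # are invented and do not reflect the actual Saver's Credit schedule.
    def nonrefundable_credit_value(tax_before_credit: float, credit: float) -> float:
        """A nonrefundable credit can only offset tax owed; any excess is lost."""
        return min(credit, max(tax_before_credit, 0.0))

    def refundable_credit_value(tax_before_credit: float, credit: float) -> float:
        """A refundable credit is received in full; any excess over tax owed
        is paid out as a refund."""
        return credit

    tax_owed, credit = 300.0, 600.0  # hypothetical low-earning household
    print(nonrefundable_credit_value(tax_owed, credit))  # 300.0 -- half the credit is lost
    print(refundable_credit_value(tax_owed, credit))     # 600.0 -- full value received

Because low-earning households often owe little or no income tax, capping the credit at tax liability is precisely what limits its value for them, which is why refundability changes the results most in the lowest earnings quartile.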
Recognizing the potential value of IT for public and private health systems, the federal government has, for several years, been working to promote the nationwide use of health IT. In April 2004, President Bush called for widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health IT within HHS. The National Coordinator’s responsibilities include developing, maintaining, and directing the implementation of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private sectors. According to the strategic plan, the National Coordinator is to lead efforts to build a national health IT infrastructure that is intended to, among other things, ensure that patients’ individually identifiable health information is secure, protected, and available to the patient to be used for medical and nonmedical purposes, as directed by the patient and as appropriate.

In January 2007, we reported on the steps that HHS was taking to ensure the protection of personal health information exchanged within a nationwide network and on the challenges facing health information exchange organizations in protecting electronic personal health information. We reported that although HHS and the Office of the National Coordinator had initiated actions to identify solutions for protecting electronic personal health information, the department was in the early stages of its efforts and had not yet defined an overall privacy approach. As described earlier, we made recommendations regarding the need for an overall privacy approach, which we reiterated in subsequent testimonies in February 2007, June 2007, and February 2008.

In our report, we described applicable provisions of HIPAA and other federal laws that are intended to protect the privacy of certain health information, along with the HIPAA Privacy Rule and key principles that are reflected in the rule. Table 1 summarizes these principles. We also described in our report and testimonies challenges associated with protecting electronic health information that are faced by federal and state health information exchange organizations and health care providers. These challenges are summarized in table 2.

We reported that HHS had undertaken several initiatives intended to address aspects of key principles and challenges for protecting the privacy of health information. For example, in 2005, the department awarded four health IT contracts that included requirements for developing solutions to comply with federal privacy requirements and identifying techniques and standards for securing health information.

Since January 2007, HHS has undertaken various initiatives that are contributing to its development of an overall privacy approach, although more work remains. We recommended that this overall approach include (1) identifying milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, (2) ensuring that key privacy principles in HIPAA are fully addressed, and (3) addressing key challenges associated with the nationwide exchange of health information. In this regard, the department has fulfilled the first part of our recommendation, and it has taken important steps in addressing the two other parts.
Nevertheless, these steps have fallen short of fully implementing our recommendation because they do not include a process for ensuring that all key privacy principles and challenges will be fully and adequately addressed. In the absence of such a process, HHS may not be effectively positioned to ensure that health IT initiatives achieve comprehensive privacy protection within a nationwide health information network.

The department and its Office of the National Coordinator have continued taking steps intended to address key privacy principles and challenges through various health IT initiatives. Among other things, these initiatives have resulted in technical requirements, standards, and certification criteria related to the key privacy principles described in table 1. The following are examples of ways that the Office of the National Coordinator’s health IT initiatives relate to privacy principles reflected in HIPAA.

As part of its efforts to advance health IT, the American Health Information Community defines “use cases,” which are descriptions of specific business processes and ways that systems should interact with users and with other systems to achieve specific goals. Among other things, several of the use cases include requirements and specifications that address aspects of the access, uses and disclosures, and amendments privacy principles. For example, the “consumer empowerment” use case describes at a high level specific capabilities that align with the access principle. It requires that health IT systems include mechanisms that allow consumers to access their own clinical information, such as lab results and diagnosis codes, from other sources to include in their personal health records. The use case also aligns with the uses and disclosures principle and includes requirements that allow consumers to control access to their personal health record information and specify which information can be accessed by health care providers and organizations within health information networks. Further, the consumer empowerment use case aligns with the amendments privacy principle, emphasizing the need for policies to guide decisions about which data consumers should be able to modify, annotate, or request that organizations change. (Other use cases that are related to these privacy principles are the “personalized healthcare” and “remote monitoring” use cases.)

Under HHS’s initiative to implement a nationwide health information network, in January 2007, four test network implementations, or prototypes, demonstrated potential nationwide health information exchange and laid the foundation for the Office of the National Coordinator’s ongoing network trial implementations. Activities within the prototypes and the trial implementations are related to privacy principles, including the security, access, uses and disclosures, and administrative requirements principles. For example, the prototypes produced specific requirements for security mechanisms (such as data access control and encryption) that address aspects of the security principle. Additionally, the ongoing trial implementations are guided by requirements for using personal health data intended to address the access, uses and disclosures, and administrative requirements principles.
For example, participants in the trial implementations are to provide the capability for consumers to access information, such as registration and medication history data, from other sources to include in their personal health records, to control access to self-entered data or clinical information held in a personal health record, and to control the types of information that can be released from personal health records for health information exchange. In addition, organizations participating in the network are required to provide system administrators the ability to monitor and audit all access to and use of the data stored in their systems.

The Healthcare Information Technology Standards Panel continued work to “harmonize” standards directly related to several key privacy principles, primarily the security principle. In addition, the panel developed technical guidelines that are intended to address other privacy principles, such as the authorization principle and the uses and disclosures principle. For example, the panel’s guidelines specify that systems should be designed to ensure that consumers’ instructions related to authorization and consent are captured, managed, and available to those requesting the health information.

The Certification Commission for Healthcare Information Technology, which is developing and evaluating the criteria and process for certifying the functionality, security, and interoperability of electronic health records, took steps that primarily address the security principle. For example, the commission defined specific security criteria for both ambulatory and inpatient electronic health records that require various safeguards to be in place before electronic health record systems are certified. Among other things, these safeguards include ensuring that system administrators can modify the privileges of users so that only those who have a need to access patients’ information are allowed to do so and that the minimum amount of information necessary can be accessed by system users.

The State-Level Health Information Exchange Consensus Project, a consortium of public- and private-sector stakeholders, is intended to promote consistent organizational policies regarding privacy and health information exchange. The consortium issued a report in February 2007 that addresses, among other principles, the uses and disclosures privacy principle. For example, the report advises health information exchange organizations to maintain information about access to and disclosure of patients’ personal health information and to make that data available to patients. The consortium subsequently issued another report in March 2008 that recommended practices to ensure the appropriate access, use, and control of health information.

Additionally, two of HHS’s key advisory groups continued to develop and provide recommendations to the Secretary of HHS for addressing privacy issues and concerns: The Confidentiality, Privacy, and Security Workgroup was formed in 2006 by the American Health Information Community to focus specifically on these issues and has submitted recommendations to the community that address privacy principles. Among these are recommendations related to the notice principle that the workgroup submitted in February and April 2008.
These recommendations stated that health information exchange organizations should provide patients, via the Web or another means, information in plain language on how these organizations use and disclose health information, their privacy policies and practices, and how they safeguard patient or consumer information. The workgroup also submitted recommendations related to the administrative requirements principle, stating that the obligation to provide individual rights and a notice of privacy practices under HIPAA should remain with the health care provider or plan that has an established, independent relationship with a patient, not with the health information exchange.

The National Committee on Vital and Health Statistics, established in 1949, advises the Secretary of HHS on issues including the implementation of health IT standards and safeguards for protecting the privacy of personal health information. The committee’s recent recommendations related to HHS’s health IT initiatives addressed, among others, the uses and disclosures principle. For example, in February 2008, the National Committee submitted five recommendations to the Secretary that support an individual’s right to control the disclosure of certain sensitive health information for the purposes of treatment. Although the recommendations from these two advisory groups are still under consideration by the Secretary, according to HHS officials, contracts for the nationwide health information network require participants to consider these recommendations when conducting network trials once they are accepted by the Secretary.

The Office of the National Coordinator also took actions intended to address key challenges to protecting exchanges of personal electronic health information. Specifically, state-level initiatives (described below) were formed to bring stakeholders from states together to collaborate, propose solutions, and make recommendations to state and federal policymakers for addressing challenges to protecting the privacy of electronic personal health information within a nationwide health information exchange. Outcomes of these initiatives provided specific state-based solutions and recommendations for federal policy and guidance for addressing key challenges described by our prior work (see table 2).

The Health Information Security and Privacy Collaboration is pursuing privacy and security projects directly related to several of the privacy challenges identified in our prior work, including the need to resolve legal and policy issues resulting from varying state laws and organizational-level business practices and policies, and the need to obtain individuals’ consent for the use and disclosure of personal health information. For example, the state teams noted the need for clarification about how to interpret and apply the “minimum necessary” standard, and they recommended that HHS provide additional guidance to clarify this issue. In addition, most of the state teams cited the need for a process to obtain patient permission to use and disclose personal health information, and the teams identified multiple solutions to address differing definitions of patient permission, including the creation of a common or uniform permission form for both paper and electronic environments.

The State Alliance for e-Health created an information protection task force that in August 2007 proposed five recommendations that are intended to address the challenge of understanding and resolving legal and policy issues.
The recommendations, which the alliance accepted, focused on methods to facilitate greater state-federal interaction related to protecting privacy and developing common solutions for the exchange of electronic health information.

Beyond the initiatives previously discussed, in June 2008, the Secretary released a federal health IT strategic plan that includes a privacy and security objective for each of its strategic goals, along with strategies and target dates for achieving the objectives. For example, one of the strategies is to complete the development of a confidentiality, privacy, and security framework by the end of 2008, and another is to address inconsistent statutes or regulations for the exchange of electronic health information by the end of 2011. The strategic plan emphasized the importance of privacy protection for electronic personal health information by acknowledging that the success of a nationwide, interoperable health IT infrastructure in the United States will require a high degree of public confidence and trust.

In accordance with this strategy, the Office of the National Coordinator is responsible for developing the confidentiality, privacy, and security framework. The National Coordinator has indicated that this framework, which is to be developed and published by the end of calendar year 2008, is to incorporate the outcomes of the department’s privacy-related initiatives. Milestones have been developed for integrating these outcomes, and the National Coordinator has assigned responsibility for the integration efforts and the development of the framework to the Director of the Office of Policy and Research within the Office of the National Coordinator. In this regard, the department has fulfilled the first part of our recommendation.

While the various initiatives that HHS has undertaken are contributing to the development and implementation of an overall privacy approach, more work remains. In particular, the department has not defined a process for ensuring that all privacy principles and challenges will be fully and adequately addressed. This process would include, for example, steps for ensuring that all stakeholders’ contributions to defining privacy-related activities are appropriately considered and that individual inputs to the privacy framework will be effectively assessed and prioritized to achieve comprehensive coverage of all key privacy principles and challenges.

Such a process is important given the large number and variety of activities being undertaken and the many stakeholders contributing to the health IT initiatives. In particular, the contributing activities involve a wide variety of stakeholders, including federal, state, and private-sector entities. Further, certain privacy-related activities are relevant only to specific principles or challenges, and are generally not aimed at comprehensively addressing all privacy principles and challenges. For example, the certification and standards harmonization efforts primarily address the implementation of technical solutions for interoperable health IT and, therefore, are aimed at system-level security measures, such as data encryption and password protections, while the recommendations submitted by HHS’s advisory committees and state-level initiatives are primarily aimed at policy and legal issues.
Effectively assessing the contributions of individual activities could play an important role in determining how each activity contributes to the collective goal of ensuring comprehensive privacy protection. Additionally, the outcomes of the various activities may address privacy principles and challenges to varying degrees. For example, while a number of the activities address the uses and disclosures principle, HHS’s advisory committees have recommended that the department’s activities more extensively address the notice principle. Consequently, without defined steps for thoroughly assessing the contributions of the activities, some principles and challenges may be addressed extensively, while others may receive inadequate attention, leading to gaps in the coverage of the principles and challenges.

In discussing this matter with us, officials in the Office of the National Coordinator pointed to the various health IT initiatives as the approach the office is taking to manage privacy-related activities in a coordinated and integrated manner. For example, the officials stated that the purpose of the American Health Information Community’s use cases is to provide guidance and establish requirements for privacy protections that are intended to be implemented throughout the department’s health IT initiatives (including standards harmonization, electronic health records certification, and the nationwide health information network). Similarly, contracts for the nationwide health information network require participants to adopt approved health IT standards (defined by the Healthcare Information Technology Standards Panel) and, as mentioned earlier, to consider recommendations from the American Health Information Community and the National Committee on Vital and Health Statistics when conducting network trials, once these recommendations are accepted or adopted by the Secretary.

While these are important activities for addressing privacy, they do not constitute a defined process for assessing and prioritizing the many privacy-related initiatives and the needs of stakeholders to ensure that privacy issues and challenges will be addressed fully and adequately. Without a process that accomplishes this, HHS faces the risk that privacy protection measures may not be consistently and effectively built into health IT programs, thus jeopardizing patient privacy as well as the public confidence and trust that are essential to the success of a future nationwide health information network.

HHS and its Office of the National Coordinator for Health IT intend to address key privacy principles and challenges through integrating the privacy-related outcomes of the department’s health IT initiatives. Although it has established milestones and assigned responsibility for integrating these outcomes and for the development of a confidentiality, privacy, and security framework, the department has not fully implemented our recommendation for an overall privacy approach that is essential to ensuring that privacy principles and challenges are fully and adequately addressed. Unless HHS’s privacy approach includes a defined process for assessing and prioritizing the many privacy-related initiatives, the department may not be able to ensure that key privacy principles and challenges will be fully and adequately addressed.
Further, stakeholders may lack the overall policies and guidance needed to assist them in their efforts to ensure that privacy protection measures are consistently built into health IT programs and applications. As a result, the department may miss an opportunity to establish the high degree of public confidence and trust needed to help ensure the success of a nationwide health information network.

To ensure that key privacy principles and challenges are fully and adequately addressed, we recommend that the Secretary of Health and Human Services direct the National Coordinator for Health IT to include in the department’s overall privacy approach a process for assessing and prioritizing its many privacy-related initiatives and the needs of stakeholders.

HHS’s Assistant Secretary for Legislation provided written comments on a draft of this report. In the comments, the department generally agreed with the information discussed in our report; however, it neither agreed nor disagreed with our recommendation. HHS agreed that more work remains to be done in the department’s efforts to protect the privacy of electronic personal health information and stated that it is actively pursuing a two-phased process for assessing and prioritizing privacy-related initiatives intended to build public trust and confidence in health IT, particularly in electronic health information exchange. According to HHS, the process will include work with stakeholders to ensure that real-world privacy challenges are understood. In addition, the department stated that the process will assess the results and recommendations from the various health IT initiatives and measure progress toward addressing privacy-related milestones established by the health IT strategic plan. As we recommended, effective implementation of such a process could help ensure that the department’s overall privacy approach fully addresses key privacy principles and challenges. HHS also provided technical comments, which we have incorporated into the report as appropriate. The department’s written comments are reproduced in appendix II.

We are sending copies of this report to interested congressional committees and to the Secretary of HHS. Copies of this report will be made available at no charge on our Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-6304 or Linda Koontz at (202) 512-6240, or by e-mail at [email protected] or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix III.

Our objective was to provide an update on the department’s efforts to define and implement an overall privacy approach, as we recommended in an earlier report. Specifically, we recommended that the Secretary of Health and Human Services define and implement an overall approach for protecting health information that would (1) identify milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, (2) ensure that key privacy principles in the Health Insurance Portability and Accountability Act of 1996 (HIPAA) are fully addressed, and (3) address key challenges associated with the nationwide exchange of health information.
To determine the status of HHS’s efforts to develop an overall privacy approach, we analyzed the department’s federal health IT strategic plan and documents related to its planned confidentiality, privacy, and security framework. We also analyzed plans and documents that described activities of each of the health IT initiatives under the Office of the National Coordinator and identified those intended to (1) develop and implement mechanisms for addressing privacy principles and (2) develop recommendations for overcoming challenges to ensuring the privacy of patients’ information. Specifically, we assessed descriptions of the intended outcomes of the office’s health IT initiatives to determine the extent to which they related to the privacy principles and challenges identified by our prior work. To supplement our data collection and analysis, we conducted interviews with officials from the Office of the National Coordinator to discuss the department’s approaches and future plans for addressing the protection of personal health information within a nationwide health information network.

We conducted this performance audit at the Department of Health and Human Services in Washington, D.C., from April 2008 through September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to those named above, key contributors to this report were John A. de Ferrari, Assistant Director; Teresa F. Tucker, Assistant Director; Barbara Collier; Heather A. Collins; Susan S. Czachor; Amanda C. Gill; Nancy Glover; M. Saad Khan; Thomas E. Murphy; and Sylvia L. Shanks.
Although advances in information technology (IT) can improve the quality and other aspects of health care, the electronic storage and exchange of personal health information introduce risks to the privacy of that information. In January 2007, GAO reported on the status of efforts by the Department of Health and Human Services (HHS) to ensure the privacy of personal health information exchanged within a nationwide health information network. GAO recommended that HHS define and implement an overall privacy approach for protecting that information. For this report, GAO was asked to provide an update on HHS's efforts to address the January 2007 recommendation. To do so, GAO analyzed relevant HHS documents that described the department's privacy-related health IT activities.

Since GAO's January 2007 report on protecting the privacy of electronic personal health information, the department has taken steps to address the recommendation that it develop an overall privacy approach that included (1) identifying milestones and assigning responsibility for integrating the outcomes of its privacy-related initiatives, (2) ensuring that key privacy principles are fully addressed, and (3) addressing key challenges associated with the nationwide exchange of health information. In this regard, the department has fulfilled the first part of GAO's recommendation, and it has taken important steps in addressing the two other parts.

The HHS Office of the National Coordinator for Health IT has continued to develop and implement health IT initiatives related to nationwide health information exchange. These initiatives include activities that are intended to address key privacy principles and challenges. For example: (1) The Healthcare Information Technology Standards Panel defined standards for implementing security features in systems that process personal health information. (2) The Certification Commission for Healthcare Information Technology defined certification criteria that include privacy protections for both outpatient and inpatient electronic health records. (3) Initiatives aimed at the state level have convened stakeholders to identify and propose solutions for addressing challenges faced by health information exchange organizations in protecting the privacy of electronic health information.

In addition, the office has identified milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, as recommended. Further, the Secretary released a federal health IT strategic plan in June 2008 that includes privacy and security objectives along with strategies and target dates for achieving them.

Nevertheless, while these steps contribute to an overall privacy approach, they have fallen short of fully implementing GAO's recommendation. In particular, HHS's privacy approach does not include a defined process for assessing and prioritizing the many privacy-related initiatives to ensure that key privacy principles and challenges will be fully and adequately addressed. As a result, stakeholders may lack the overall policies and guidance needed to assist them in their efforts to ensure that privacy protection measures are consistently built into health IT programs and applications. Moreover, the department may miss an opportunity to establish the high degree of public confidence and trust needed to help ensure the success of a nationwide health information network.
As you know, Mr. Chairman, the decennial census is a critical national effort mandated by the Constitution. Census data are used to apportion seats in Congress, redraw congressional districts, allocate billions of dollars in federal assistance to state and local governments, and for numerous other public and private sector purposes. Importantly, the census is conducted against a backdrop of immutable deadlines. In order to meet legally mandated reporting requirements, census activities need to take place at specific times and in the proper sequence. For example, the group quarters validation operation, where census workers verify the location of group quarters, such as nursing homes and college dormitories, needs to be completed after the address canvassing operation. As a result, it is absolutely critical for the Bureau to stay on schedule. Figure 1 shows some dates for selected decennial events.

The Bureau estimates that the 2010 Census will cost more than $14 billion over its life cycle, making it the most expensive census in our nation’s history. According to the Bureau, the increasing cost of the census is caused in part by various societal trends—such as increasing privacy concerns, more non-English speakers, and people residing in makeshift and other nontraditional living arrangements—making it harder to find people and get them to participate in the census.

Automation and IT will play a critical role in the success of the 2010 Census by supporting data collection, analysis, and dissemination. According to the Bureau’s estimates, it is spending more than $3 billion on IT acquisitions for the census. The Bureau is relying on both the acquisition of new systems and the enhancement of existing legacy systems for conducting operations for the 2010 Census. These systems are to play important roles with regard to different aspects of the process, such as providing geographic information to establish where to count, capturing and integrating census responses, supporting field operations such as address canvassing, and tabulating and publicly disseminating census data.

Accurate cost estimates are essential to a successful census because they help ensure that the Bureau has adequate funds and that Congress, the administration, and the Bureau itself have reliable information on which to base decisions. However, as we have reported before, the Bureau has insufficient policies and procedures and inadequately trained staff for conducting high-quality cost estimation for the decennial census. The Bureau does not have cost estimation guidance and procedures in place or staff who are certified in cost estimation techniques. The Bureau is developing a new budget management tool that will support the cost estimation process beyond 2010. As part of that effort, the Bureau will need to establish rigorous cost estimation policies and procedures and use skilled estimators to ensure that future cost estimates are reliable and of high quality.

For example, to help manage the 2010 census and contain costs, over 5 years ago we recommended that the Bureau develop a comprehensive, integrated project plan for the 2010 Census that should include the itemized, estimated costs of each component and a sensitivity analysis and an explanation of significant changes in the assumptions on which these costs were based. In response, the Bureau provided us with the 2010 Census Operations and Systems Plan, dated August 2007.
This plan represented an important step forward by including operational inputs and outputs and describing linkages among operations and systems. However, that document did not include itemized cost estimates of each component or a sensitivity analysis, and thus did not provide a valid baseline or range of estimates for the Bureau and Congress. The Bureau has provided annual cost updates as part of its budget submission process, but these too have lacked cost analyses to support them. As the Bureau approaches the final surge in the current decade-long decennial spending cycle, providing reliable cost estimates accompanied by sound justification, as we have recommended, will be important if Congress is to make informed decisions about the levels at which to fund the remainder of the 2010 Census.

A complete and accurate list of all addresses where people live in the country is the cornerstone of a successful census because it identifies all households that are to receive a census questionnaire and serves as the control mechanism for following up with households that fail to respond. The Bureau goes to great lengths to develop a quality address list and maps, working with the U.S. Postal Service; federal agencies; state, local, and tribal governments; local planning organizations; the private sector; and nongovernmental entities. For example, under the Local Update of Census Addresses (LUCA) program, the Bureau is authorized to partner with state, local, and tribal governments, tapping into their knowledge of local populations and housing conditions in order to secure a more complete count. Between November 2007 and March 2008, over 8,000 state, local, and tribal governments provided approximately 8 million address updates through the LUCA program.

The Bureau will send thousands of temporary census workers, known as listers, into the field to collect and verify address information and update maps on-site, including verifying address updates provided through the LUCA program. Despite the Bureau’s efforts, an inherent challenge is locating unconventional and hidden housing units, such as converted basements and attics. For example, as shown in figure 2, what appears to be a small, single-family house could contain an apartment, as suggested by its two doorbells. The Bureau has trained listers to look for extra mailboxes, utility meters, and other signs of hidden housing units and is developing training guides for 2010 to help listers locate hidden housing. Nonetheless, decisions on what is a habitable dwelling are often difficult to make—what is habitable to one worker may seem uninhabitable to another. According to Bureau estimates, approximately 1.4 million housing units were missed in the 2000 Census. If an address is not in the Bureau’s address file, its residents are less likely to be included in the census.

A nationwide address canvassing operation for the 2010 Census is scheduled to begin this spring, when listers will use handheld computers for the first time to collect address data. Listers will add addresses that do not already appear on the Bureau’s list and mark for deletion any that they cannot verify according to the rules and guidance developed by the Bureau. When the handheld computers were tested during the dress rehearsal of the address canvassing operation, the devices experienced such problems as slow or inconsistent data transmission, freeze-ups, and difficulties collecting mapping coordinates.
The software that managers used to review worker productivity and assign work was also troublesome. For example, management reports were unreliable because they pulled data from incorrect sources, and Bureau staff had difficulty using the work management software to reassign work.

The Bureau took steps to fix these issues and, in December 2008, conducted a limited field test in Fayetteville, North Carolina, to test the functionality and usability of the handheld computer, including whether the handheld computer problems encountered earlier had been resolved. Although the Bureau’s final evaluation of the field test was due by the end of February 2009, we were not able to review it for this testimony. From our observations of the December 2008 field test and interviews with Bureau officials, the Bureau appears to have addressed many of the handheld computer performance issues, as well as the problems with the work management software, observed during the dress rehearsal. This is an important and noteworthy development.

Nonetheless, more information is needed to determine the Bureau’s overall readiness for address canvassing, as the field test was not an end-to-end systems evaluation, did not validate all address canvassing requirements, such as training and help desk support, and did not include urban areas. Additionally, the scale of the field test was a fraction of that of the address canvassing operation. The Bureau was to conduct a review of the readiness of the handheld computers in January 2009 but has not yet reported the results of that review.

Finally, the Bureau’s actual workload for address canvassing—about 144.7 million addresses—is 11 million addresses more than the Bureau had planned for, leaving the Bureau with too few handheld computers to complete the workload in the time originally scheduled. In response, the Bureau will extend the amount of time listers work in the field in affected areas, although not the end date of the operation, to compensate for the larger workload.

During the dress rehearsal, listers also experienced problems when collecting address data for large blocks having more than 1,000 housing units. According to the Bureau, the handheld computer did not have the capacity to efficiently collect data for large blocks. The Bureau has taken steps to mitigate this problem. Specifically, in January 2009, the Bureau began using laptop computers and software already used in other operations to canvass the 2,086 blocks it identified as large blocks, and by the end of February 2009, the Bureau had completed approximately 80 percent of large-block canvassing. In February 2009 we observed large-block canvassing in Atlanta, Georgia; Boston, Massachusetts; New York, New York; San Francisco, California; and Washington, D.C. From our preliminary observations, the laptops appear to work well, and listers reported their training was satisfactory. We are in the process of discussing these and other observations with the Bureau.

The Bureau’s largest and most costly field operation is nonresponse follow-up. The Bureau estimates that it will employ over 600,000 temporary workers to collect data from about 47 million nonresponding households over the course of 10 weeks in 2010. On April 3, 2008, the Bureau announced that it would no longer use handheld computers for nonresponse follow-up and would instead change to a paper-based nonresponse follow-up operation.
According to the Bureau, this change added between $2.2 billion and $3 billion to the total cost of the census. In May 2008, the Bureau issued a plan that covered major components of the paper-based nonresponse follow-up. Bureau officials said that they are developing a more detailed plan that would describe 2010 nonresponse follow-up operations and systems, workflow, major milestones, and roles and responsibilities of different census divisions. Although the plan was due in January 2009, it has yet to be completed. Because this plan serves as a road map for monitoring the development and implementation of nonresponse follow-up, it will be important for the Bureau to complete this plan.

The Bureau has changed plans for many aspects of nonresponse follow-up, and officials are determining which activities and interfaces will be tested and when that testing will occur. Although the Bureau has carried out a paper-based follow-up operation in past decennials, the 2010 Census includes new procedures and system interfaces that have not been tested under census-like conditions because they were dropped from the dress rehearsal. Bureau officials acknowledged the importance of testing new and modified nonresponse follow-up activities and system interfaces in order to reduce risk but have not yet developed detailed testing plans. Given the number of tasks at hand and the increasingly shorter time frame in which to accomplish them, it will be important for the Bureau to monitor the development of these testing plans, coordinate this testing with other activities, and ensure that testing occurs in time to take corrective actions, if needed.

In our previous work, we have highlighted the importance of sound risk management in planning for the decennial census. The Bureau has strengthened aspects of its risk management process. For example, in July 2008, the Bureau identified 31 nonresponse follow-up risks, such as lower than expected enumerator productivity. However, it has not developed mitigation plans for these risks. Officials said that they are reevaluating these risks and plan to develop mitigation plans for high- and medium-priority nonresponse follow-up risks starting in spring 2009. However, the Bureau has not yet determined when these plans will be completed.

One of the Bureau’s long-standing challenges is resolving conflicting information respondents provide on census forms. This problem can occur, for example, when the number of household members reported on a completed form differs from the number of persons for whom information is provided. In such instances, the Bureau attempts to reconcile the data during the coverage follow-up operation. For 2010, the Bureau plans to expand the scope of this operation and include two questions—known as coverage probes—on the census form to help identify households where someone may have been missed or counted incorrectly (see fig. 3). However, after testing the probes earlier in the decade, the Bureau found one of the probes was problematic in identifying persons potentially missing from the count. Although these probes were included on the forms mailed out during the dress rehearsal, the coverage follow-up operation did not include cases from nonresponse follow-up, which was canceled from the dress rehearsal. In the absence of a final test of the coverage probes in nonresponse follow-up, the effectiveness of the information generated by the probes is uncertain.
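Returning to the address canvassing workload discussed earlier, the effect of the 11 million-address increase on field time can be illustrated with simple arithmetic. This is a hypothetical back-of-the-envelope sketch: the planned duration and the assumption of constant per-week throughput are invented for illustration, and only the 144.7 million actual addresses and the 11 million increase come from the discussion above.

    # Back-of-the-envelope illustration of the address canvassing workload
    # increase. Only the 144.7 million actual addresses and the 11 million
    # increase come from the testimony; the planned duration is hypothetical.
    actual_addresses = 144.7e6
    planned_addresses = actual_addresses - 11e6  # about 133.7 million
    planned_weeks = 10.0                         # hypothetical planned duration

    # With a fixed fleet of handheld computers, weekly throughput is roughly
    # constant, so field time must grow in proportion to the workload.
    required_weeks = planned_weeks * actual_addresses / planned_addresses
    print(f"Roughly {required_weeks:.1f} weeks of lister time needed "
          f"vs. {planned_weeks:.0f} planned")  # about 10.8 weeks

This proportionality is why the Bureau could absorb the larger workload by extending lister field time in affected areas rather than adding devices or moving the operation's end date.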
A successful census depends, in large part, on the work carried out in the local census offices. For the 2010 Census, this field work cannot be accomplished without a properly functioning OCS. This system is intended to provide managers with essential, real-time information, such as worker productivity and completion rates for field operations. It also allows managers to assign and reassign cases among workers. If the system does not work as intended, it could bog down or delay field operations and introduce errors into the data collected.

Initially, the Bureau had planned to use a contractor to develop OCS to manage the workflow for those operations relying on paper-based processes, such as group quarters enumeration and nonresponse follow-up. However, in August 2008, the Bureau created an internal program to develop OCS and other related infrastructure that are needed to support these operations. The Bureau is still in the process of developing OCS for paper-based operations.

Although the Bureau has established a high-level schedule for testing OCS, it has not yet finalized the requirements needed to begin its programming or developed a detailed schedule for conducting additional tests. Further, the Bureau has not yet fully defined how OCS will work together with other systems. According to Bureau officials, the lack of detailed plans for operations, such as nonresponse follow-up, makes it difficult to finalize requirements for OCS or its testing plans. Our work on IT systems testing has shown that without adequate oversight and more comprehensive guidance, the Bureau cannot ensure that it is thoroughly testing its systems and properly prioritizing testing activities before the 2010 Census.

The Bureau estimates that it will need to produce approximately 30 million different map files from which 80 million paper maps will be printed to assist census workers in locating addresses in major census operations. The quality of maps and the timing of map printing are critical to the success of the census. In addition, many map production and printing activities must be conducted in sequence with no time to spare, putting at risk the Bureau’s ability to print its maps on time.

The Bureau has taken positive steps to meet its requirements for map production and printing for 2010. For example, in June 2008, the Bureau decided to produce a generic map type in lieu of several operation-specific versions to reduce the number of map files to be produced. Furthermore, the Bureau is preparing to print most of its maps at the local census offices rather than at the regional offices, reducing the need to coordinate map delivery to the local census offices. In addition, the Bureau has replaced its labor-intensive quality assurance process with integrated, automated processes. These steps to improve workflow will become particularly important as the Bureau works to produce and print maps on an already compressed schedule.

The Bureau’s schedule for producing and printing maps does not allow for any delays in receiving data from other operations or from the contractor delivering map files. For example, the Bureau intends to include map information from address canvassing, which ends in July 2009, in maps that will be used to validate locations of group quarters, which begins in September 2009. Bureau officials have stated that the turnaround time between these operations allows no slippage, and if these data are received late, an entire chain of subsequent map production steps would be thrown off schedule.
Furthermore, according to the Bureau, local census offices need to receive map files from the contractor in time to print maps for certain field operations by January 8, 2010. However, the contractor is not scheduled to finish delivering the map files until January 19, 2010. Bureau officials said that they have taken steps to ensure that the necessary map files are delivered in time for printing but are still working to resolve the discrepancy.

The Bureau goes to great lengths to reduce the undercount, especially for those groups likely to be undercounted at a higher rate than others, such as minorities and renters. For example, the Bureau plans to provide language assistance guides in 59 languages for completing the census, an increase from 49 languages in 2000. For the first time in 2010, the Bureau plans to send bilingual questionnaires to approximately 13 million households that are currently likely to need Spanish language assistance, as determined by analyzing recent data from a related Bureau survey program.

The Bureau also plans to deploy a multifaceted communications campaign consisting of, among other efforts, paid advertising and the hiring of as many as 680 partnership staff who will be tasked with reaching out to local governments, community groups, and other organizations in an effort to secure a more complete count. Overall, the Bureau estimates it will spend around $410 million on its communication efforts for the 2010 Census. However, in constant 2010 dollars, this amount is somewhat less than the approximately $480 million that the Bureau spent marketing the 2000 Census.

Although the effects of the Bureau’s communication efforts are difficult to measure, the Bureau reported some positive results from its 2000 Census marketing efforts with respect to raising awareness of the census. For example, four population groups—non-Hispanic Blacks, non-Hispanic Whites, Asians, and Native Hawaiians—indicated they were more likely to return the census form after the 2000 Census partnership and marketing program than before its onset. However, a Bureau evaluation demonstrated only a limited linkage between the partnership and marketing effort and improvements in actual census mail return behavior for these or other groups. Put another way, while the Bureau’s marketing activities might raise awareness of the census, a remaining challenge is converting that awareness into an actual response. Other marketing challenges include long-standing issues such as the nation’s linguistic diversity and privacy concerns, as well as a number of newly emerging concerns, such as local campaigns against illegal immigration and a post-September 11 environment that could heighten some groups’ fear of government agencies.

Since 2005, we have reported on weaknesses in the Bureau’s management of its IT acquisitions, and we remain concerned about the Bureau’s IT management and testing of key 2010 Census systems. For example, in October 2007, we reported on the status of and plans for key 2010 Census IT acquisitions and whether the Bureau was adequately managing associated risks. We found critical weaknesses in the Bureau’s risk management practices, including those associated with risk identification, mitigation, and oversight. We later presented multiple testimonies on the Bureau’s progress in addressing significant risks facing the 2010 Census.
In particular, the Field Data Collection Automation (FDCA) program, which includes the development of handheld computers for the address canvassing operation and the systems, equipment, and infrastructure that field staff will use to collect data, has experienced significant problems. For example, in March 2008, we testified that the FDCA program was experiencing schedule delays and cost increases, and was contributing significant risk to the 2010 Census. At that time, we highlighted our previous recommendations to better manage FDCA and the other IT acquisitions.

In response to our findings and recommendations, the Bureau has taken several steps to improve its management of IT for the 2010 Census. For example, the Bureau has sought external assessments of its activities from independent research organizations, implemented a new management structure and management processes and brought in experienced personnel to key positions, and improved several reporting processes and metrics. In part due to our review of the FDCA program, the Bureau requested a revised cost proposal for the program, which resulted in a cost reduction of about $318 million for its remaining 5-year life cycle.

As we have previously reported, operational testing planned during the census dress rehearsal would take place without the full complement of systems and functionality that was originally planned, and it was unclear whether the Bureau was developing plans to test all interrelated systems and functionality. At your request, we reviewed the status of and plans for testing of key 2010 Census systems. As stated in our report, which we are releasing today, we found that the Bureau has made progress in conducting systems, integration, and end-to-end testing, but critical testing still remains to be performed before systems will be ready to support the 2010 Census, and the planning, execution, and monitoring of its testing needs much improvement. We are making 10 recommendations for strengthening the Bureau’s testing of 2010 Census systems. Those recommendations address improvements needed in test planning, management, and monitoring. In response to our report, the Department of Commerce and the Bureau stated they had no significant disagreements with our recommendations.

In summary, little more than a year remains until Census Day. At a time when major testing should be completed and there should be confidence in the functionality of key operations, the Bureau instead finds itself managing late design changes and developing testing plans. The Bureau has taken some important steps toward mitigating some of the challenges that it has faced to date, yet much remains uncertain, and the risks to a successful decennial census remain. Addressing these risks and challenges will be critical to the timely completion of a cost-effective census, and it will be essential for the Bureau to develop plans for testing systems and procedures not included in the dress rehearsal, and for Congress to monitor the Bureau’s progress. As always, we look forward to working with Congress in assessing the Bureau’s efforts to overcome these hurdles to a successful census and providing regular updates on the rollout of the decennial in the critical months that lie ahead.

Mr. Chairman and members of the Subcommittee, this concludes our statement. We would be happy to respond to any questions that you or members of the Subcommittee may have at this time.
If you have any questions on matters discussed in this testimony, please contact Robert Goldenkoff at (202) 512-2757 or David A. Powner at (202) 512-9286 or by e-mail at [email protected] or [email protected]. Other key contributors to this testimony include Sher’rie Bacon, Thomas Beall, Steven Berke, Vijay D’Souza, Elizabeth Fan, Richard Hung, Andrea Levine, Signora May, Ty Mitchell, Catherine Myrick, Lisa Pearson, Kathleen Padulchick, Crystal Robinson, Melissa Schermerhorn, Cynthia Scott, Karl Seifert, Jonathan Ticehurst, Timothy Wexler, and Katherine Wulff.

High Risk: An Update. GAO-09-271. Washington, D.C.: January 2009.

2010 Census: The Bureau’s Plans for Reducing the Undercount Show Promise, but Key Uncertainties Remain. GAO-08-1167T. Washington, D.C.: September 23, 2008.

2010 Census: Census Bureau’s Decision to Continue with Handheld Computers for Address Canvassing Makes Planning and Testing Critical. GAO-08-936. Washington, D.C.: July 31, 2008.

2010 Census: Census Bureau Should Take Action to Improve the Credibility and Accuracy of Its Cost Estimate for the Decennial Census. GAO-08-554. Washington, D.C.: June 16, 2008.

2010 Census: Plans for Decennial Census Operations and Technology Have Progressed, But Much Uncertainty Remains. GAO-08-886T. Washington, D.C.: June 11, 2008.

2010 Census: Bureau Needs to Specify How It Will Assess Coverage Follow-up Techniques and When It Will Produce Coverage Measurement Results. GAO-08-414. Washington, D.C.: April 15, 2008.

2010 Census: Census at Critical Juncture for Implementing Risk Reduction Strategies. GAO-08-659T. Washington, D.C.: April 9, 2008.

Information Technology: Significant Problems of Critical Automation Program Contribute to Risks Facing 2010 Census. GAO-08-550T. Washington, D.C.: March 5, 2008.

Information Technology: Census Bureau Needs to Improve Its Risk Management of Decennial Systems. GAO-08-259T. Washington, D.C.: December 11, 2007.

2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces data used to apportion congressional seats, redraw congressional districts, and allocate billions of dollars in federal assistance. In March 2008, GAO designated the 2010 Census a high-risk area in part because of problems with the performance of handheld computers used to collect data. The U.S. Census Bureau has since strengthened its risk management efforts and made other improvements; however, the Bureau curtailed a dress rehearsal scheduled for 2008 and was unable to test key operations under census-like conditions.

This testimony discusses the Bureau's readiness for 2010 and covers: (1) the importance of reliable cost estimates; (2) building a complete and accurate address list; (3) following up on missing and conflicting responses to ensure accuracy; (4) targeting outreach to undercounted populations; and (5) designing, testing, and implementing technology for the census. The testimony is based on previously issued and ongoing GAO work.

The Bureau estimates the 2010 Census will cost more than $14 billion over its life cycle, making it the most expensive census in the nation's history, even after adjusting for inflation. Accurate cost estimates help ensure that the Bureau has adequate funds, and that Congress, the administration, and the Bureau itself have reliable information on which to base advice and decisions. However, as GAO has reported before, the Bureau has insufficient policies and procedures and inadequately trained staff for conducting high-quality cost estimation for the decennial census.

A successful census requires a complete and accurate address list. The Bureau sends thousands of census workers (listers) into the field to collect and verify address information, and this year for the first time, listers will use handheld computers to collect data. During the dress rehearsal, there were significant technical problems. A small-scale field test showed that these problems appear to have been addressed; however, the test was not carried out under full census-like conditions and did not validate all address canvassing requirements.

Nonresponse follow-up, the Bureau's largest and most costly field operation, was initially planned to be conducted using the handheld computers, but was recently changed to a paper-based system due to technology issues. The Bureau has not yet developed a detailed road map for monitoring the development and implementation of nonresponse follow-up under the new design. Such a plan is essential to conducting a successful nonresponse follow-up. Furthermore, the system that manages the flow of work in field offices is not yet developed. Lacking plans for the development of both nonresponse follow-up and this management system, the Bureau faces the risk of not having them developed and fully tested in time for the 2010 Census.

In an effort to reduce the undercount, the Bureau is implementing a program of paid advertising integrated with other communications strategies, such as partnerships with state, local, and tribal governments and community organizations. Moving toward 2010, the Bureau faces long-standing challenges with the nation's linguistic diversity and privacy concerns, which can contribute to the undercounting of some groups.

Since 2005, GAO has reported concerns with the Bureau's management and testing of key IT systems. GAO has reviewed the status of and plans for the testing of key 2010 Census systems.
The Bureau has made progress in conducting systems, integration, and end-to-end testing, but critical testing still remains to be performed before systems will be ready to support the 2010 Census, and the planning for the testing needs much improvement. In short, while the Bureau has made some noteworthy progress in gearing up for the enumeration, with just over a year remaining until Census Day, uncertainties surround the Bureau's overall readiness for 2010.
Congress established the HBCU Capital Financing Program in 1992 under Title III, Part D, of the Higher Education Act of 1965, as amended, to provide HBCUs with access to low-cost capital to help them continue and expand their educational missions. (See app. II for locations of HBCUs eligible to participate in the program.) Program funds, raised through bonds issued by the DBA and purchased by the FFB, are lent to eligible schools with qualified capital projects. Loan proceeds may be used for, among other things, repairing, renovating, or, in exceptional circumstances, constructing and acquiring new instructional or residential facilities, equipment, or research instrumentation. Additionally, schools are able to refinance prior capital loans. Education guarantees loan repayment. Although Education administers the program, the DBA is responsible for many of the program's operations and is subject to departmental oversight. Specifically, the DBA works with prospective borrowers to develop loan applications and monitors and enforces loan agreements.

The loan process consists of multiple steps. HBCUs interested in obtaining funds through the program must first complete a preliminary application that includes information such as enrollment, some financial data (including a description of existing debt), and proposed capital projects. On the basis of this information, the DBA determines whether the school should formally complete an application, which includes more detailed financial information, such as audited financial statements and various campus plans and assessments. To be approved for a loan, an HBCU must satisfy certain credit criteria and have qualified projects. Once the DBA determines a school's eligibility status, a memorandum is sent to Education for final approval. When approved, the loan goes through a closing process during which certain terms and conditions may be negotiated. Table 1 describes key loan terms and conditions to which schools are subject.

The Federal Credit Reform Act of 1990 (FCRA), along with guidance issued by OMB and accounting standards, provides the framework agencies are to use in calculating the federal budget costs of federal credit programs, such as the HBCU Capital Financing Program. Credit reform rests on two principles: measuring the subsidy cost of credit assistance and requiring that budget authority to cover these costs be provided in advance, before new loan obligations are incurred. OMB is responsible for coordinating the estimation of subsidy costs. Subsidy costs are determined by calculating the net present value of estimated cash flows to and from the government that result from providing loans and loan guarantees to borrowers. (Guaranteed loans that are financed by the FFB are treated as direct loans for budgetary purposes, in accordance with FCRA.) Cash flows for direct loans include, for example, loan disbursements to borrowers and borrower repayments of principal and payments of interest to the government. Estimated cash flows are adjusted to reflect the risks associated with potential borrower delinquencies and defaults and estimates of amounts collected on defaulted loans. Subsidy costs can be positive or negative: if the net present value of cash outflows exceeds the net present value of cash inflows, the government incurs a positive subsidy cost; on the other hand, the government realizes a gain in revenue if there is a negative subsidy.
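To make the subsidy computation concrete, the sketch below (in Python) works through the net-present-value arithmetic for a single hypothetical direct loan. The loan size, rates, and loss assumption are illustrative inventions, not program figures, and actual estimates follow OMB guidance.

```python
# Minimal sketch of a credit-reform-style subsidy cost estimate for one
# direct loan. All figures are hypothetical illustrations.

def level_payment(principal, rate, years):
    """Level annual payment that fully amortizes `principal` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 10_000_000   # year-0 disbursement to the borrower (outflow)
loan_rate = 0.050        # rate charged to the borrower (assumed)
discount_rate = 0.045    # government discount rate (assumed)
loss_share = 0.03        # expected loss from delinquency/default, net of recoveries

payment = level_payment(principal, loan_rate, years=30)
inflows = [payment * (1 - loss_share)] * 30   # risk-adjusted repayments

pv_inflows = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(inflows, 1))
subsidy_cost = principal - pv_inflows   # positive = cost; negative = gain
print(f"Estimated subsidy cost: ${subsidy_cost:,.0f}")
```

With these particular assumptions the result is slightly negative, that is, a gain to the government, which is the condition the appropriations language described next effectively requires.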
Since the program was established, appropriations legislation has, in general, limited the subsidy costs of the program to be no greater than zero. In addition, the legislation authorizing the program established a credit authority limit of $375 million; of this amount, private HBCUs are collectively limited to borrowing $250 million, and public HBCUs are collectively limited to borrowing $125 million.

Over a period of 2 months in 2005, three hurricanes struck the Gulf Coast region of the United States, resulting in more than $118 billion in estimated property damage. Two of these hurricanes, Katrina and Rita, struck New Orleans and surrounding areas within a month of each other, causing significant damage to several institutions of higher education in the region, including the campuses of several HBCUs: Dillard University, Southern University at New Orleans, and Xavier University, in Louisiana, and Tougaloo College, in Mississippi. (See app. III for locations of the 8 hurricane-affected HBCUs.)

In June 2006, Congress passed the Emergency Act, which, among other things, amended the HBCU Capital Financing Program to assist hurricane-affected HBCUs in their recovery efforts. To be eligible, a school must be located in an area affected by a Gulf Coast hurricane disaster and demonstrate that it (1) incurred physical damage resulting from Hurricane Katrina or Rita; (2) has pursued other sources of compensation from insurance, the Federal Emergency Management Agency (FEMA), or the Small Business Administration (SBA), as appropriate; and (3) has not been able to fully reopen in existing facilities or to the levels that existed before the hurricanes because of physical damage to the institution. Key provisions include a lowered interest rate and cost of issuance (both set at 1 percent or less), elimination of the escrow, and deferment of principal and interest payments from program participants for a 3-year period. The Emergency Act also provides the Secretary of Education with authority to waive or modify any statutory or regulatory provisions related to the program in connection with a Gulf Coast hurricane disaster.

FEMA assists states and local governments with the costs associated with disaster response and recovery efforts that exceed a state or locale's capabilities. Grants are also provided to eligible postsecondary educational institutions to help them recover from the disaster. Some institutions of higher education are subsequently provided with referrals to SBA when seeking assistance from FEMA. For private, nonprofit institutions, SBA's disaster loans are designed to be a primary form of federal assistance. Unlike their public counterparts, private colleges must apply for these low-interest, long-term disaster loans prior to seeking assistance from FEMA. Schools may apply for SBA loans, but the aggregate loan amount cannot exceed $1.5 million. In general, loan terms include a repayment period of up to 30 years and interest rates of at least 4 percent.

HBCU officials we interviewed reported extensive and diverse capital project needs, including construction and renovation of facilities and addressing deferred maintenance, yet just over half of the available program loan capital has been borrowed. While HBCU capital project needs are not well documented by national studies, the schools themselves have individually identified and documented them.
Despite reported needs, only about a quarter of HBCUs have taken steps to participate in the program, and about half of these HBCUs became borrowers. Education has collected and reported limited information on the program's utilization and has not established performance measures or goals to gauge program effectiveness, though Education officials noted that they are currently working on developing such measures and goals.

There are few national studies that document the capital project needs of HBCUs, and they do not provide a current and comprehensive national picture. The four that we identified and reviewed are dated, narrowly scoped, or had limited participation. Specifically, the studies are between 6 and 17 years old, and two focused only on specific types of need: renovation of historic properties and campus wiring for computer networks. One study that addressed a broader range of needs and was among the most recent had a low response rate of 37 percent.

Despite the lack of national studies, schools that we interviewed reported extensive, diverse, and ongoing capital project needs. School officials reported that they routinely conduct facility assessments as part of their ongoing strategic planning and that these assessments help determine the institutions' short- and long-term capital needs. They said that capital projects, including the construction of new dormitories, renovation of aging or historic facilities, repair of infrastructure, and addressing long-standing deferred maintenance, are needed for a variety of reasons. New facilities such as dormitories and student centers are often needed as a result of enrollment growth, for example, while modernization of existing facilities is needed to accommodate technological advances. For example, Tuskegee University renovated an existing facility to house its hospitality management program, creating modern meeting facilities along with a full-service hotel, which provides students with a real-world laboratory in which they gain immediate hands-on experience (see fig. 1).

In addition, many of the school officials we interviewed reported that their schools had particularly old facilities, many of which are listed in the National Register of Historic Places. Some school officials cited their need to repair or replace campus infrastructure. For example, some schools reported needing to replace leaking underground water pipes, while others reported the need to replace 100-year-old water and gas pipes. Many of the school officials we interviewed reported having deferred maintenance projects, some for over 15 years, and officials from 3 schools estimated their schools' deferred maintenance to exceed $50 million. For some schools, the deferred maintenance is substantial in light of existing resources, according to HBCU officials. These types of capital projects are essential to ensuring student safety and preserving assets that directly affect schools' ability to attract, educate, and retain students.

Over the life of the program, approximately 14 percent of HBCUs have borrowed just over half of the available funds despite the substantial needs reported by schools. Specifically, 23 HBCUs, according to Education, have taken steps to participate in the program, and 14 became borrowers, with loans totaling just over $200 million—below the program's $375 million total limit.
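As a rough cross-check of these participation figures, the sketch below assumes roughly 100 HBCUs nationwide (the approximate total cited in this report's summary; the exact denominator is not stated):

```python
# Cross-check of the participation figures above. The ~100 HBCU total is
# an approximation; dollar amounts are in millions, rounded as reported.
TOTAL_HBCUS = 100
took_steps, borrowers = 23, 14
loans_to_date, credit_limit = 200, 375

print(f"{took_steps / TOTAL_HBCUS:.0%} took steps to participate")     # ~ a quarter
print(f"{borrowers / TOTAL_HBCUS:.0%} became borrowers")               # ~14 percent
print(f"{loans_to_date / credit_limit:.0%} of credit authority used")  # just over half
```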
About 20 percent of the eligible private institutions have borrowed a little more than half of the $250 million allotted for private schools, and less than 8 percent of public institutions have borrowed less than two-thirds of the $125 million allotted for public schools. To date, loan participants have all been 4-year institutions. Taking into account loan repayments, the total amount of outstanding loans was about $168 million as of August 2006, leaving about $207 million available for loans (about $66 million for public schools and about $141 million for private schools). Table 2 shows the participants and the amounts of their loans. Regarding other schools that took steps to participate in the program but did not become borrowers, 6 schools were reported to have withdrawn their applications, and 6 others had applications pending. To date, only one school has been denied a loan.

Education has collected and reported limited information concerning HBCUs' capital financing needs and the schools' utilization of the program. Education officials said that, beginning in 2005, to understand schools' financing needs and whether the program could assist schools, the DBA engaged in an outreach effort through which it identified 15 schools that might be candidates for the program. Over the history of the program, Education has collected some information to track program utilization, including the number of inquiries and applications received and the loan volume requested, approved, and awarded. However, Education has not widely reported such data. Education has provided certain elements of its program utilization data to Congress' appropriations committees through its annual budget justification documents. Table 3 shows the data collected by Education to track program utilization.

Education officials noted that while the data they collect are useful to indicate the extent to which schools have used or accessed the program, they are inadequate to address questions concerning whether the program is under- or overutilized or to demonstrate program effectiveness. These officials noted that they believe program performance measures would be useful but that developing such measures is particularly challenging for a credit program like the HBCU Capital Financing Program. This is so in part because participation in a loan program depends on complex factors, such as schools' funding needs, the availability of other sources of financing, and schools' desire and capacity to assume debt. Program officials cautioned against setting firm program participation goals, for example, because they would not want Education to be perceived as "pushing" debt onto schools that either do not want to, or should not, assume loan obligations before their circumstances warrant doing so. Another complicating factor program officials cited was the small number of potential program beneficiaries.

One Education official noted that Education has established performance goals and measures for its student grant and loan aid programs, which are based on sophisticated survey mechanisms designed to measure customer (students, parents, and schools) satisfaction with the department's aid application, receipt, and accounting processes. Because the scope of the student aid programs is large, encompassing millions of students and parents and thousands of schools, it is reasonable to develop and use such measures, the official noted.
In contrast, such measures may not be meaningful given the small number of HBCUs and the infrequency with which loans are made under the Capital Financing Program. Nevertheless, these officials told us that they believe program performance measures would be useful to gauge program effectiveness. They have established a working group to develop performance measures for the program and were consulting with OMB and other federal officials with expertise on federal credit programs to guide their efforts. The officials noted that they do not have any firm schedule for completing the development of program performance measures.

The HBCU loan program provides access to low-cost capital financing and flexibilities not always available elsewhere, but some loan terms and conditions discourage participation, though school officials said they remain interested in the program. The low interest rate and long repayment period were regarded favorably by participants and nonparticipants alike, and the program makes funds available for a broader range of needs than some federal grant programs. However, the pooled escrow arrangement, monthly repayment terms, and the extent to which some loans have been collateralized could discourage participation.

The HBCU Capital Financing Program provides lower-cost financing and longer loan maturities and may be used for a broader range of capital projects by a greater number of schools than other funding sources, according to HBCU officials. Some officials noted that the program offers loans with lower interest rates than traditional bank loans. Moreover, the program's interest rates are typically less than the interest rates schools would be required to pay investors if they issued their own bonds to raise funds. According to school officials and bond industry experts, some HBCUs could obtain, and some have obtained, lower interest rates than those offered under the program by issuing their own tax-exempt bonds. However, this is predicated on a school's ability to obtain a strong credit rating from a credit rating agency. Schools with weaker or noninvestment-grade credit ratings would likely have to pay investors higher interest rates. In addition, schools issuing taxable bonds would likely pay higher interest rates to investors, compared with the program's interest rates, regardless of the schools' credit ratings. While schools can lower the interest rates paid to bond investors by purchasing bond insurance, the cost to do so may be prohibitive. For these reasons, officials at Education and HBCUs, as well as bond industry experts, told us that the HBCU Capital Financing Program may be ideally suited for schools that have or would receive a noninvestment-grade rating.

Participation in the program may also benefit schools by enhancing their ability to issue their own bonds in the future. An official at one HBCU, for example, told us that obtaining and repaying a loan under the program had allowed the school to demonstrate its fiscal stability and to subsequently issue its own bond with a lower interest rate than was then being offered under the program.

In addition to citing lower interest rates, a large majority of the HBCU officials we spoke with said that the program's 30-year loan repayment period was attractive, and some noted that private funding sources would likely offer 20 years or less. Some school officials noted that the longer repayment period allowed schools to borrow more or reduce the amount of their monthly payments, a tradeoff the sketch below illustrates.
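The sketch uses a hypothetical $5 million loan at a hypothetical 5 percent rate (neither figure comes from the program) to show how a 30-year maturity lowers payments, or stretches borrowing capacity, relative to a 20-year maturity:

```python
# How a 30-year maturity lowers payments, or raises borrowing capacity,
# relative to a 20-year maturity. Loan size and rate are hypothetical.

def monthly_payment(principal, annual_rate, years):
    """Level monthly payment on a fully amortizing fixed-rate loan."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal, rate = 5_000_000, 0.05
p20 = monthly_payment(principal, rate, 20)
p30 = monthly_payment(principal, rate, 30)
print(f"20-year: ${p20:,.0f}/month; 30-year: ${p30:,.0f}/month")

# Principal serviceable over 30 years with the 20-year payment budget:
affordable = p20 / monthly_payment(1, rate, 30)
print(f"Same budget supports ${affordable:,.0f} of principal at 30 years")
```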
Borrowing larger amounts, officials reported, allowed them to finance larger or more capital projects. Officials at another school, which had once considered using the program, said that even though the school was able to issue a tax-exempt bond and obtain a more favorable interest rate, it could obtain only a 20-year maturity for the bond.

Some HBCU officials told us they preferred grants to loans but noted that, in general, compared with other federal grant programs, more HBCUs are eligible for the HBCU loan program and it funds a wider variety of projects. Grants are available for most HBCUs under the Higher Education Act's strengthening institutions programs, also administered by Education, which fund capital projects as well as other activities, such as faculty and academic program development. However, fewer HBCUs are eligible for other federal grant programs that provide funding for capital projects. For example, the Department of Agriculture's 1890 Facilities Grant Program is only for the 18 HBCUs that are land grant institutions. Similarly, the Department of Health and Human Services' facilities improvement program provides funding only for those HBCUs with a biomedical and behavioral research program. While charitable foundations and state and local governments offer a variety of other assistance programs, available funding is limited.

HBCU officials we spoke with, participants and nonparticipants alike, reported that a disincentive to participation in the program was the pooled escrow; other terms and conditions, such as the monthly repayment schedule and the extent to which loans are collateralized, were also viewed by some as deterrents. Over half of the HBCU respondents we spoke with, both participants and nonparticipants, agreed that the pooled escrow was a drawback, and over one-fifth said that it actually deters participation. The escrow funds, which reduce the federal budget cost of the program by offsetting the estimated costs associated with delinquent loan repayments and borrower defaults, net of collections, are returned to program participants if no such losses occur. However, a recent default by one borrower, the first in the program's history, has heightened awareness among program participants of the financial risk the pooled escrow arrangement poses for them. Since the default, Education has withdrawn funds from participating schools' escrow accounts twice and will continue doing so until the default is resolved, leaving other schools uncertain as to how much of their own escrow accounts will remain or be replenished.

The pooled escrow feature also presents a problem for state institutions because they are prohibited from assuming the liability of another institution. One program official said that this issue was common for state schools because state law prohibits the lending of public funds to nonstate entities, which is considered to be the case when state funds in escrow are used to hedge against the delinquency of another institution. One participating public HBCU reported that it had to resolve this problem by accounting for its escrow payments as a fee that would not be returned to the school rather than a sum that could be recovered, as the program intends. Because the escrow feature is mandated by law, any changes to this arrangement would require congressional authorization.
Additionally, in order to maintain the federal subsidy cost of the program at or below zero, other alternatives, such as assessing additional fees on borrowers or requiring contributions to an alternative form of contingency reserve, would be necessary in the absence of the pooled escrow arrangement.

While frequency of payments is not as prevalent a concern as the pooled escrow, some schools objected to the program's requirement that repayments be made monthly as opposed to semiannually, as is common in the private market. Schools participating in the HBCU program have been required by the DBA to make payments monthly, although FFB lending policy is to require repayments only on a semiannual basis. Although participants have met the terms of an extensive credit evaluation process, DBA officials expressed the view that the monthly repayment requirement promotes good financial stewardship on the part of the schools. However, some HBCU officials said that they incur opportunity costs in making payments on a monthly rather than a semiannual basis; the sketch following this passage gives a rough sense of the amounts involved. They also noted that it would be more practical if payments were to coincide with the beginning of their semesters, when their cash flows are typically more robust.

Additionally, almost half of the participating schools expressed concern about the amount of collateral they had to pledge in order to obtain a loan. In most cases, program participants have pledged certain real property as collateral, though endowment funds and anticipated tuition revenue are also allowed as collateral. Some HBCU officials said their loans were overcollateralized in that the value of the real estate pledged as security exceeded the value of the loan. They noted that such circumstances can present a problem for schools trying to obtain additional capital financing without sufficient assets remaining available as collateral. One nonparticipant cited the collateral required of other institutions as a reason for its decision not to participate. When asked about this issue, Education and DBA officials reported that the extent and amount of collateral required to obtain a loan under the program vary depending on the individual circumstances of an institution. The amount of collateral required may be less for institutions that have maintained relatively large endowments and stable tuition revenue and more for institutions that have few or no physical properties to use as collateral, for example. Education officials further noted that requiring the value of collateral to be greater than the value of the loan is not an uncommon business practice.

Overall, more than two-thirds of the participant schools and more than a third of the nonparticipants said they are interested in using the program, but some said that their continued or future interest would depend on its being modified. Several schools suggested that the types of projects eligible for funding could be broadened, which might allow them to undertake capital projects that would, in turn, assist them in attracting and retaining additional students. Campus beautification projects and multipurpose community centers were cited as examples. In addition, they regarded new construction, for which program loans are available only under exceptional circumstances, as particularly important because new construction attracts more students and because renovations often incur unexpected costs.
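Returning to the repayment-frequency concern, the back-of-the-envelope sketch below puts a rough number on the opportunity cost officials described; the debt service amount and the yield on a school's cash are hypothetical figures of ours:

```python
# Opportunity cost of paying monthly instead of semiannually: each
# monthly installment stops earning short-term interest earlier than it
# would if held until the semiannual date. Figures are hypothetical.
semiannual_debt_service = 600_000   # total due every 6 months
cash_yield = 0.04                   # annual yield on the school's cash

monthly = semiannual_debt_service / 6
# Installment i (i = 1..6) leaves the account (6 - i) months early.
foregone = sum(monthly * cash_yield * (6 - i) / 12 for i in range(1, 7))
print(f"Foregone interest per half-year: ${foregone:,.0f}")   # $5,000 here
```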
Nevertheless, many public HBCU officials we spoke with said that in view of their states' continuing fiscal constraints, they expect to consider the loan program as a future funding resource.

While Education has taken limited steps to improve the program, we found significant weaknesses in management controls that compromise the extent to which Education can ensure program objectives are being achieved effectively and efficiently. Education has recently provided schools the choice of fixed or variable interest rates, allowed for larger loan amounts, and afforded more opportunity for schools to negotiate loan terms, changes that appealed to schools. In addition, Education has attempted to increase awareness of the program among HBCU officials through increased marketing of the program by the DBA. Still, we found significant weaknesses in its management control with respect to its communications with HBCUs, compliance with program and financial reporting laws and guidance, and monitoring of its DBA.

Since 2001, Education has taken some steps to improve the program, in some cases by allowing greater negotiation of certain loan terms and conditions. Department officials said that changes to the program were necessary to remain competitive with other programs and the private market. These flexible terms included a variable interest rate option and the opportunity to negotiate the amount of additional debt that a school can subsequently assume through other financing arrangements. In fact, since 2003, 4 of the 7 schools that have received loans have taken advantage of the variable interest rate. Regarding the department's monitoring of their debt, officials at another school said that they were able to negotiate with the DBA the amount of additional debt they could assume—from $500,000 to $1 million—before they would have to notify the department. School officials said this change was important because it not only reduced their administrative burden but also gave them additional leeway to pursue other capital financing.

The program has also made greater use, since 2003, of loans for the sole purpose of refinancing existing debt. Two participants reported estimated savings of at least $3.7 million from refinancing under the program. According to department officials, Education has also made greater use of the Secretary's authority to originate loans exceeding $10 million and to make multiple loans to an institution, providing schools with more purchasing power. Program officials said that while the limit on the amount and number of loans that could be made was intended to prevent disproportionate use of the loan fund by larger and more affluent schools, it no longer reflected the reality of current costs for construction and renovation or the budgetary constraints facing many states.

Additionally, program officials we spoke with said they had enhanced program marketing. For example, the DBA has developed a Web site describing the program and offering answers to frequently asked questions. In addition, officials reported attending the national and regional conferences for college executives shown in table 4, completing over 60 campus site visits, and contacting other school officials by telephone. Program officials also reported that most schools received written correspondence or an e-mail to inform them of the program. Through these efforts, all HBCUs were contacted in 2005, according to DBA officials.
They also said they timed these outreach efforts to correspond with schools' annual budgetary and enrollment processes in order to prompt schools to think about potential capital projects that could fit the program. DBA officials said that their marketing approach for fiscal year 2006 would be the same as in the previous year.

While Education has taken some steps to improve the HBCU loan program, we found significant weaknesses in its management control of the program with respect to its (1) communications with HBCUs, (2) compliance with program and financial reporting laws and guidance, and (3) monitoring of its DBA, as described below.

Many HBCU officials we interviewed reported a lack of clear, timely, and useful information from Education and the DBA at various stages of the loan process and said the need to pursue such information themselves had sometimes led to delays. While program materials represent the loan application as a 2- to 3-month process, about two-thirds of the loans made since January 2001 were not closed until 7 to 18 months after application. Officials from one school said that it had taken 6 to 7 months for the DBA to relay a clarification from Education as to whether the school's proposed project was eligible. Other schools reported that Education had not provided timely or clear information about the status of their loans. In some cases, schools reported that the lengthy loan process resulted in project delays and cost increases. An official from one school told us that it remained unclear to him why his school was denied a loan. Education officials acknowledged that the loan process was lengthy for some borrowers and said the DBA had attempted to work with these borrowers to address problems with applications. School officials told us that in some cases the loan process could have been expedited had Education and the DBA made use of previous borrowers' experiences to apprise them of problems that could affect their own applications—such as the fact that title searches can be especially time-consuming and problematic for private HBCUs, some of which did not receive all property deeds from their founders when they were established in the 1800s.

With regard to making loan payments, several officials we interviewed said that the DBA had not provided sufficiently detailed information. Officials from one school reported that the school's auditors had questioned the accuracy of the loan payment amount billed by the DBA because the billing statements omitted information concerning the extent to which the amount billed included escrow payments. Other officials noted that they had not received written notification from the DBA concerning the full amount of their potential liability after funds had been withdrawn from the schools' escrow accounts to cover payments on behalf of another borrower that had recently defaulted on a loan.

Education has not complied with certain statutory requirements relating to the program's operations and to how federal agencies are to account for the government's cost of federal loan programs.
In creating the program, Congress established within the Department of Education an HBCU Capital Financing Advisory Board composed of (1) the Secretary of Education or the Secretary's designee; (2) three members who are presidents of private HBCUs; (3) two members who are presidents of public HBCUs; (4) the President of the United Negro College Fund or his or her designee; (5) the President of the National Association for Equal Opportunity in Higher Education or his or her designee; and (6) the Executive Director of the White House Initiative on HBCUs. By law, the Advisory Board is to provide advice and counsel to the Secretary of Education and the DBA concerning the capital financing needs of HBCUs, how these needs can be met through the program, and what additional steps might be taken to improve the program. To carry out its mission, the law requires that the board meet with the Secretary of Education at least twice each year. Despite this requirement, the board has met only three times in the past 12 years, most recently in May 2005. According to Education officials, the Advisory Board did not routinely meet because of turnover among Education staff as well as among the HBCU presidents designated to serve on the board. Education officials told us that there could have been other reasons why the Advisory Board did not meet in earlier years, but they were not aware of any. Although Education officials told us that they had believed another Advisory Board meeting would be convened soon after the May 2005 meeting, no such meeting has yet been scheduled.

We also found that Education has not fully complied with requirements of the Federal Credit Reform Act of 1990, which, along with guidance issued by OMB and accounting standards, provides the framework that Education is to use in calculating the federal budget costs of the program. In particular, Education has excluded certain fees paid by HBCUs from its calculations of program costs. The interest payments made by HBCUs on program loans include a surcharge of 1/8th of 1 percent assessed by the FFB in accordance with its policy and as permitted by statutory provisions governing its transactions. Under the Federal Credit Reform Act of 1990, these fees—that is, the surcharge—are to be recognized as cash flows to the government and included in agencies' estimated costs of the credit programs they administer. In addition, these fees are to be credited to the program's financing account. OMB officials responsible for coordinating agencies' subsidy cost estimates acknowledged that Education should include the fees in its budgetary cost estimates and noted that other agencies with similar programs do so. Further, the written agreement among Education, the FFB, and the DBA that governs the issuance of bonds by the DBA for purchase by the FFB to fund loans under the program also stipulates that these fees are to be credited to Education. Despite these provisions, Education has not included the fees in its calculations of the federal cost of the program, thereby overestimating the program's costs; nor has Education accounted for the fees on its financial statements. Instead, the DBA has collected and held these fees in trust. Although the contract between Education and the DBA generally describes how the DBA is to manage the proceeds from and the payment of bonds issued to fund loans made to HBCUs, it does not specifically address how the DBA is to manage the payments that reflect the 1/8th of 1 percent paid by borrowers.
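To give a sense of the sums at stake, the sketch below applies the 1/8th of 1 percent surcharge to the roughly $168 million in loans outstanding as of August 2006; the remaining term and discount rate are hypothetical, and a real estimate would let the balance amortize over time:

```python
# Rough scale of the FFB surcharge excluded from Education's estimates.
SURCHARGE_RATE = 0.00125        # 1/8th of 1 percent per year
outstanding = 168_000_000       # approximate balance as of August 2006

annual_surcharge = outstanding * SURCHARGE_RATE
print(f"Surcharge on the current balance: ${annual_surcharge:,.0f}/year")

# Present value over an assumed 20-year horizon at an assumed 4.5 percent
# rate, holding the balance constant for simplicity.
discount_rate, years = 0.045, 20
pv = sum(annual_surcharge / (1 + discount_rate) ** t for t in range(1, years + 1))
print(f"PV of the surcharge stream: ${pv:,.0f}")
```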
In general, the DBA collects borrower repayments and remits the proceeds to the FFB to pay amounts due on the program's outstanding bonds. However, the amounts paid to the FFB do not include the fees paid by borrowers. As a result, it is unclear how these funds, retained by the DBA, are to be eventually returned to the federal government. Moreover, Education has not monitored the DBA's handling of these funds and is unaware of the accumulated balance.

Although the current DBA has been under contract with Education for over 5 years, Education has not yet assessed its performance with respect to key program activities and contractual obligations, though Education officials said that they have been pleased with the DBA's performance. One of these major activities is "marketing" the capital financing program among HBCUs in order to raise awareness and help ensure that the program is fully utilized. Although the DBA is required by its contract with Education to submit annual reports and audited financial statements to Education, it has not done so. While DBA officials told us the department has offered some informal assessments, Education has not provided guidance for the DBA's marketing efforts.

Still, we found indications that the DBA's marketing strategy has likely suffered from a lack of guidance and monitoring by Education. Officials we spoke with at 4 schools did not know of the program, and another 8 told us they had learned about it from peers or advocacy organizations. Others were aware of the DBA's marketing activities but offered a number of suggestions for improvement, citing a need for more specific information as to the extent to which collateral would be needed, how the program meets the needs of both private and public schools, and examples and testimonials about funded projects. Several school officials said DBA outreach through conferences was not necessarily well targeted, either because the selected conferences covered a full range of topics for a variety of schools and not only HBCUs, because they focused on issues relating to either public or private HBCUs, or because they drew school officials not involved in facilities planning. Additionally, the DBA has reserved its direct-contact marketing largely for 4-year schools. DBA officials justified this decision on the grounds that smaller schools tended to have more difficulty borrowing and that they had targeted larger schools that they believed would be most likely to benefit from the program. However, as prescribed by law, loans are to be fairly allocated among as many eligible institutions as possible. Because the DBA's compensation is determined as a percentage of the amount borrowed, and the costs it incurs may not vary significantly from loan to loan, it is important to monitor its activities to ensure it is not making loans exclusively to schools that are likely to borrow larger amounts and for which its potential for profit is highest.

With regard to the DBA's basic responsibility for keeping records, we found several cases in which critical documents were missing from loan agreement files. Moreover, the DBA was unable to provide us with entirely complete files for any of the 14 institutions that had participated or were participating in the program. For example, documents that included loan applications, decision memoranda, financial statements, and real property titles were missing for several schools. In our file review, we found that files for 9 schools did not include the original application.
Files for 8 schools did not include the required financial statements for demonstrating long-term financial stability, and 5 lacked DBA memoranda pertaining to the decision to make the loan. Moreover, until our review, key Education officials were unaware that such documents were missing.

Officials from four HBCUs in the Gulf Coast region we spoke with (Dillard University, Southern University at New Orleans, Xavier University, and Tougaloo College) told us that, in light of the extensive hurricane damage to their campuses, they were pleased with the emergency loan provisions but concerned that the 1-year authorization would not provide sufficient time for them to take advantage of the special program features. Officials from each of the four schools noted that their institutions had incurred physical damage caused by water, wind, and, in the case of one institution, fire, and that the actual financial impact of the hurricanes may remain unknown for years. Education officials told us that although they have not yet determined the extent to which the department will use its authority to waive or modify program provisions for hurricane-affected institutions, the department would be prepared to provide loans to hurricane-affected HBCUs.

Officials from the three HBCUs we visited reported extensive damage to their campuses as a result of the 2005 hurricanes and noted that it may take another few years to determine the full financial impact. School officials told us that they have not been able to fully assess all hurricane-related costs, such as replacing property, repairing plumbing systems, landscaping, and replacing sidewalks, and, as a result, current estimates are only preliminary. School officials noted that the assessment process was lengthy because of, among other things, the time required to prioritize campus restoration needs, undertake complex assessments of historic properties, follow state assessment processes, and negotiate insurance settlements.

Each of the four schools we contacted incurred physical damage caused by water and wind; one school also incurred fire damage. For example, the campuses of all three schools in New Orleans were submerged in 2 to 11 feet of water for about a month after the hurricanes, damaging the first floors of many buildings as well as their contents. As a result, schools required removal of debris and hazardous waste (e.g., mold and asbestos), repair and renovation, and actions recommended by FEMA to mitigate future risks. Xavier University officials, who preliminarily estimated $40 million to $50 million in damage to their school, said that they faced the need to undertake several capital projects, including replacing elevators, repairing roofs, and rehabilitating the campus auditorium and replacing its contents. According to officials from Southern University at New Orleans, state officials have estimated damage at about $17 million; at the time of our visit, 10 months after the hurricanes, state insurance assessors were beginning their work on the campus library, where mold reached all three floors, covering books, art collections, card catalogues, and computers. Officials at Dillard University also reported extensive damage, preliminarily estimated at as high as $207 million. According to officials, five buildings—which were used for academic support services and residential facilities—had to be demolished because of extensive damage; three of these buildings were destroyed by fire.
They also reported that the international studies building, built adjacent to a canal levee in 2003, will have to be raised at least 18 feet to make it insurable. Officials at Tougaloo College, in Mississippi, reported wind and water damage to the roofs of some historic properties, which, along with other damage, they preliminarily estimated at $2 million. Figures 2-4 show some of the damage and restoration under way at the three schools we visited.

The school officials we spoke with found certain emergency provisions of the loan program favorable, but they expressed reservations about the time frame within which they are required to apply for the special loans. Most school officials appreciated the reduced interest rate and cost of issuance (both set at 1 percent or less) and the fact that the Secretary of Education was provided discretion to waive or modify statutory or regulatory provisions, such as credit criteria, to better assist them with their recovery. They said the normal sources of information for credit evaluation, such as audited financial records from the last 5 years, would be difficult to produce. Some officials also viewed favorably the likelihood that loans would be awarded sooner, providing a timely infusion of funds, and with more flexibility than other programs offer. Officials at both Dillard and Xavier Universities said that because their institutions had already spent a significant amount of their available resources, the emergency loans could be used to bridge any emerging financial difficulties they experience as they continue to pursue insurance settlements and assistance from other federal agencies, including FEMA and SBA. Additionally, some school officials said that the program may allow for greater flexibility than FEMA and SBA aid. For example, some officials told us that in addressing damage caused by the hurricanes they would like to improve their facilities to mitigate potential environmental damage in the future and, at one school, upgrade an obsolete science laboratory with state-of-the-art equipment. They said, however, that in some cases FEMA aid is limited to restoring campus facilities to their prestorm conditions and that in other cases desired improvements might not be consistent with requirements for historic preservation.

While most school officials we spoke with found select provisions favorable, they expressed concerns with stipulations that limit the extension of the special provisions to 1 year, primarily because all of the costs associated with damage from the hurricanes have not been fully identified. Further, officials at Southern University at New Orleans, a public institution, said that they are subject to an established capital improvement approval process, involving both the school's board of directors and state government officials, that alone normally requires a year to complete. Additionally, some of the schools are concerned that they may not be able to restore damaged and lost records needed to apply to the program. Officials reported that a time frame of at least 2 to 3 years would allow them to better assess the costs of the damage. Other concerns cited included eligibility requirements for the deferment provision, and officials from one institution expressed disappointment that the emergency provisions did not include some form of loan forgiveness.
According to Education officials, they are taking the steps necessary to ensure that the department is prepared to provide loans to hurricane-affected HBCUs. Education officials noted that, in light of the statutory limit on the total amount of loans it can make under the program and the balance of loans outstanding as of August 2006, about $141 million in funding is available for private HBCUs and $66 million for public HBCUs, both those affected by the hurricanes and others. The officials noted that the department had not yet determined to what extent the Secretary would use her discretion to waive or modify program requirements, including the statutory loan limits. They told us that their next steps included determining how the program's application processes could be changed to ensure that funds can be provided to hurricane-affected schools in a timely manner. They said the department would need to consider to what extent it would apply credit criteria to hurricane-affected institutions, given the fiscal stresses these institutions are likely to experience as they rebuild their campuses and attempt to return to their prior levels of enrollment. They noted that they would talk with school officials to gain a better understanding of which program criteria remain applicable but anticipate using fewer credit criteria in their determinations. Education officials also noted that they will likely have to decide on the appropriate level of flexibility to exercise with respect to collateralizing loans for hurricane-affected HBCUs because some institutions may lack the collateral they had prior to the hurricanes. Moreover, these officials stated that the department would need to consider establishing limits on the types of projects for which it would provide funding to ensure that loans are not provided for capital projects for which other federal aid is available, such as that provided by FEMA. For example, program officials recognized that a significant cost of recovery for the schools in the Gulf Coast region is debris removal but believe that FEMA is likely to provide funding for such costs.

Even with these challenges and outstanding questions, program officials said that they are confident the department will be able to lend funds to hurricane-affected institutions before the special legislative provisions applicable to hurricane-affected HBCUs expire. They noted that the department has already notified eligible institutions of the availability of funds and would hold additional meetings with schools to gain an understanding of their capital improvement and restoration needs.

HBCUs play an important role in fulfilling the educational aspirations of African-Americans and others and in helping the nation attain equal opportunity in higher education. In establishing the Capital Financing Program, Congress sought to help HBCUs continue and expand their educational mission. The program has in fact assisted some HBCUs in financing their capital projects. However, several factors may be discouraging other schools from participating: limited awareness of the program; a lack of clear, timely, and useful information concerning the status of loan applications and approvals; and certain loan terms and conditions.
Some HBCUs have accessed even more attractive financing outside of the program, while others may face financial challenges that make it unwise for them to borrow through the program; these factors affect program utilization and make the development of program performance goals and measures challenging. Despite the challenge, Education is attempting to design performance goals and measures, a positive step that, if successfully completed, could be useful in informing Congress and others about the extent to which the program is meeting Congress' vision in establishing it.

HBCU officials had a number of suggestions, such as changing the frequency of schools' loan repayments from a monthly to a semiannual basis, that they believed could improve the program and positively influence program utilization. By soliciting and considering such feedback from HBCU officials, Education could better ensure that the program is designed to achieve its objectives effectively and efficiently. However, Education has not made consistent use of the mechanism Congress provided to help ensure that Education receives input from critical program stakeholders: the HBCU Capital Financing Advisory Board. Receiving feedback from schools would also allow the department to better inform Congress about the progress made under the program.

Effective management control is essential to ensuring that programs achieve results, and it depends on, among other things, effective communication. Agencies must promote relevant, reliable, and timely communication to achieve their objectives and to enable program managers to ensure the effective and efficient use of resources. Effective management control also entails ensuring that an agency complies with applicable laws and regulations and that ongoing monitoring occurs during the normal course of an agency's operations. In failing to follow the requirements of the Federal Credit Reform Act, Education has overstated the budgetary cost of the program. Accurately accounting for the cost of federal programs is all the more important in light of the fiscal challenges facing the nation. Moreover, failing to adequately monitor the DBA's performance with respect to critical program responsibilities—record keeping, marketing, accounting, and safeguarding the federal funds it has been collecting from program borrowers—increases the program's exposure to potential fraud, waste, abuse, and mismanagement.

To better ensure that the HBCU Capital Financing Program can assist these schools to continue and expand their educational missions, we are making the following five recommendations for executive action.

To ensure that it obtains the relevant, reliable, and timely communication that could help ensure that program objectives are being met efficiently and effectively, and to meet statutory requirements, we recommend that the Secretary of Education regularly convene and consult with the HBCU Capital Financing Advisory Board. Among other things, the Advisory Board could assist Education in its efforts to develop program performance goals and measures, thereby enabling the department and the board to advise Congress on the program's progress. Additionally, Education and the Advisory Board could consider whether alternatives to the escrow arrangement are feasible that both address schools' concerns and keep federal costs at a minimum. If Education determines that statutory changes are needed to implement more effective alternatives, it should seek such changes from Congress.
To ensure program effectiveness and efficiency, we recommend that the Secretary of Education enhance communication with HBCU program participants by (1) developing guidance for HBCUs, based on other schools' experiences with the program, on steps that applicants can take to expedite loan processing and receipt of loan proceeds and (2) regularly informing program applicants of the status of their loan applications and department decisions.

In light of the program's existing credit requirements for borrowers and the funds placed in escrow by borrowers to protect against loan delinquency and default, we recommend that the Secretary of Education change the requirement that borrowers make monthly payments to a semiannual payment requirement, consistent with the DBA's obligation to make semiannual payments to the FFB.

To improve its estimates of the budgetary costs of the program and to comply with the requirements of the Federal Credit Reform Act, we recommend that the Secretary of Education ensure that the program subsidy cost estimation process includes, as a cash flow to the government, the surcharge assessed by the FFB and paid by HBCU borrowers and that such amounts are paid to the program's financing account. Additionally, we recommend that the Secretary of Education audit the funds held by the DBA generated by this surcharge and ensure the funds are returned to the Department of the Treasury and paid to the program's financing account.

To ensure adequate management control and efficient program operations, we recommend that the Secretary of Education increase the department's monitoring of the DBA to ensure the DBA's compliance with contractual requirements, including record keeping, and to ensure that the DBA is properly marketing the program to all potentially eligible HBCUs.

In written comments on a draft of this report, Education agreed with our findings and all but one of our recommendations and noted that our report would help it enhance the program and better serve the nation's HBCUs. Education agreed with our recommendation to regularly convene and consult with the HBCU Advisory Board, noting that the department would leverage the board's knowledge and expertise to improve program operations and that it had scheduled a board meeting for October 27, 2006. Education also agreed with our recommendation to improve communications with HBCUs, noting that it would take steps including developing guidance based on lessons learned to expedite loan processing and receipt of proceeds and regularly informing applicants of their loan status and department decisions. Moreover, Education agreed with our recommendation to improve its budget estimates for the program, indicating that it would work with OMB and Treasury to do so. Further, with regard to our recommendation that the department increase its monitoring of its DBA, the department stated that it would require the DBA to submit quarterly reports on program participation and financing, identify and locate missing loan documentation, and maintain these efforts for each subsequent loan disbursal. Additionally, the department said that it was planning to conduct an audit of the DBA's handling of loan funds and associated fees, as we recommended. With respect to our recommendation to allow participating schools to make semiannual payments, Education said it would be imprudent to implement the recommendation at this time because of the potential for default as well as the exposure from a default by a current program participant.
We considered these issues in the development of our recommendation and continue to believe that the credit evaluation performed by the DBA, the funds set aside by borrowers in escrow, and the security pledged by borrowers provide important and sufficient measures to safeguard taxpayers against potential delinquencies and default. Further, while not noted in the draft report the department reviewed, the law requires that borrowers make payments to the DBA at least 60 days prior to the date for which payment on the bonds is expected to be needed. In addition, borrowers have been required to submit, on an annual basis, audited financial reports and 3-year projections of income and expenses to the DBA. These measures provide additional safeguards as well as a mechanism to alert the department of potential problems. We added this information to our description of program terms and conditions in table 1. Education also provided technical comments that we incorporated into this report where appropriate.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of Education, appropriate congressional committees, the Director of OMB, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix II: Number of HBCUs Eligible to Participate in the Capital Financing Program by State (as of August 31, 2006). [The appendix table, which lists the eligible HBCUs' states, is not reproduced here.]

In addition to those named above, the following individuals made important contributions to the report: Jeff Appel, Assistant Director; Tranchau Nguyen, Analyst-in-Charge; Carla Craddock; Holly Gerhart; Lauren Kennedy; Sue Bernstein; Margie Armen; Christine Bonham; Jessica Botsford; Michaela Brown; Richard Burkard; Carlos Diz; Kevin Jackson; Tom McCool.
Historically Black Colleges and Universities (HBCU), which number around 100, undertake capital projects to provide appropriate settings for learning, but many face challenges in doing so. In 1992, Congress created the HBCU Capital Financing Program to help HBCUs fund capital projects by offering loans with interest rates near the government's cost of borrowing. We reviewed the program by considering (1) HBCU capital project needs and program utilization, (2) program advantages compared to other sources of funds and schools' views on loan terms, (3) the Department of Education's (Education) management of the program, and (4) certain schools' perspectives on, and Education's plans to implement, loan provisions specifically authorized by Congress in June 2006 to assist in hurricane recovery efforts. To conduct our work, we reviewed applicable laws and program materials and interviewed officials from federal agencies and 34 HBCUs.

HBCU officials we interviewed reported extensive and diverse capital project needs, yet just over half of the available loan capital ($375 million) has ever been borrowed. Twenty-three HBCUs have taken steps to participate in the program, and 14 have become borrowers. Education has collected and reported limited data on the program's utilization and has not established performance measures or goals to gauge program effectiveness, though Education officials noted that they are developing such measures and goals.

The HBCU loan program provides access to low-cost capital financing and flexibilities not always available elsewhere, but some loan terms and conditions discourage participation, though school officials said they remain interested in the program. The low interest rate and 30-year repayment period were regarded favorably by participants and nonparticipants alike, and the program makes funds available for a broader range of needs than some federal grant programs. However, several terms and conditions could discourage participation: the requirement to place 5 percent of loan proceeds in a pooled escrow (an insurance mechanism that reduces federal program costs arising from any program borrower's potential delinquency or default), the requirement to make monthly payments rather than the semiannual payments traditionally available from private sources of loans, and the extent to which some loans have been collateralized.

While Education has taken steps to improve the program, significant weaknesses in its management control could compromise the program's effectiveness and efficiency. Education has recently provided schools with both fixed and variable interest rate options, allowed for larger loans, and afforded more opportunities to negotiate loan terms. Also, Education has increased its marketing efforts for the program. However, Education has not established effective management control to ensure that it is (1) communicating with schools in a useful and timely manner, (2) complying with statutory requirements to meet twice each year with an advisory board composed of HBCU experts and to properly account for the cost of the program, and (3) monitoring the performance of the program's contractor.

Officials from 4 HBCUs in Louisiana and Mississippi told us that, in light of the extensive 2005 hurricane damage to their campuses, they were pleased with certain emergency loan provisions but concerned that there would not be sufficient time to take advantage of Education's authority to waive or modify the program provisions.
Officials from the 4 schools noted that their institutions had incurred extensive physical damage caused by water, wind, and, in one case, fire, and that the full financial impact of the hurricanes may remain unknown for years. Education officials told us that, although they have not yet determined the extent to which the authority under the emergency legislation to waive or modify program provisions for hurricane-affected institutions will be used, the department is prepared to provide loans to hurricane-affected HBCUs.
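The payment-frequency and escrow terms discussed in this report lend themselves to a simple numerical illustration. The Python sketch below compares level payments under the program's monthly requirement with the semiannual schedule we recommended and shows the 5 percent escrow set-aside. It is a minimal sketch only: the loan amount and interest rate are assumptions chosen for the example, not actual program terms.

```python
# Illustrative comparison of the program's monthly payment requirement with
# a semiannual schedule, plus the 5 percent escrow set-aside. The loan
# amount and interest rate are assumptions for the example, not actual
# program terms.

def level_payment(principal, annual_rate, years, periods_per_year):
    """Standard fully amortizing (annuity) payment per period."""
    r = annual_rate / periods_per_year   # periodic interest rate
    n = years * periods_per_year         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 10_000_000   # assumed loan amount
rate = 0.05              # assumed rate near the government's cost of borrowing
years = 30               # program loans carry a 30-year repayment period

monthly = level_payment(principal, rate, years, 12)
semiannual = level_payment(principal, rate, years, 2)
escrow = 0.05 * principal  # 5 percent of proceeds goes into the pooled escrow

print(f"Monthly payment:    ${monthly:,.0f}  (${monthly * 12:,.0f} per year)")
print(f"Semiannual payment: ${semiannual:,.0f}  (${semiannual * 2:,.0f} per year)")
print(f"Escrow set-aside:   ${escrow:,.0f}")
```

Under assumptions like these, total annual debt service is nearly identical under either schedule, which is consistent with the disagreement over our recommendation turning on cash management and safeguards against default rather than on the size of the payments themselves.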
Both social service agencies and the courts play an important role in addressing child welfare issues. In the District, CFSA, in conjunction with other agencies, provides important services to promote the safety and well-being of children and families. CFSA coordinates public and private partnerships to preserve families and to protect children against abuse and neglect. The Family Court of the D.C. Superior Court has jurisdiction over child welfare cases. The Family Court judges oversee foster care and adoption cases and make decisions concerning the existence of maltreatment, the placement of children in state custody, and whether reasonable efforts have been made to preserve a family to avoid the need for foster care. Additionally, the court holds hearings to determine the appropriateness of the placement of a child in care, terminates parental rights, and finalizes adoptions.

Effective child welfare systems have processes for collaborating and sharing information among the agencies that provide child welfare-related services to children and families, such as mental health services and substance abuse treatment. Like many other jurisdictions, the District has faced challenges in its ability to share information across agencies. In previous work, we reported that CFSA's operations have been affected by the lack of integration of child welfare services with other support services. Additionally, it is important that the social service agencies and courts receive and share the information they need on the children and families they serve. Caseworkers need to know from the court the status of a child's case, when a hearing will take place, and a judge's ruling. Family courts need case history information from caseworkers, such as whether services have been provided, who is caring for the child, and whether there has been evidence of abuse or neglect. However, CFSA and the District's former Family Division of the Superior Court have had difficulty sustaining effective working relationships. As a result, cases moved slowly through the system, and decisions intended to improve the safety and well-being of children and their families were delayed.

To address some of the challenges the court and District agencies have faced, Congress passed the D.C. Family Court Act of 2001. The act reformed court practices and established procedures intended to improve interactions between the court and social service agencies in the District. Family court reform in the District includes several key components that require the direct involvement of CFSA and its social workers. One component involves revising case management practices through the implementation of the one family/one judge concept. Under this concept, the District's Family Court plans to assign the same judge to all cases involving the same child and family, where practicable, feasible, and lawful. In addition, the Court has asked the Office of the Corporation Counsel to assign attorneys to particular judicial teams, each composed of a judge or magistrate judge. Judicial teams may also include social workers and parents' attorneys, among other participants. Another key component of District family court reform will be on-site coordination of social services at the court. The Mayor must assign staff from several agencies to work on-site at the Family Court. These agencies include CFSA, District of Columbia Public Schools, the Housing Authority, Office of Corporation Counsel, the Metropolitan Police Department, and the Department of Health.
In addition, the Mayor must appoint a liaison between the Family Court and the District government. The role of the family court liaison will be to coordinate the activities of CFSA's social workers as well as representatives of other District social service agencies at the Family Court. The liaison is yet another key component of court reform in the District.

The Family Court Act required the Chief Judge of the Superior Court to submit to Congress a transition plan discussing the transition to a Family Court. This plan was completed in April 2002, and in May 2002, we reported to Congress on this plan. Also, the act required the Mayor of the District of Columbia to submit a plan to Congress within 6 months of enactment of the Family Court Act, or by July 8, 2002, for the integration of District agency computer systems with those of the Family Court. Congress required us to prepare and submit, within 30 days of the Mayor's plan, an analysis of the plan's contents and effectiveness.

On July 8, 2002, the Mayor issued the required plan: Supporting the Vision: Mayor's Plan to Integrate the District of Columbia's Social Services Information Systems with the Family Court of the D.C. Superior Court. The plan consists of two parts. Part I addresses the integration of social service agencies' computer systems with the computer systems of the D.C. Family Court. This part of the plan focuses in large part on the District's Safe Passages Information Suite (SPIS) initiative as a means both of integrating computer systems and social services within the District's executive agencies and of integrating these systems and services with those of the Family Court. We limited our review of SPIS to how it will be used to achieve integration with the Family Court. Part II of the plan describes the Mayor's proposal for spending the $700,000 appropriated by the 2002 D.C. Appropriations Act—$200,000 for the completion of the plan to integrate computer systems and $500,000 to CFSA for social workers to implement family court reform. According to the Appropriations Act, these funds shall not be made available until the expiration of the 30-day period that begins on the date we submit our report to the Congress. However, because the 30-day period excludes weekends, holidays, and days the Congress is adjourned for a period of more than 3 days, these funds would not likely have been available until after the start of fiscal year 2003. On August 2, 2002, a supplemental appropriations act was passed specifying, among other things, that these funds shall remain available until September 30, 2003. Because the federal funds had not been released, District officials reported that they used local funds to prepare the computer system integration plan and to plan or, in some cases, complete family court reform activities. The District estimated that implementing the Mayor's entire plan to integrate social services' computer systems with those of the Family Court, including short-term and long-term initiatives, would cost $18 million and take 4 years to complete. The District has approximately $4 million reserved for SPIS.

The Mayor's plan contains useful information on intended efforts to integrate the District's computer systems with those of the Family Court, including the planned use of appropriated funds, but it does not contain important elements, and its effectiveness is contingent on the District's ability to resolve critical issues and implement disciplined IT management processes.
Information on these additional elements, how critical issues are to be addressed, and how information technology is to be managed, while not explicitly required by the Family Court Act or the fiscal year 2002 D.C. Appropriations Act, would enhance the usefulness of the Mayor's plan.

As required by the Family Court Act, the Mayor's July 8 plan provides information on integrating the computer systems of the District with those of the Family Court. According to the plan, the District identified the five integration priorities on the basis of its analysis of high-level requirements and best practice research. These integration priorities are (1) calendar management; (2) notification of the current status of cases, pending dates or deadlines and new events associated with cases, and case dispositions; (3) electronic document management of forms, reports, court orders, or any documents associated with a court case; (4) inquiry-level sharing of critical case information; and (5) reporting. Equally important, according to the D.C. Courts' director, information technology division, the Family Court agrees with the integration priorities set forth in the Mayor's plan. The Mayor's plan also provides other useful information, such as a summary of the District's current health and human services IT environment and its limitations, as well as descriptions of the types of information that various District offices need from the Family Court. Finally, the plan describes the Mayor's approach for developing and implementing the SPIS initiative, which is central to achieving a long-term solution both for integrating the District's health and human services systems with the Family Court's systems and for mitigating the current limitations of the District's health and human services IT environment.

Although the Mayor's plan provides general descriptions of the current environment and future plans, it does not include important elements, such as project milestones, that, while not explicitly required by the Family Court Act or the fiscal year 2002 D.C. Appropriations Act, are critical to assessing the adequacy of the District's strategy. District IT officials noted that they have not yet completed essential analyses, such as a requirements analysis, that would provide the basis for this additional information. Specifically, the plan does not include the following:

Project milestones. Although the Mayor's plan discusses a variety of short- and long-term integration strategies, it does not contain milestones for completing these activities. Without milestones, the Congress has neither the information necessary to assess whether the initiatives discussed in the plan can be realistically accomplished nor important criteria with which to measure the progress of the plan's implementation. According to District IT officials, the deadline for submitting the Mayor's plan to the Congress did not allow them enough time to develop milestones. The officials also said that they expect to develop, by the end of the calendar year, a project plan that lays out the project components and milestones for the implementation of the Mayor's plan.

Specification of integration requirements. The Family Court Act calls for the District to integrate its computer systems with those of the Family Court but does not define integration. The term "integration" can be defined in various ways, and how it is defined can significantly affect how the system is designed and developed.
Although the Mayor's plan includes a set of integration principles, such as that integrated systems should improve information quality by eliminating redundant data entry, it does not include a definition of integration within the context of the Family Court Act. Defining integration for the SPIS project early in the planning process is critical because this definition will set the boundaries and help set expectations for the initiative and the individual projects that will make up this initiative. A District IT official agreed that developing an operational definition of integration is important and said that the District planned to establish one; however, the official did not know when this would be done.

How the District will integrate the systems of the specific offices covered by the Family Court Act. The Family Court Act lists six District offices that the Mayor's plan is to address regarding accessing and sharing information on individuals and families served by the Family Court: the D.C. Public Schools, the D.C. Housing Authority, CFSA, the Office of the Corporation Counsel, the Metropolitan Police Department, and the Department of Health. Although the Mayor's plan includes a general discussion of the types of information that each of these entities needs from the Family Court, the integration strategies laid out in the plan did not always address the extent to which the information needs of each of these entities will be met. For example, the plan discusses a short-term integration strategy for achieving inquiry-level sharing of critical case information with CFSA, but not for the other offices listed in the Family Court Act. District IT officials agreed that the plan does not fully define how the systems of each of the offices identified in the Family Court Act will be integrated with the Family Court's systems and said that the District is still in the process of analyzing these offices' needs and defining requirements. However, these officials also noted that the plan discusses short-term integration strategies involving CFSA's FACES system, a system that provides CFSA with unified case management and reporting, which they expect will be a major system in integrating the Family Court's and the District's health and human services systems.

Details on the type of information the District will be providing to the Family Court and how this will be achieved. The Mayor's plan includes a discussion of the Superior Court's planned implementation of the Integrated Justice Information System (IJIS), which is intended to be the single point of integration for the District agencies' interface with the courts. However, the plan does not specify the type of data that the District will be providing to IJIS or the District offices and systems that will be providing these data. Instead, the plan notes that the Superior Court will rely on its IJIS contractor to determine the detailed business requirements of the IJIS stakeholders, which include the District offices. District IT officials explained that in developing the plan, they focused on what the District offices need from the Family Court, not on what these offices need to provide to the court. The officials said that time constraints prevented them from performing an in-depth review of what they need to provide to the Family Court and that they therefore did not include these requirements in the plan, but they noted that the District is working closely with the courts to define these requirements. The D.C.
Courts' director, information technology division, agreed that the Court and the District were working closely to define the interfaces between IJIS and the District's systems and was complimentary about the level of cooperation from the District's offices in performing this analysis.

Finally, many of the solutions to achieving integration with the Family Court discussed in the plan are depicted only as proposals or options; thus, the plan is not always definitive about exactly how the District will achieve the five integration priorities. For example, to achieve the integration priority of electronic document management, the Mayor's plan lists four options that a cross-organizational team, yet to be assembled, is expected to evaluate. District IT officials said that the merits and details associated with these proposals and options will be further defined as part of the SPIS framework development project. However, until the District decides which, if any, of these proposals and options it will implement, the Congress will not have critical information with which to evaluate the feasibility and completeness of the District's plan.

The Mayor's plan assumes that certain issues, such as ensuring the confidentiality of certain records and data quality, will be successfully resolved without explaining how this will be achieved. These issues are formidable, and the effectiveness and ultimate success of the Mayor's plan will largely depend on the District's ability to overcome them. Among the critical issues that must be successfully addressed to help ensure the effectiveness of the Mayor's plan are the following:

Confidentiality/privacy issues. As in other jurisdictions, laws and regulations govern the sharing of data in many District social services programs. For example, federal legislation relating to student educational records and mental and physical health information provides privacy protection for and limits access to such information. Upon reviewing the Mayor's plan, the American Bar Association's directors of child welfare and research noted that the District should address this critical issue as soon as possible to enable data integration to go forward. The Mayor's plan recognizes the criticality of data confidentiality issues but does not provide solutions or alternatives, although the plan indicates that there is a mayoral committee addressing the confidentiality restrictions affecting SPIS data sharing. If not resolved early in the planning stage, confidentiality issues are likely to significantly limit the functionality and flexibility of SPIS and, consequently, its integration with the Family Court system.

Data quality issues. To be effective, systems must contain high-quality data (e.g., data that are accurate, complete, consistent, and timely). The importance of this issue is illustrated in our prior reports, in which we have noted that data accuracy, completeness, and timeliness problems have hampered the District's program management and operations. The Mayor's plan recognizes problems with one significant element of data quality—ensuring consistency—and proposes developing common identifiers for persons receiving District services as a necessary, albeit difficult, step in integrating social service IT systems in the District. However, the plan does not address or propose remedies for known data accuracy and completeness problems that must be resolved to ensure the success of the District's Family Court integration efforts.
For example, according to the Mayor's plan, the FACES system is paramount to the success of the SPIS initiative. However, our December 2000 report (GAO-01-191) noted that this system lacked complete information, and according to CFSA's Director, while the situation has improved, as of mid-June this problem still existed.

Current legacy system limitations. According to the Mayor's plan, the District has disparate information systems that are built on a number of different technology platforms with varying limitations. These limitations include systems that (1) have limited functionality; (2) use old technology and require extensive work to maintain or upgrade; and (3) have no, or only limited, external interfaces (in some cases because of confidentiality concerns). Under the SPIS initiative, the District plans to use a commercial middleware tool along with data marts to synchronize case file attributes across systems and bridge multiple hardware and software system differences. Although this may be an appropriate strategy, the use of middleware and data marts would still require the District to address the limitations of its underlying legacy systems. For example, according to Gartner, Inc., a leading private research firm, while the use of middleware has advantages, legacy system issues, such as data inconsistency, synchronization, and ownership issues, would still have to be addressed. Therefore, unless the District identifies and overcomes the limitations of these legacy systems, the functionality and performance of SPIS could be negatively affected.

Human capital issues. The demand for IT workers remains high, and shortcomings in IT human capital management can have serious ramifications. The Mayor's plan states that the District has a wide range of technological improvement priorities and that SPIS is just one of many strategic priorities. These priorities will require IT personnel with a myriad of skills, which may be acquired through a variety of approaches, including the use of contractors. Accordingly, acquiring, retaining, and effectively managing the right people with the right skills are key to the success of the District's integration effort.

Another key to the effectiveness of the Mayor's plan is developing and using disciplined processes in keeping with IT management best practices. We and others have issued guides that discuss IT management practices used by leading organizations and frameworks for measuring an organization's progress in implementing critical processes. These processes are especially important for projects such as SPIS, in which new ground is being broken. According to the District's research, there are currently no readily adaptable examples of robust, two-way electronic information exchanges between social service agencies and court systems. The American Bar Association's directors of child welfare and research also noted that they know of no robust examples of data exchanges between courts and child protection agencies. Such an uncertain and high-risk environment underscores the need to implement disciplined IT management practices to manage and mitigate risks.
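To make concrete, at toy scale, what even a minimal two-way exchange of this kind entails, and why the common identifiers discussed above matter, the following Python sketch shows a court system routing case events to an agency and the agency answering an inquiry-level request, in the spirit of the notification and inquiry-level sharing priorities listed earlier. It is a hedged illustration only: the record fields, identifier scheme, and matching logic are invented for the example and do not represent the actual designs of SPIS, FACES, or IJIS.

```python
# Toy illustration of a two-way court/agency data exchange keyed on a
# common person identifier. Field names and matching logic are invented
# for the example; they do not describe SPIS, FACES, or IJIS.
from dataclasses import dataclass

@dataclass
class CourtEvent:
    person_id: str      # common identifier shared across systems
    case_number: str
    event: str          # e.g., hearing scheduled, disposition entered
    date: str

@dataclass
class AgencyRecord:
    person_id: str
    caseworker: str
    services: list

court_events = [
    CourtEvent("P-1001", "2002-NEG-042", "hearing scheduled", "2002-09-15"),
    CourtEvent("P-1002", "2002-NEG-077", "disposition entered", "2002-09-20"),
]
agency_records = {
    "P-1001": AgencyRecord("P-1001", "J. Doe", ["family counseling"]),
    # P-1002 is missing here -- the kind of completeness gap the report notes.
}

# Court -> agency: route each event to the caseworker of record.
for ev in court_events:
    rec = agency_records.get(ev.person_id)
    if rec:
        print(f"Notify {rec.caseworker}: {ev.event} on {ev.date} ({ev.case_number})")
    else:
        print(f"DATA GAP: no agency record for {ev.person_id}; event not routed")

# Agency -> court: inquiry-level sharing of case history ahead of a hearing.
def case_history(person_id):
    rec = agency_records.get(person_id)
    return rec.services if rec else None

print("Services on file for P-1001:", case_history("P-1001"))
```

Even this toy version shows the dependency the plan acknowledges: without an agreed-upon common identifier, the routing step has nothing reliable to join on, and gaps in either system's data silently break the exchange.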
In addition to SPIS, using disciplined IT processes is important to the successful development of other new systems discussed in the Mayor's plan that are either in the planning or development stages. For example, according to the plan, the Metropolitan Police Department is in the early planning stages for a reporting and information delivery system, expected to be implemented in early 2004, that the District believes should be a target system for the Family Court in its integration planning.

In the past, we have reported that the District has not implemented disciplined IT management processes and, as a result, has had difficulties developing, acquiring, and implementing new systems. To avoid similar problems with the SPIS project, the following are examples of IT management processes that are critical for the District to employ to help ensure that its investment is used wisely and results in a system that meets its objectives in a timely and cost-effective manner.

Use of a life-cycle model. The District has not adopted a life-cycle model for developing SPIS that defines expectations for managing IT investments from conception, development, and deployment through maintenance and support. Life-cycle models require organizations to carefully manage risks, such as unrealistic schedule and budget expectations. Without such a model, processes for software development and acquisition will likely remain ad hoc and will not adhere to generally accepted standards. Critical to the success of SPIS are the adoption of a life-cycle model and the development of a plan to institutionalize and enforce its use. According to an IT official, the District has drafted a life-cycle model that is being tested on other system development activities.

Development of an enterprise architecture. The development and use of enterprise architectures is a best practice in IT management that leading public and private organizations follow. An enterprise architecture, which is a well-defined and enforced blueprint for operational and technological change, provides a clear and comprehensive picture of an entity or a functional or mission area that cuts across more than one organization—in this case, the child and family social services function. An enterprise architecture consists of three integrated components: a snapshot of the enterprise's current operational and technological environment, a snapshot of its target environment, and a capital investment road map for transitioning from the current to the target environment. Our experience with federal agencies has shown that attempting a major modernization effort without a complete and enforceable enterprise architecture results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. According to an IT official, because of its complex environment, the District plans to develop an evolving enterprise architecture in components. This official further said that when the enterprise architecture would be completed depends, in part, on available funding. By proceeding without this enterprise architecture, the District's SPIS initiative is at higher risk of not meeting its objectives. The risk associated with the District's lack of an enterprise architecture is compounded by its plan to develop, in parallel, an SPIS framework and a pilot program.
Specifically, the District plans to (1) develop an SPIS framework, which would include identifying and prioritizing agencies and business processes to be supported by SPIS, designing and documenting the "to be" business environment, and identifying and sequencing specific SPIS projects; and (2) pilot aspects of SPIS functionality at two District offices (the functions to be piloted have not yet been determined). Completing these projects in parallel is risky, since the District would be designing, developing, and implementing systems before it has identified its current needs and developed a plan to achieve them.

Use of adequate security measures. A basic management objective for any organization is to protect its data from unauthorized access and to prevent improper modification, disclosure, or deletion of financial and sensitive information. Accordingly, implementing adequate security measures to achieve this objective is of paramount importance, particularly for projects such as SPIS that are expected to contain sensitive personal information. However, we have previously reported serious and pervasive computer security weaknesses in the District. The Mayor's plan recognizes the importance of computer security in implementing SPIS and sets forth seven strategies for ensuring a secure environment, such as limiting the number of authorized users and providing strong user training programs. The effective implementation of adequate security measures will be a critical factor in ensuring the success of the SPIS project.

Finally, major IT investments should be supported by a well-developed business case that evaluates the expected returns against the costs. Our guidance on IT investment management calls for agencies to identify the expected costs and benefits of proposed investments. We are concerned about whether the District will perform this type of analysis. According to an IT official, the District is not planning to complete either a formal cost/benefit analysis or an analysis of alternatives in support of its Family Court integration strategy. Instead, the District plans to rely on professional judgment in assessing potential solutions within available resources. Moreover, with respect to analyzing alternatives, this official said that the District lacks the staff resources and funding to conduct such an analysis. However, without an explicit understanding of the expected costs and benefits up front, the District lacks the basis for sound financial and strategic decisions and a baseline against which managers and executives can measure progress.

Of the $700,000 appropriated for fiscal year 2002 in conjunction with the Family Court Act, $200,000 is designated in the Mayor's spending plan to support the development of a plan integrating the computer systems of the District government with those of the Family Court. The spending plan identifies $158,000 of this $200,000 for the "development of the plan" and $42,000 for "implementation planning." The $158,000 for the development of the Mayor's computer integration plan has, according to the spending plan, provided for a project team consisting of District and contracted staff. The spending plan lists activities involved in the development of the plan; however, it does not associate costs with these activities.
For example, the budget for the development of the plan has provided for a project team to perform activities such as identifying the stakeholders and District agencies affected by the Family Court legislation, completing the technological gap analysis of District interactions with the Court, and assessing available technologies to enhance data integration, but there are no costs associated with these steps. The remaining $42,000 budgeted for implementation planning, according to the spending plan, is being reserved to perform certain other activities, including preparation of cost estimates for components of the plan, prioritization of the components, and development of an implementation time line. These activities are important steps in developing the plan for integrating the District and Family Court computer systems.

The Family Court Act places several requirements on the Mayor. The act requires the Mayor, in consultation with the Chief Judge of the Superior Court, to ensure that representatives of the appropriate offices of the District of Columbia government that provide social services and other related services to individuals served by the Family Court are available on-site at the Family Court; to provide information to the Chief Judge of the Superior Court and to the Presiding Judge of the Family Court regarding the services of the District government that are available for the individuals and families served by the Family Court; and to appoint an individual to serve as a liaison between the Family Court and the District government for ensuring that the representatives of the appropriate offices are available on-site at the Family Court. Additionally, the Family Court Act urged that the District enter into a border agreement to facilitate the placement of children in the D.C. child welfare system in homes and facilities in Maryland and Virginia.

The 2002 D.C. Appropriations Act provided $500,000 to the Mayor "for the Child and Family Services Agency to be used for social workers to implement Family Court reform." The Mayor states that these appropriated funds will be used to support his responsibilities under the Family Court Act and identifies three categories for the use of the funds: (1) liaison activities, (2) on-site coordination of services and information, and (3) border agreements. The plan indicates that the appropriated funds will be used as specified in table 1. The three categories listed in the Mayor's plan and our analyses are as follows.

Liaison Activities. The plan does not provide the details necessary to show how CFSA social workers will be involved in these activities, as required for these activities to be funded from the $500,000 designated by the D.C. Appropriations Act. For example, the plan lists staff time for preparation and presentation of magistrate judge training and upcoming training for family court personnel as a liaison activity. While Family Court officials said that this training involved CFSA social workers, the Mayor's plan does not clearly state whether social workers will be involved, define the type of training, or describe how these expenditures will support CFSA social workers' family court reform activities.

On-Site Coordination of Services and Information. According to the Mayor's plan, the family court liaison will coordinate the activities of representatives from CFSA as well as representatives of other District social service agencies at the Family Court.
However, the plan does not describe how the funded activities involve the use of social workers to implement family court reform. Furthermore, the Mayor's plan provides limited information on issues essential to coordinating services. According to national court associations, an effective approach for establishing and sustaining operational integration among agencies includes (1) establishing interagency policies for coordinating on-site social services, (2) specifying the types of services to be provided by each participating agency, and (3) identifying the financial, human capital, computer, and other resources to support coordinated services. The Mayor's plan provides limited information on these essential issues. The plan states that agency representatives will be available to the court and that computer support at the court will be provided. However, the plan does not describe planning efforts with the Family Court on related space and facilities requirements, the costs associated with service coordination, the types of services that will be provided, or the number of staff that will be on-site. Nor does it indicate whether the CFSA staff on-site will include social workers. Family Court officials said that planning for on-site services coordination with District offices is in its early phases and that service representatives from District offices will face challenges in identifying and coordinating social services for children and families served by the Family Court.

Border Agreement. The Mayor plans to use $131,000 of the $500,000 designated in the D.C. Appropriations Act for border agreement activities, such as negotiating an agreement with surrounding jurisdictions. While a border agreement may benefit District efforts to achieve more timely placement of District children in Maryland and Virginia, the border agreement activities included in the Mayor's plan do not specify how CFSA social workers will be involved in the process or how their involvement relates to family court reform.

Integrating the computer systems of District agencies with those of the Family Court, as well as implementing other aspects of family court reform, is complex and will take years to complete. Much of the complexity stems from the critical issues upon which successful family court reform depends and the need for disciplined IT management processes to mitigate the risks posed by these issues. This complexity, coupled with the multiyear completion time frame, makes planning the computer systems integration and other key elements of court reform difficult. Despite the difficulty, the Mayor's plan provides a useful overview of the District's current health and human services IT environment, the current vision for integrating its health and human services computer systems with those of the Family Court, and how it intends to use the funds that were appropriated for planning computer systems integration. However, the plan does not contain important details that, while not explicitly required by the Family Court Act or the fiscal year 2002 D.C. Appropriations Act, would enhance the usefulness of the plan by providing information that would facilitate an assessment of its feasibility and effectiveness. Information on project milestones, for example, could help the District and the Congress assess progress in implementing court reform and serve as an early warning system if a key milestone is not met.
Furthermore, it is not clear in the plan how the $500,000 in appropriated funds are to be used for CFSA's social workers to implement family court reform, as required by law. More details regarding the liaison, on-site coordination, and border agreement activities are needed to ensure that appropriated funds are used as Congress intended.

To keep the Congress fully informed about the District's progress in implementing court reform, we recommend that the Mayor periodically report to the Congress on the District's progress in integrating its computer systems with those of the Family Court. These reports should provide milestones, including those associated with completing the essential analyses and addressing the critical issues and disciplined IT management practices discussed in this report, and the District's progress in achieving them. To help ensure that the planned expenditures support the purpose designated in the D.C. Appropriations Act, we recommend that the Mayor provide more details to the Congress to show how the $500,000 will be used for social workers to implement family court reform.

We received written comments on a draft of this report from the City Administrator of the District of Columbia. These comments are in appendix II. The City Administrator generally agreed with our findings related to the Mayor's integration plan and offered to answer any further questions regarding the use of the $500,000 for CFSA for social workers to implement family court reform. However, the City Administrator did not directly address our recommendations.

Regarding the Mayor's integration plan, the City Administrator agreed that the successful execution of the plan is contingent on resolving critical issues and implementing disciplined processes. The administrator also said that the District is faced with daunting complexity in planning, designing, building, and implementing the capabilities described in the Mayor's plan and recognized that it must exercise responsible planning for resources by conducting detailed planning and financial analyses of proposed information system improvements. Accordingly, the City Administrator reported that during the next 6 months the District plans to complete more detailed scope definitions, specification of integration requirements, time lines and milestones, and cost analyses of the planned integration activities.

As for the plans to spend the $500,000, the City Administrator provided information that better explains how some of the activities will involve CFSA's social workers. However, there are still some activities for which more detail is needed. For example, the comments note that the Mayor began cross-agency planning for coordination of services and information and list various related activities. Two of these activities appear to directly involve social workers: (1) training for CFSA social workers, OCC attorneys, and others, and (2) analysis of cases to be transferred to the Family Court by CFSA social workers. However, the extent to which the other activities—development of the CFSA-OCC pilot and changes to the CFSA court liaison functions—will involve CFSA social workers is still unclear. The City Administrator also discussed a CFSA and Family Court pilot project designed to assess whether a particular approach to case assignment would shorten the road to permanency for children. This activity was not included in the Mayor's plan.
As for the border agreement, the comments address three activities included in the Mayor's plan—negotiating the agreement, staffing, and implementing the agreement. Although the City Administrator stated that senior staff from the agency continue to be personally involved in the negotiations with Maryland officials, the comments do not indicate whether or how social workers are involved. It would appear that this activity does not directly involve social workers. Furthermore, according to the comments, the District agreed to fund two positions in Maryland, including one social worker. The City Administrator does not state the nature of the other position, nor does he state that the social worker will be a CFSA social worker. However, the comments note that the costs of implementing the agreement will include funds to expedite the licensing of CFSA social workers in Maryland. Because the City Administrator did not specifically address our recommendations in his comments, we continue to think it is important that the District keep the Congress informed of its progress in integrating its computer systems with those of the Family Court and that the Mayor provide more detail to show how the appropriated funds will be used for social workers to implement family court reform.

We are sending copies of this report to the Office of Management and Budget; the Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; and the Subcommittee on the District of Columbia, House Committee on Government Reform. We are also sending copies to the Mayor of the District of Columbia; the Deputy Mayor for Children, Youth, Families, and Elders; the Chief Technology Officer; the Director of the Child and Family Services Agency; the Chief Judge of the Family Court of the District of Columbia Superior Court; and other District agencies. Copies of this report will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8403. Other contacts and staff acknowledgments are listed in appendix III.

To assess the contents and effectiveness of the District of Columbia Mayor's plan to integrate the computer systems of District agencies with those of the D.C. Family Court, we reviewed and analyzed the Mayor's plan. As part of this analysis, we (1) reviewed the requirements for the plan set forth in the Family Court Act and the fiscal year 2002 D.C. Appropriations Act; (2) reviewed our prior reports and IT management best practice guidance; and (3) interviewed appropriate District IT officials, including the Chief Technology Officer, and programmatic officials, such as the Deputy Mayor for Children, Youth, Families and Elders. We also interviewed the Director of the D.C. Courts' information technology division and reviewed documents related to the court's system development effort, the Integrated Justice Information System. In addition, we obtained comments on the Mayor's plan from officials of the American Bar Association and the Virginia Supreme Court's court improvement program.
To analyze the Mayor's spending plans for integrating computer systems and supporting Child and Family Services Agency (CFSA) social workers' efforts to implement family court reform, we (1) reviewed the District's spending plans, (2) interviewed and obtained information from officials in the District's Office of the Chief Technology Officer and CFSA, and (3) reviewed legislation related to the $700,000 in federal funds provided in the District's Appropriations Act for fiscal year 2002. We did not independently verify or audit the cost information provided by District officials. We also interviewed program officials from the Child and Family Services Agency; the Departments of Human Services, Mental Health, and Health; the Office of Corporation Counsel; the Office of the Chief Technology Officer; the Mayor's office; and District of Columbia Public Schools. In addition, we interviewed court experts at the National Council of Juvenile and Family Court Judges, the American Bar Association, and the Council for Court Excellence, as well as officials from two other states, New Jersey and Virginia, that have undertaken efforts to integrate the computer systems of courts with those of social service agencies. We also examined documents related to policies of several District social service agencies and the District's Family Court.

The following individuals also made important contributions to this report: Patrick diBattista, Linda Elmore, Maxine Hattery, Linda Lambert, James Rebbe, Norma Samuel, Rachel Seid, and Marcia Washington.

Human Services: Federal Approval and Funding Processes for States' Information Systems. GAO-02-347T. Washington, D.C.: July 9, 2002.

Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.

D.C. Family Court: Progress Made Toward Planned Transition and Interagency Coordination, but Some Challenges Remain. GAO-02-797T. Washington, D.C.: June 5, 2002.

D.C. Family Court: Additional Actions Should Be Taken to Fully Implement Its Transition. GAO-02-584. Washington, D.C.: May 6, 2002.

D.C. Family Court: Progress Made Toward Planned Transition, but Some Challenges Remain. GAO-02-660T. Washington, D.C.: April 24, 2002.

D.C. Courts: Disciplined Processes Critical to Successful System Acquisition. GAO-02-316. Washington, D.C.: February 28, 2002.

District of Columbia: Weaknesses in Financial Management System Implementation. GAO-01-489. Washington, D.C.: April 30, 2001.

District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children's Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.

Foster Care: Status of the District of Columbia's Child Welfare System Reform Efforts. GAO/T-HEHS-00-109. Washington, D.C.: May 5, 2000.

Foster Care: States' Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.

District of Columbia: The District Has Not Adequately Planned for and Managed Its New Personnel and Payroll System. GAO/AIMD-00-19. Washington, D.C.: December 17, 1999.

District of Columbia: Software Acquisition Processes for A New Financial Management System. GAO/AIMD-98-88. Washington, D.C.: April 30, 1998.
Congress passed the D.C. Family Court Act of 2001 to reform court practices and establish procedures to improve interactions between the District of Columbia's Family Court, part of the D.C. Superior Court, and social service agencies in the District. The act directed the Mayor to prepare a plan to integrate the computer systems of District agencies with those of the Court. The fiscal year 2002 D.C. Appropriations Act provided $200,000 for integrating the computer systems and $500,000 for social workers to implement family court reform. The act also required the Mayor to prepare a plan for these funds, which was due by July 8, 2002.

The Mayor's plan provides such useful information as (1) an outline of the District's current health and human services information technology environment and its information needs and limitations regarding the Family Court, (2) planned and possible short- and long-term initiatives to integrate the District's computer systems with those of the Family Court, (3) five technological integration priorities, and (4) how the $200,000 in appropriated funds will be spent. However, the District has not yet completed essential analyses, such as a requirements analysis, that would provide the basis for additional information.

Based on GAO's analysis of the Mayor's plan for using the $500,000 in appropriated funds, it is not clear how the funds are to be used for social workers to implement family court reform, as required by law. The plan discusses the District's use of funds for liaison, on-site coordination, and border agreement activities but provides no detail on whether and how these activities involve social workers.
DHS's mission is to lead the unified national effort to secure the United States by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation. As part of that mission, DHS is responsible for ensuring that the nation's borders are safe and secure, that they welcome lawful immigrants and visitors, and that they promote the free flow of commerce. Within the department, CBP is responsible for customs, immigration, and agricultural processing at ports of entry. ICE is responsible for the investigation and enforcement of border control, customs, and immigration laws.

TECS is an information technology (IT) and data management system that supports DHS's core border enforcement mission. According to CBP, it is one of the largest, most important law enforcement systems currently in use and is the primary system available to CBP officers, and to agents from other departments, for use in determining the admissibility of persons wishing to enter the country. In addition, it provides an investigative case management function for activities carried out by ICE agents, including money-laundering tracking and reporting, telephone data analysis, and intelligence reporting and dissemination. Over time, TECS has evolved into a multifaceted computing platform that CBP describes as a system of systems. This mainframe-based system interfaces with over 80 systems within DHS, other federal departments and their component agencies, and state, local, and foreign governments. It contains over 350 database tables, queries and reports (e.g., querying law enforcement records to determine if a traveler appears on a terrorist watch list), and multiple applications (e.g., ICE's existing investigative case management system). CBP agents and other users access TECS via dedicated terminals. The system is managed by CBP's Passenger Systems Program Office and is currently hosted at CBP's data center.

By 2015, CBP estimates that TECS will contain over 1.1 terabytes of data, including over 46 million lookout records, nearly 25 million records relating to the travel documents of permanent residents and refugees, and the border-crossing histories of close to a billion travelers. On a daily basis, the system is used by over 70,000 users and handles more than 2 million transactions, including the screening of over 900,000 visitors and approximately 465,000 vehicles. In addition, federal, state, local, and international law enforcement entities use TECS to create and disseminate alerts and other law enforcement information about "persons of interest." Ten federal departments and their numerous component agencies access the system to perform a part of their missions. Figure 1 shows the federal departments and component agencies that use TECS. Appendix III contains a description of the key systems and data resident on the existing (legacy) platform.

The current TECS system uses obsolete technology, which, combined with growing mission requirements, has posed operational challenges for CBP and others. For example, users may need to access and navigate among several different systems to investigate, resolve, and document an encounter with a passenger. In addition, CBP identified that TECS's search algorithms do not adequately match names from foreign alphabets.
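To illustrate the kind of difficulty involved in matching names across alphabets, the Python sketch below shows why naive string equality misses matches and applies one elementary Unicode normalization step. This is a hedged, toy example: it does not describe TECS's actual search algorithms, which this report does not detail, and real watch-list matching relies on far more sophisticated phonetic and fuzzy-matching techniques.

```python
# Why naive equality fails for names rendered from foreign alphabets, and
# one elementary normalization step. This is an illustration only; it does
# not describe TECS's actual search algorithms.
import unicodedata

def normalize(name: str) -> str:
    """Case-fold, decompose accented characters, and drop combining marks."""
    decomposed = unicodedata.normalize("NFKD", name.casefold())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

a, b = "Muñoz", "Munoz"              # same name, with and without a diacritic
print(a == b)                         # False: naive comparison misses the match
print(normalize(a) == normalize(b))   # True after normalization

# Normalization alone still cannot reconcile variant transliterations,
# e.g., two common romanizations of the same Arabic name:
print(normalize("Mohammed") == normalize("Muhammad"))  # False
```

As the last line shows, normalization alone cannot reconcile variant transliterations, which is one reason improved search capability is among the modernization goals described below.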
TECS's obsolescence also makes the system difficult and expensive to maintain and support. For example, DHS estimates that TECS's licensing and maintenance costs will be $40 million to $60 million per year in 2015. In 2008, DHS initiated efforts to modernize the TECS capability by replacing the mainframe technology, developing new applications and enhancing existing applications to address expanding traveler screening mission needs, improving data integration to provide enhanced search and case management capabilities, and improving the user interface and data access. DHS plans to migrate away from the existing TECS mainframe by September 2015 to avoid significantly escalating support costs.

The modernization effort is managed by two program offices—CBP and ICE—working in parallel, with each having assumed responsibility for modernizing the parts of the system aligned with its respective mission. CBP's modernization program office organizationally resides within its Office of Information and Technology's Passenger Systems Program Office. This office is responsible for systems that support DHS's and CBP's screening and processing of travelers at U.S. ports of entry, including TECS. ICE's TECS modernization program office resides within ICE's Office of the Chief Information Officer. It is responsible for modernizing ICE's IT systems, adapting and conforming to modern IT management disciplines, and providing IT solutions throughout ICE. The Homeland Security Investigations Executive Steering Committee provides oversight of the ICE TECS Mod program, including approval and prioritization of requirements and functionality and decisions on cost, schedule, and performance. As of July 2013, CBP's program office consisted of approximately 80 staff, split roughly evenly between government and contractor staff, and ICE's program office consisted of about 74 staff—of which 19 are government and 55 are contractors.

In June 2008, CBP awarded a 1-year development contract for its modernization program. From 2009 to 2012, CBP continued its relationship with the same contractor but awarded a different contract for development services across a range of CBP IT programs. This contract was managed by the Passenger Systems Program Office. The development contractor is to provide, among other things, requirements analysis, system development, and testing; system component migration from development to testing and subsequently to production; operation and maintenance; participation in technical reviews; and the development of related documentation as needed. CBP exercised its options on this contract from 2009 to 2012. In January 2013, CBP issued a new contract for development services but canceled the award shortly thereafter to make revisions. CBP officials said that the program continues to move forward and plans to award the new development contract in the fall of 2013. Until then, CBP is continuing to work with the existing contractor. In addition, CBP contracted separately with other vendors for computer hardware (e.g., servers), as well as for program management support, financial support services, and communications services.

The ICE program office's contracting strategy includes the government as the primary integrator of multiple contractors. ICE awarded its development contract in September 2011. The contract was a 1-year contract with four 1-year option years. The development contractor is to provide, among other things, software design and development services, testing services, information security controls, and technical support.
In addition, ICE has established separate contracts for training, data migration, and program management support. DHS’s Office of the Chief Information Officer (CIO) and the Office of the Under Secretary for Management play key roles in overseeing major acquisition programs like TECS Mod. For example, the CIO’s responsibilities include setting departmental IT policies, processes, and standards, and ensuring that IT acquisitions comply with DHS IT management processes, technical requirements, and the approved enterprise architecture, among other things. Within the Office of the CIO, the Enterprise Business Management Office has been given primary responsibility for ensuring that the department’s IT investments align with its missions and objectives. As part of its responsibilities, this office periodically assesses IT investments to gauge how well they are performing through a review of program risk, human capital, cost and schedule, and requirements. In October 2011, DHS’s Under Secretary for Management established the Office of Program Accountability and Risk Management (PARM). The office is to ensure the effectiveness of the overall program execution governance process and has responsibility for developing and maintaining DHS’s Acquisition Management Directive. It is also responsible for providing independent assessments of major investment programs (through the Quarterly Program Accountability Report) and for identifying emerging risks and issues that DHS needs to address. In December 2011, DHS introduced a new initiative to improve and streamline the department’s IT program governance. This initiative established a tiered governance structure for program execution. Among other things, this new structure includes a series of governance bodies, each chartered with specific decision responsibilities for each major investment. Among these are executive steering committees, which serve as the primary decision-making authorities for DHS’s major acquisition programs. The steering committees, which are generally chaired by officials from the DHS agency responsible for the acquisition, are responsible for providing guidance to program management offices, approving program milestone documentation, and making important program execution decisions, as requested by the program manager and/or key stakeholders. In September 2011, ICE chartered an executive steering committee responsible for overseeing its TECS modernization program. ICE’s committee is chaired by the Deputy Associate Director of Homeland Security Investigations and includes voting representation from CBP, as well as other stakeholders. Members include DHS’s CIO and Chief Financial Officer, stakeholder groups (such as U.S. Citizenship and Immigration Services), and CBP’s TECS Mod Program Manager. The steering committee has been meeting since December 2011. In early 2013, CBP established an executive steering committee responsible for overseeing its TECS modernization effort. It held its first governance meeting in February 2013 and is chaired by CBP’s Assistant Commissioner, Office of Information and Technology, and Chief Information Officer/Lead Technical Authority. Members include DHS’s Under Secretary for Science and Technology, CIO, and Chief Financial Officer, as well as representatives from stakeholder groups and ICE’s TECS Mod Program Manager. Figure 2 shows the relationships between the oversight and governance boards involved with the two programs.
We have previously reported on DHS’s management of its major investments generally, and on the management and development of TECS modernization specifically. In July 2012, we reported that DHS had introduced a new IT governance framework that was generally consistent with recent Office of Management and Budget guidance and with best practices for managing projects and portfolios identified in our IT Investment Management framework. Specifically, of the nine practices in the framework, we found that the department’s new governance framework partially addressed two and fully addressed the other seven. For example, consistent with Office of Management and Budget guidance calling for the CIO to play a significant role in overseeing programs, DHS’s draft procedures required that lower-level boards overseeing IT programs include the DHS CIO, a component CIO, or a designated executive representative from a CIO office. In addition, consistent with practices identified in the framework, DHS’s draft procedures identified key performance indicators for gauging portfolio performance. However, DHS had not completed the associated policies and procedures because, according to department officials, its focus had been on piloting the new governance process. We recommended that DHS finalize the associated policies and procedures and fully follow best practices for implementing the process. DHS concurred with our recommendations. Regarding the performance of the TECS modernization effort, in a September 2012 report we noted that CBP’s program encountered delays because program officials needed to develop new requirements to accommodate users’ requests to interface with an additional system. Additional delays were caused by questions about whether the system duplicated functions performed by another agency system. We recommended that DHS address shortcomings and develop corrective actions for all major IT investment projects having cost and schedule shortfalls, including TECS Mod. DHS agreed with our recommendation. In September 2011, we reported that CBP modernization program officials reported delays in completing required program documentation due, in part, to their not understanding the approval processes at the department level. We further noted that, although the program had recently been reviewed and approved by the DHS acquisition review board, CBP’s program office had not completed the required acquisition plan the board typically uses to evaluate system effectiveness and alignment with the agency’s mission. In addition, the program had not yet completed privacy impact assessments that covered the entire program. We recommended that DHS address these shortfalls, and the department concurred. CBP has defined the scope for its modernization program, but its schedule and cost commitments continue to change and are again being revised. Further, ICE is overhauling the scope, schedule, and cost of its program after discovering that its initial solution is not technically viable. Thus, it is unclear whether these programs are on track to deliver planned functionality by September 2015. CBP has defined the scope of its program to include the replacement of its aging mainframe-based platform with a mixture of hardware and custom-developed and commercial software. Further, CBP plans to move data from the legacy TECS system to databases hosted at DHS’s data centers and to use DHS’s network infrastructure. CBP expects that its modernization efforts will yield certain improvements over the existing system, including the following.
Enhancements to TECS’s search algorithms, to better match names from foreign alphabets and to address gaps in current processes that could result in missing a person of interest. These enhancements include an improved ability for inspectors to update information on travelers at air and sea borders at the time of encounter. Improvements in the flow and integration of data between CBP and its partner agencies and organizations. These improvements are intended to aid the agency’s inspectors by providing timely, complete, and accurate information about a traveler during the secondary inspection process. CBP planned to develop, deploy, and implement these capabilities incrementally across five projects from 2008 to 2015. Secondary Inspection: This project is to support processing of travelers referred from primary inspection for either enforcement or administrative reasons. According to CBP, this project’s functionality was fully deployed to all air and sea ports of entry in 2011, and was fully deployed to all land ports of entry in 2013. High Performance Primary Query and Manifest Processing: This project is intended to improve TECS data search results in order to expedite the processing of manifests from individuals traveling to the United States on commercial or private aircraft and commercial vessels. It is to be fully operational by March 2015. Travel Document and Encounter Data: This project is intended to improve CBP’s ability to query and validate travel documentation for both passengers and their means of conveyance. It is to be fully operational by March 2015. Lookout Record Data and Services: This project is intended to improve the efficiency of existing data screening and analysis capabilities. It is to be fully operational by March 2015. Primary Inspection Processes: This project is intended to modernize the overall inspection process and provide support for additional or random screening and communication functions. It is to be fully operational by March 2015. As part of each of these projects, CBP is also developing an online access portal, called TECS Portal, for authorized users to access information remotely using a modern web browser, along with security and infrastructure improvements and the migration of data from the current system to databases in the new environment at the DHS data center. Ultimately, TECS Mod functionality is to be deployed to over 340 ports of entry across the United States. To date, Secondary Inspection is operational, approximately 6 months earlier than estimated in the program’s 2012 acquisition program baseline. In addition, CBP reports that a portion of the High Performance Primary Query and Manifest Processing project is also operational. The remaining projects are all scheduled to be operational by March 2015. Appendix IV provides additional information about these projects. However, the program is revising its schedule and cost baselines, making its remaining commitments uncertain. Specifically, the program is revising its acquisition program baseline for the second time in under a year. CBP revised the program’s initial acquisition program baseline in November 2012, establishing new commitments for the cost and schedule of each of the projects, as well as for the program overall. According to program officials, as of June 2013 CBP was again revising its program baseline and plans to complete the revision by September 2013.
Officials explained that this time, CBP is revising its commitments to reflect actual cost and schedule data gathered since the last revision. The completion dates for each of CBP’s five projects have changed over time. Specifically, four of the projects are scheduled to be delivered later than originally planned, and one project—Primary Inspection Processes—is scheduled to be completed ahead of the initial schedule. For example, according to the October 2009 program plan, Secondary Inspection was to be operational in September 2012. That operational date was then modified in the program’s acquisition program baseline (which was approved in October 2010) to June 2013 (9 months later than originally scheduled). Then, in May 2011, CBP notified DHS that it was going to miss several of its schedule milestones, including the one for Secondary Inspection. As a result, CBP revised its schedule baseline for TECS Mod in November 2012; the new operational date for the project was to be March 2014. That date was reiterated in the CBP-ICE Joint Integration Process document, signed by CBP and ICE program management, upon its release in April 2013. However, shortly thereafter, the program again revised the operational date, this time to September 2013. Figure 3 illustrates the changes to CBP’s schedules for the five projects over time. Compounding the repeated rebaselining and schedule changes, CBP has not fully developed the master schedule it uses to manage work activities and monitor the program’s progress. Our research has identified that a key element of a complete and useful schedule for executing a program such as TECS Mod is the logical sequencing of all work activities, so that the start and finish dates of future activities, as well as key events, can be reliably forecast from the status of completed and in-progress activities. While the program office has developed high-level schedules for each of its projects, officials explained that the program has not fully defined and documented all the linkages between work activities within the individual project schedules, nor has it defined the dependencies that exist between projects in the master schedule. The program’s master schedule provided to us in May 2013 showed that approximately 65 percent of CBP’s remaining work activities were not linked with other associated work activities. Without these linkages, activities that slip early in the schedule do not transmit delays to activities that should depend on them, and a critical path cannot be determined, which means that management is unable to determine how a slip in the completion date of a particular task may affect the overall project schedule. Moreover, as of June 2013, the program had not yet developed a detailed schedule for the last project, Primary Inspection Processes, nor had it completed a detailed schedule for parts of the second project, High Performance Primary Query. Instead of managing from a fully developed master schedule, officials explained that they manage the program according to the milestones in the program’s acquisition program baseline and do so by sharing information about project and program dependencies at meetings between project teams. However, the lack of a complete schedule raises questions about the validity of the milestones in the acquisition program baseline and the certainty of the program’s schedule commitments.
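To illustrate why these linkages matter, consider a minimal scheduling sketch. The following Python fragment is purely illustrative: the activity names, durations, and dependencies are hypothetical and are not drawn from CBP’s actual schedule. It shows how a fully linked network yields a critical path, while an unlinked activity sits outside the network and cannot transmit a slip to anything downstream.

```python
# Minimal sketch of schedule-network analysis (hypothetical data; not CBP's
# tooling or schedule). Each activity has a duration and a list of
# predecessors; activities with no links at all are flagged, and a critical
# path is recovered from a forward pass over the network.
from collections import defaultdict

# activity -> (duration in working days, predecessor activities)
activities = {
    "design_interfaces":    (20, []),
    "migrate_lookout_data": (35, ["design_interfaces"]),
    "build_query_service":  (45, ["design_interfaces"]),
    "integration_test":     (15, ["migrate_lookout_data", "build_query_service"]),
    "deploy_to_ports":      (10, ["integration_test"]),
    "train_officers":       (12, []),   # no predecessors and no successors
}

successors = defaultdict(list)
for act, (_, preds) in activities.items():
    for p in preds:
        successors[p].append(act)

# Unlinked activities cannot propagate delays: a slip in train_officers
# moves nothing else in this schedule, which is the weakness GAO describes.
unlinked = [a for a, (_, preds) in activities.items()
            if not preds and not successors[a]]
print(f"unlinked: {unlinked} "
      f"({100 * len(unlinked) / len(activities):.0f}% of activities)")

# Forward pass: earliest finish of each activity given its predecessors.
early_finish = {}
def finish(act):
    if act not in early_finish:
        dur, preds = activities[act]
        early_finish[act] = dur + max((finish(p) for p in preds), default=0)
    return early_finish[act]

end = max(finish(a) for a in activities)

# Walk backward from the latest-finishing activity to recover one critical path.
path, cur = [], max(activities, key=finish)
while cur:
    path.append(cur)
    preds = activities[cur][1]
    cur = max(preds, key=finish) if preds else None
print("critical path:", " -> ".join(reversed(path)), f"({end} days)")
```

In a schedule where roughly 65 percent of remaining activities are unlinked, most of the network behaves like the unlinked activity in this sketch: its dates can slip without any visible effect on the forecast completion date.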
Furthermore, the program’s current schedule assumes the concurrent delivery of four of the five projects. As we have previously reported, the concurrent development of system components (e.g., the five TECS modernization projects) introduces risks that could adversely affect program cost and schedule, including contention for limited resources. For the CBP TECS modernization program, these risks may already be materializing. In particular, program officials told us that development work for Primary Inspection Processes was halted because of anticipated funding shortfalls due to sequestration. However, when the funding shortfalls did not materialize, the program was still unable to initiate Primary Inspection Processes development because, according to the Passenger Systems Program Office Executive Director, the program’s contractor resources had been diverted to other projects, and shifting those resources back to Primary Inspection Processes would affect work on those projects. The Executive Director further stated that if work on Primary Inspection Processes did not begin by January 2014, the program would not meet its operational date of September 2015. Program officials attributed the schedule weaknesses to a lack of appropriately skilled resources. Specifically, they stated that the program has only two staff members with the skills needed to develop and maintain the schedules, and that, in their view, fully documenting all the dependencies would be time consuming and not worth the effort because the limitations in the integrated master schedule were not significant enough to warrant the additional resources needed to fix them. However, without a complete and integrated master schedule that includes all program work activities and their associated dependencies, CBP is not in a position to accurately determine the amount of time required to complete its TECS modernization effort and to develop realistic milestones. Moreover, the program does not have a basis for guiding the projects’ execution and measuring progress, thus increasing the agency’s risk of not meeting the program’s completion dates. Like TECS Mod’s schedule milestones, the program’s cost estimates have also changed as a result of rebaselining and are currently being revised. The program’s current baselined life-cycle cost estimate is approximately $724 million, including $31 million for planning and program management, $212 million for development, and $481 million for operations and maintenance. However, as previously stated, the program is in the process of revising this estimate. As of August 2013, the program reported that it had expended about $226 million—approximately $170 million for planning/program management and development/acquisition, and $56 million for operations and maintenance. While officials reaffirmed their intention to complete the whole program by the 2015 deadline, the program risks not doing so because its schedule milestones for each of the projects are based on incomplete schedule information and because concurrency among the projects has resulted in competition for the same contractor resources. Moreover, until the rebaselining is complete, it is unclear when the program actually intends to deliver functionality or how much doing so will cost.
ICE initially defined the scope of its TECS modernization effort to include specific law enforcement and criminal justice information functions; tools to support ICE officers’ collection of information, data analysis, and management operations; enhanced capabilities to access and create data linkages with information resources from elsewhere in DHS and other law enforcement agencies; and capabilities to better enable investigative and intelligence operations, corresponding management activities, and information sharing. Further, ICE established plans to deliver functionality in two phases, Core Case Management and Comprehensive Case Management, each of which was to contain several releases. Specifically: Phase 1: Core Case Management: This phase was to encompass all case management functions currently included in the existing system. ICE planned to develop and deploy these functions in three releases beginning in 2009, and was scheduled to deploy Release 1 by December 2013, with additional releases following about every 12 months, in order to achieve independence from the existing TECS platform by September 2015. Specific capabilities to be provided included: basic electronic case management functions, such as opening cases, performing supervisory review of cases, and closing cases within the system; development of reports for use as evidentiary material in court proceedings arising from ICE agents’ investigations; maintenance of records relating to the subjects of ICE investigations; and audit capabilities to monitor system usage. Phase 2: Comprehensive Case Management: This phase was to expand on the features delivered in phase 1 and was to be delivered in four increments starting in 2016, with an estimated completion date in fiscal year 2017. Regarding costs, ICE’s baselined life-cycle cost estimate is approximately $818 million, including about $17 million for planning, roughly $328 million for development and acquisition, and approximately $473 million for operations and maintenance. However, in 2012 the program began to experience technical issues, which resulted in a delay of approximately 7 months and the deferral or removal of functionality from Release 1. Specifically, ICE decided in 2012 that Release 1 would provide functionality only for the “person” type of subject record; all other types of subject records were deferred to future releases. Then, in October 2012, the agency conducted a review of the program’s remaining work for Release 1 to determine whether, in light of increasing rates of program defects and a slowdown in the program’s overall progress, ICE was positioned to deliver Release 1 as planned. Based on the review results, the agency deferred or eliminated approximately 3,000 of the 4,300 in-scope requirements (about 70 percent of the original total) in order for the program to meet its planned schedule commitments. The deferred or eliminated functionality includes the capability to perform supervisory review of cases and certain electronic notifications and alerts. ICE’s program manager said that, faced with continuing technical issues and related delays, the program initiated a second program review in January 2013 at the direction of its executive steering committee, with participation from the program office, the contractor, and Homeland Security Investigations.
Based on the review results, the program office determined in June 2013 that the system under development was not technically viable and would not be fielded as part of ICE’s final solution, due to ongoing technical difficulties related to the user interface, access controls, and case-related data management. The program manager explained that, rather than continue with the current technical solution, on which the program had spent approximately $19 million in acquisition/development costs, the program would seek alternatives and start over. The program manager said ICE is now assessing such alternatives, including a revised technical approach offered by the current development contractor, as well as off-the-shelf solutions in use at other agencies. According to the program manager, significant portions of the previous solution’s data migration and infrastructure-related components might be salvaged for reuse in whatever new solution is chosen. But, depending on the approach selected, most of the user interfaces, security components, and business rules developed for the program to date are unlikely to be reused. The program manager stated that the program intends to decide which course it will pursue by October 2013, and based on that decision, it will update the program’s life-cycle cost estimate, schedule, and requirements documents (as needed). Further, he stated that ICE intends to proactively revise its May 2011 acquisition program baseline before it breaches at the end of December 2013, and he reaffirmed the agency’s intention to deploy a solution by the 2015 deadline. In the meantime, according to the program manager, ICE has largely halted development work, and it will be January 2014 at the earliest before any new development work begins. Given the time lost in developing the current technical solution, as well as the already reduced program scope, ICE cannot say what specific features it will release to users, what its schedule for deploying this functionality will be, or how much such efforts will cost. Without clearly defined commitments, ICE is at risk of not achieving independence from the existing system by 2015. Both agencies have generally implemented risk management practices, but they have had mixed results in managing requirements for their programs. While they have managed many risks in accordance with recognized leading practices, neither agency has identified all known risks or escalated them for timely review by senior management. Further, while CBP’s requirements management processes and practices are largely consistent with leading practices, key requirements activities were well underway before those practices were established. In addition, ICE operated without documented requirements management guidance for several years, and its requirements development and management activities suffered as a result. ICE has recently developed guidance that is consistent with leading practices but has not yet implemented it. Risk management is a process for anticipating problems and taking appropriate steps to mitigate risks and minimize their impact on project commitments.
According to relevant guidance, effective risk management practices include, among other things: establishing and documenting risk management strategies; assigning roles and responsibilities for managing risks; creating a risk inventory, documenting all risks in it, prioritizing them, and developing plans to mitigate them; and regularly tracking the status of risks and mitigation efforts, including documenting the thresholds that trigger escalation of risks for review by senior management. Of these four leading practices, CBP fully implemented two and partially implemented the other two (see table 1). Specifically, it documented a risk management strategy and established roles and responsibilities for managing risks. However, while the agency established a risk inventory, it has not identified all of the risks facing the program. In addition, CBP only partially implemented risk tracking: it did not define thresholds that would trigger automatic review of a risk by senior management, and thus did not always bring risks to senior management’s attention; it also did not capture all of the information necessary for tracking risks. The reasons that concerns we identified were not documented as risks include program officials’ view that the limitations in the integrated master schedule were not significant enough to warrant the additional resources needed to fix them and that the lack of a fully defined schedule was not a program risk, as well as the existing contractor’s lack of the skills and capability needed to implement earned value management. However, both an integrated master schedule and earned value management are important tools for effective program management and oversight, and the absence of these capabilities increases the risk that a program like TECS Mod will not deliver its intended capabilities within cost and schedule commitments. Therefore, until all risks are captured in the risk inventory with the information necessary to track their status, thresholds are defined to trigger review by senior management, and relevant risks are escalated to senior management in a timely manner, key decision makers will be less than fully informed, and the program will likely continue to experience the types of problems discussed earlier in this report. ICE fully implemented two of the leading risk management practices and partially implemented the other two (see table 2). Specifically, it defined and documented a risk management strategy, and it established roles and responsibilities for risk identification, tracking, and monitoring. The program office has also established a risk inventory with mitigation plans, but it has not identified all of the known risks. Further, while ICE has tracked the status of risks and mitigation efforts, it has not always followed its own processes for escalating risks outside of the program for senior management’s attention. According to ICE officials, they did not document the problems with the requirements backlog and the technical solution in the program’s risk inventory because they did not want to make the risks visible until they understood their full scope; they included the risks in the inventory only after attempts to address the problems had failed. However, early identification is key to effective risk management: risks must be known and visible as early as possible so that they can be managed and mitigated, ultimately minimizing their impact on the program.
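The escalation mechanism these practices call for can be made concrete with a small sketch. The following Python fragment is illustrative only: the risk entries, the scoring convention (probability times impact), and the escalation threshold are all hypothetical and are not drawn from DHS or program guidance. Its point is that once a threshold is defined, escalation becomes a rule applied to every entry in the inventory rather than a judgment call that can be deferred.

```python
# Illustrative sketch only (hypothetical fields and thresholds; not DHS's
# actual risk tooling): a minimal risk inventory in which each risk carries
# an exposure score, and an escalation rule flags risks for senior-management
# review once a defined threshold is crossed.
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    title: str
    probability: float          # 0.0 - 1.0, hypothetical scale
    impact: int                 # 1 (negligible) - 5 (severe), hypothetical
    mitigation_plan: str
    identified: date
    status: str = "open"

    @property
    def exposure(self) -> float:
        # Simple probability-times-impact scoring; a common convention,
        # assumed here rather than taken from DHS guidance.
        return self.probability * self.impact

ESCALATION_THRESHOLD = 2.5      # hypothetical trigger for senior review

inventory = [
    Risk("Master schedule lacks activity linkages", 0.9, 3,
         "Dedicate scheduler time to document dependencies", date(2013, 5, 1)),
    Risk("Contractor cannot support earned value management", 0.8, 4,
         "Add EVM capability to next contract award", date(2013, 6, 1)),
    Risk("Minor report-formatting defects", 0.5, 1,
         "Fix in next maintenance release", date(2013, 6, 15)),
]

# The escalation trigger: any open risk at or above the threshold is
# automatically queued for the executive steering committee, removing the
# discretion that GAO found led to late or missed escalations.
for risk in inventory:
    if risk.status == "open" and risk.exposure >= ESCALATION_THRESHOLD:
        print(f"ESCALATE to ESC: {risk.title} (exposure {risk.exposure:.1f})")
    else:
        print(f"track locally:   {risk.title} (exposure {risk.exposure:.1f})")
```

Under a rule like this, concerns such as the schedule and earned value issues discussed above would be queued for senior review as soon as they were entered in the inventory, rather than surfacing only after mitigation attempts failed.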
Until all risks are captured in the risk inventory, thresholds are defined, and risks are shared with senior management in a timely manner, the program may continue to experience requirements and technical problems like those discussed earlier in this report. Well-defined and managed requirements are a cornerstone of effective system development and acquisition efforts. According to recognized guidance, a documented and disciplined process for developing and managing requirements can help reduce the risk of developing a system that does not meet user needs, cannot be adequately tested, and does not perform or function as intended. Such a process includes, among other things, establishing a process for developing and managing requirements to ensure that requirements are identified, reviewed, and controlled; assigning and defining the roles and responsibilities for all those involved in requirements management activities; eliciting user needs, translating them into requirements, and analyzing them to ensure that each requirement is unique, unambiguous, and testable; and defining a disciplined change control process. Of the four practices for requirements management, CBP fully implemented three and partially implemented the fourth (see table 3). Specifically, it established a requirements management process, assigned roles and responsibilities for requirements development and management activities, and defined a change control process. However, although CBP elicited user needs and translated them into requirements, it could not document whether and how each requirement was analyzed to ensure that it was unique, unambiguous, and testable. Although CBP’s current requirements process largely addresses the leading practices, it was not established until March 2012 and therefore did not guide requirements development for the majority of the program. Specifically, prior to March 2012, the program used the Passenger Systems Program Office requirements guidance for requirements elicitation and documentation, which, according to officials, was too generic to meet the needs of the program. In particular, the guidance allowed each of the projects to develop requirements independently of one another and to document them without standardization. According to CBP officials, the requirements for the projects developed earlier in the program—such as Secondary Inspection and High Performance Primary Query—were not as consistently well formed or detailed as those for subsequent projects because of the lack of a rigorous process. Without well-defined and implemented processes for analyzing requirements to ensure that they are unique, unambiguous, and testable, CBP risks TECS Mod not performing as intended in users’ environments or taking longer to develop and test. For several years, ICE operated without an established requirements management process, which resulted in significant problems for the program. Although the agency began developing requirements in June 2009, the program did not have a documented requirements management process in place to guide its activities until March 2011, when ICE issued a requirements management process that reflected the program’s initial intent to use a traditional system development approach. However, that process became outdated a few months later, in October 2011, when the program transitioned to an Agile development methodology.
Rather than refine or replace its newly issued requirements management process, officials proceeded without one until the current requirements management documents were issued in March 2013. As shown in table 4, ICE’s requirements development and management activities during this time only partially satisfied one of the four leading practices and did not satisfy the other three. As a result of these limitations, program officials told us that they and their contractor did not complete work on over 2,500 requirements that were necessary for Release 1 to function properly. This lapse was not identified until fall 2012, when system prototypes, which had previously passed individual component tests, were combined and tested in an end-to-end manner for the first time in ICE’s integrated test environment. According to ICE’s Program Manager, the system failed this testing because of the unaccounted-for requirements. Analysis performed by the program revealed that it would take an additional 10 months of work to address the missing requirements. In order to meet its schedule commitments, ICE decided to eliminate or defer about 70 percent of the original requirements for Release 1. This, in turn, has contributed to the difficulties the agency faces in delivering the entire modernized system before the 2015 deadline. In March 2013, ICE documented a new requirements management process for the Agile software development methodology it had adopted, and it further established a change control board and a standard operating procedure for managing changes to program requirements. Collectively, these two documents address all four of the leading practices called for in guidance, as described below. ICE has defined a requirements management process that describes the practices needed to ensure requirements are elicited, reviewed, approved, and documented. For example, it describes the structure and tools to be used to organize and maintain the various types of requirements. The requirements management process identifies roles and responsibilities for requirements management. Specifically, the Requirements Manager, among other things, plans requirements management activities throughout the project development life cycle and maintains the requirements management strategy. The requirements analysts, among other things, participate in the elicitation, analysis, and refinement of program requirements. The requirements leads represent the needs of the product owner and the delivery team, and provide input on the prioritization of requirements. The requirements management process describes how user needs are to be collected and translated into requirements. The process also describes the attributes of a good requirement, including that it should be, among other things: (1) necessary—unique and not redundant to another requirement; (2) clear—not possible to interpret in more than one way and not in conflict with or contradictory to another requirement; and (3) verifiable—able to be tested to determine whether or not the requirement is met. ICE also has a defined change control process. Specifically, its new change control board standard operating procedure describes a process to ensure that (1) the process for system change requests is standardized, (2) system change requests are routed to appropriate staff for approval, (3) system change requests are processed in a timely manner, and (4) system change requests can be tracked.
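Attribute definitions like these lend themselves to automated screening before human review. The sketch below is illustrative only: the rules, requirement records, and field names are hypothetical and are not taken from ICE’s process documents. Checks of this kind cannot replace an analyst’s judgment, but they can flag likely duplicates, vague wording, and untestable statements for review.

```python
# Illustrative sketch (hypothetical rules and data): screen candidate
# requirements for the attributes ICE's process calls for, i.e., that each
# requirement be necessary (not redundant), clear (not ambiguous), and
# verifiable (traceable to a test).

AMBIGUOUS_TERMS = {"fast", "user-friendly", "adequate", "as appropriate"}

requirements = [
    {"id": "REQ-001",
     "text": "The system shall open a case record within 2 seconds of request.",
     "acceptance_test": "TC-101"},
    {"id": "REQ-002",
     "text": "The system shall be user-friendly.",        # ambiguous wording
     "acceptance_test": None},                             # not testable
    {"id": "REQ-003",
     "text": "The system shall open a case record within 2 seconds of request.",
     "acceptance_test": "TC-103"},                         # duplicate of REQ-001
]

seen_texts = {}
for req in requirements:
    findings = []
    # necessary / unique: identical text suggests a redundant requirement
    if req["text"] in seen_texts:
        findings.append(f"duplicates {seen_texts[req['text']]}")
    seen_texts.setdefault(req["text"], req["id"])
    # clear: flag vague wording for analyst review
    if any(term in req["text"].lower() for term in AMBIGUOUS_TERMS):
        findings.append("contains ambiguous wording")
    # verifiable: every requirement should trace to an acceptance test
    if not req["acceptance_test"]:
        findings.append("no acceptance test traced")
    status = "; ".join(findings) if findings else "passes screening"
    print(f"{req['id']}: {status}")
```

A change control board could run the same screening on each incoming change request before routing it for approval, which would support the standardized, trackable process the standard operating procedure describes.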
These requirements management processes are essential to ensure that the TECS Mod system meets mission needs, performs as intended, and avoids the kind of costly and time-consuming rework that the program has recently experienced. Leading practices that we and others have identified note that oversight is a critical element of an investment’s life cycle and that, to be effective, oversight and governance bodies should, among other things: monitor a project’s performance and progress toward predefined cost and schedule expectations; ensure that corrective actions are identified and assigned to the appropriate parties at the first sign of cost, schedule, and/or performance problems; ensure that these corrective actions are tracked until the desired outcomes are achieved; and rely on complete and accurate data to review the performance of IT projects and systems against stated expectations, including comparing estimated schedule time frames to actual schedules (including schedule slippages and/or compressions) and comparing estimated costs with funds spent or obligated to date, any changes in funding, and the impact of those changes. As previously mentioned, DHS IT investments such as the two TECS modernization programs are overseen by governance bodies at multiple levels across DHS, including each program’s executive steering committee and DHS’s Office of the CIO. While the programs’ steering committees have the authority to oversee all aspects of the execution of the programs between gates, the Office of the CIO provides department-level oversight. To their credit, these governance bodies have taken actions to address three of the four leading practices. Specifically, CBP’s steering committee implemented two practices (it is too soon to determine whether it has effectively implemented a third); ICE’s steering committee implemented three practices; and the Office of the CIO implemented three practices. Table 5 shows whether each of the three governance bodies met the leading practices for performing oversight. As shown in the table, the governance bodies implemented three of the four leading practices: CBP Executive Steering Committee. This body has implemented two leading practices: it monitors the program’s performance and ensures that corrective actions are identified. The committee was chartered in early 2013 and, as of June 2013, had met three times. In these meetings, the committee reviewed the program’s cost and schedule performance and assigned related action items to the appropriate individuals for closure. For example, during the February 2013 meeting, the committee discussed risks that could affect the program’s cost and schedule and created an action item for the program manager to discuss risk mitigation strategies with the Component Acquisition Executive. The CBP Performance Manager stated that this action item was completed as of July 2013. In addition, the steering committee tracked action items from its initial meetings, but because there had been only three meetings as of June 2013, it is too soon to determine whether the committee is doing so consistently. ICE Executive Steering Committee. This body has implemented three leading practices: it monitors the program’s performance, ensures that corrective actions are identified, and generally tracks the action items to completion.
For example, it discussed the program’s cost and schedule performance in eight of the nine meetings held since its inception in September 2011, directed that actions be taken to address known issues, and generally tracked the resulting action items to completion. Specifically, in a February 2013 meeting, the committee discussed schedule slippage and issues with cost estimates, and it created an action item for the program to provide the committee with estimated start and completion dates for a new life-cycle cost estimate. This action item was confirmed as “in progress” at the April 2013 meeting. The Office of the CIO. This office implemented three of the leading practices. Regarding monitoring, its Enterprise Business Management Office performs program health assessments to monitor an IT program’s performance through a review of program risk, human capital, cost and schedule, contract oversight, and requirements. Each assessment results in a weighted score between 1 and 100 that is then converted to the five-level CIO risk rating published on the Office of Management and Budget’s IT Dashboard. The frequency at which the office performs these assessments is based on each program’s CIO rating of high, medium, or low risk. For example, it reviews high-risk programs monthly, medium-risk programs at least quarterly, and low-risk programs on a semiannual basis.
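The mechanics of this score-to-rating-to-cadence pipeline can be sketched briefly. The Python fragment below is illustrative only: the report does not publish the scoring bands DHS uses to map a 1-100 health score onto the five rating levels, nor exactly how the five levels collapse into the three review cadences, so the cutoffs and groupings here are hypothetical.

```python
# Illustrative sketch (hypothetical cutoffs and groupings; not DHS's actual
# scoring rules): convert a 1-100 weighted program health score to a
# five-level risk rating, then derive a review cadence from the rating.

def cio_risk_rating(health_score: int) -> str:
    """Map a 1-100 weighted health score to a five-level risk rating."""
    assert 1 <= health_score <= 100
    bands = [                       # hypothetical cutoffs
        (80, "low"),
        (60, "moderately low"),
        (40, "medium"),
        (20, "moderately high"),
        (0,  "high"),
    ]
    return next(label for floor, label in bands if health_score > floor)

def review_frequency(rating: str) -> str:
    # The report gives cadences only for high, medium, and low risk; the
    # grouping of the two "moderately" levels is an assumption.
    if rating in ("high", "moderately high"):
        return "monthly"
    if rating == "medium":
        return "at least quarterly"
    return "semiannual"

for score in (85, 70, 45, 15):
    rating = cio_risk_rating(score)
    print(f"score {score:3d} -> {rating:15s} -> reviewed {review_frequency(rating)}")
```

The design point is that the rating, and therefore the oversight cadence, is only as sound as the inputs to the weighted score; as discussed below, some of those inputs were incomplete or inaccurate.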
When rating the TECS Mod programs, the Office of the CIO rated ICE’s program as medium risk in March 2013 and CBP’s program as moderately low risk in January 2013, the most recent ratings as of July 2013. The Office of the CIO identifies corrective actions during the program health assessments and ensures that the actions are tracked to closure through its TechStat review process. The CIO rating is used as one criterion to determine whether a program will be subject to such a review: any program that receives a high-risk rating is a candidate for a TechStat. As part of this process, the office assigns and follows up on corrective actions. However, neither program has been the subject of a TechStat because, as of July 2013, neither was considered high risk. In addition, PARM monitors the performance of major acquisition programs across DHS in order to identify emerging risks and issues (such as cost and schedule problems) and then provides data to decision makers. In doing so, the office assesses programs against 15 separate criteria, similar to those assessed in the program health assessment (including risk and requirements management, and cost and schedule performance), and creates the Quarterly Program Accountability Report. The report describes programs’ value-to-risk ratio and, according to an agency official, is used as a tool to assess program risks and issues. PARM has created three of these reports thus far, but comparing them is difficult because the office has changed the criteria and methodology to incorporate lessons learned. In the report for the third and fourth quarters of fiscal year 2012, the office rated both programs as high value and low risk. However, while the governance bodies have taken actions to oversee the TECS modernization programs, the lack of complete, timely, and accurate data has affected their ability to make informed and timely decisions, thus limiting their effectiveness in several cases. For example: Steering committees. In an April 2013 meeting, the CBP program manager briefed the steering committee on the program’s target milestone dates, even though the agency told us a month later that it had not fully defined its schedule. This raises questions about the completeness and accuracy of the proposed milestone dates upon which the committee bases its oversight decisions. Similarly, in a February 2013 ICE steering committee meeting, the Office of the CIO noted that the program-provided life-cycle cost estimate was out of date and that a new one was needed before the program’s cost and schedule performance could be measured accurately. The Office of the CIO. In its most recent program health assessments, the Enterprise Business Management Office based its moderately low risk rating for CBP partly on the program’s use of earned value management; however, the program manager told us that the CBP program is not using earned value management because neither the program office nor its development contractor has the capability to do so. Similarly, even though ICE had not reported recent cost or schedule data for its program—an issue that may signal a significant problem—the Office of the CIO rated ICE’s program as medium risk. The reliance on incomplete and inaccurate data raises questions about the validity of the risk ratings. PARM. In the most recent Quarterly Program Accountability Report, issued in early July 2013, PARM rated both programs as high value and low risk. CBP’s low-risk rating was based in part on the program’s master schedule and acquisition program baseline; however, as we stated earlier, problems with the agency’s schedule raise questions about the validity and quality of those milestones. Further, the low-risk rating PARM issued for ICE was based, in part, on its Quarterly Program Accountability Report for April through September 2012, which rated the program’s cost performance with the lowest possible risk score. Yet program documents show that, during that same period, cost and schedule performance was declining and varied significantly from the baseline: as of June 2012, TECS Mod had variances of 20 percent from its cost baseline and 13 percent from its schedule baseline. Moreover, both the cost and schedule estimates underlying the baseline were outdated. Further, the Quarterly Program Accountability Report is not issued by PARM on a timely basis and, as such, is not an effective tool for decision makers. For example, the most recent report was published on July 7, 2013, over 9 months after the reporting period ended. Since then, ICE has experienced the issues with its technical solution described earlier in this report, and, as discussed, these issues have caused the program to halt development and replan its entire acquisition. As a result, the newly issued report does not reflect ICE’s current status and thus is not an effective tool for management’s use. Until these governance bodies base their reviews of performance on timely, complete, and accurate data, they will be limited in their ability to provide effective oversight and to make timely decisions. After spending millions of dollars and over 4 years on TECS modernization, it is unclear when it will be delivered and at what cost.
While CBP’s program has partially delivered one of the five major projects that comprise the overall effort, its program commitments are currently being revised, project milestones have changed over time, and the master schedule the program uses to manage its work activities and monitor progress has not been fully developed. These limitations raise doubts about the validity of the program’s schedule commitments and greatly impair the program’s ability to monitor and effectively manage its progress. A complete and integrated schedule provides the basis for valid schedule commitments; therefore, it is important that, as CBP revises its commitments, it ensure that its master schedule accurately reflects all of the work to be done, as well as the timing, sequencing, and dependencies among the work activities. Moreover, ICE’s program has made little progress in deploying its modernized case management system and is now completely overhauling its original design and program commitments, placing the program in serious jeopardy of both missing the 2015 deadline and delaying the deployment of needed functionality. It is therefore imperative that the agency quickly develop and execute its revised strategy for implementing TECS Mod, including the functionality to be delivered, when it will be delivered, and how much it will cost. Further, while both agencies have defined key practices for managing risks and requirements, the programs were not actively managing all risks, and key requirements practices were developed only after several key activities had been performed. ICE in particular operated for years without a requirements management process, which resulted in poorly defined and incomplete requirements, and ultimately in costly rework and delays. Going forward, it is therefore important that the programs implement these critical practices to help ensure that they deliver the functionality needed to meet mission requirements and minimize the potential for additional costly rework. Moreover, while DHS’s various governance bodies are generally following leading practices, they rely on data that are sometimes incomplete or inaccurate. Thus, it is important that DHS ensure that oversight decisions are based on complete and accurate program data. Until DHS’s governance bodies are regularly provided complete and accurate data for use in their performance monitoring and oversight duties, their oversight decisions may be based on incorrect or outdated data and, therefore, may be flawed or of limited effectiveness. To improve DHS’s efforts to develop and implement its TECS Mod programs, we recommend that the Secretary of Homeland Security direct the CBP Commissioner to ensure that the appropriate individuals take the following four actions: 1. develop an integrated master schedule that accurately reflects all of the program’s work activities, as well as the timing, sequencing, and dependencies between them; 2. ensure that all significant risks associated with the TECS Mod acquisition are documented in the program’s risk and issue inventory—including the acquisition risks mentioned in this report—and are briefed to senior management, as appropriate; 3. revise the TECS Mod program’s risk management strategy and guidance to include clear thresholds for when to escalate risks to senior management, and implement them as appropriate; and 4. revise and implement the TECS Mod program’s requirements management guidance to include the validation of requirements to ensure that each is unique, unambiguous, and testable.
We further recommend that the Secretary of Homeland Security direct the Acting Director of ICE to ensure that the appropriate individuals take the following three actions: 1. ensure that all significant risks associated with the TECS Mod acquisition are documented in the program’s risk and issue inventory—including the acquisition risks mentioned in this report—and are briefed to senior management, as appropriate; 2. revise the TECS Mod program’s risk management strategy and guidance to include clear thresholds for when to escalate risks to senior management, and implement them as appropriate; and 3. ensure that the newly developed requirements management guidance and the recently revised guidance for controlling changes to requirements are fully implemented. We also recommend that the Secretary of Homeland Security direct the Under Secretary for Management and the Acting Chief Information Officer to ensure that data used by the department’s governance and oversight bodies to assess the progress and performance of major IT acquisition programs are complete, timely, and accurate. In written comments on a draft of this report, DHS agreed with seven of our recommendations and disagreed with one. The department described actions planned and under way to address the seven recommendations and noted that it is committed to continuing its work toward full operational capability of its TECS Mod programs to enhance functionality for CBP, ICE, and other departments and agencies that have access to the system. The department also provided technical comments, which we incorporated as appropriate. Regarding our recommendation that CBP develop an integrated master schedule that accurately reflects all of the program’s work activities, as well as the timing, sequencing, and dependencies between them, DHS stated that CBP’s Office of Information and Technology believes that the master schedule currently in use provides the requisite visibility into program work activities and that it considers the program’s scheduling efforts to be sound. Further, DHS stated that the timing and sequencing of TECS Mod’s key activities, as well as the dependencies among activities, are tracked via the program schedule. We do not agree that CBP’s current schedule provides adequate visibility into program work activities or that it includes the logical sequencing of all key work activities and the dependencies among them. As we state in our report, CBP had yet to define a detailed schedule for significant portions of the program. Moreover, approximately 65 percent of CBP’s remaining work activities were not linked to other associated work activities; therefore, the program’s critical path could not be determined. As a result of these weaknesses, management is unable to determine how a slip in the completion date of a particular task may affect the overall project schedule. DHS also stated that CBP’s schedule is reviewed biweekly at integrated project team meetings, as well as monthly at the CIO program management reviews, to track status and upcoming milestones. However, given the issues with the schedule described in this report, using the current, incomplete schedule to track progress is not effective.
While DHS concurred with our recommendation that it ensure that data used by the department’s governance and oversight bodies to assess the progress and performance of major IT acquisition programs are complete, timely, and accurate, DHS stated that it has already taken such steps, citing its enterprise Decision Support Tool, the DHS Investment Management System, and the reporting of program cost, schedule, and operational performance information on the Information Technology Dashboard. On this basis, DHS requested that the recommendation be considered resolved and closed. However, while we acknowledge that these tools are in place, we identified instances where DHS governance and oversight bodies were acting on information that was not complete, timely, or accurate, despite the presence of the tools and systems cited by DHS in its response. As we go forward with our follow-up activities for this report, we plan to monitor DHS’s progress in improving the quality of data used in its assessments of major IT acquisition programs. Finally, DHS stated that our draft report did not adequately recognize the progress made by CBP’s TECS Mod program, specifically citing the strength of the program’s risk and requirements management practices and schedule, as well as the fact that the program has already implemented certain functionality. We did report that (certain weaknesses notwithstanding) CBP’s approach to risk and requirements management was generally consistent with leading practices. However, we also found significant deficiencies with CBP’s master schedule for TECS Mod. Further, we noted that the Secondary Inspection project was already operational at air and sea ports of entry across the country and was operational at land ports of entry by September 2013, approximately 6 months earlier than estimated. We also revised the report to reflect that a portion of the modernized High Performance Primary Query service is currently in use, to recognize additional CBP progress. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Homeland Security. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of our review were to (1) determine the scope and status of the two Department of Homeland Security (DHS) TECS Modernization (TECS Mod) programs, (2) assess selected Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) program management practices for TECS Mod, and (3) assess the extent to which DHS is executing effective executive oversight and governance of the two TECS Mod programs.
To address our first objective, we reviewed a range of documentation from both programs, including each program’s functional requirements documents; their respective acquisition program baselines and associated program cost and schedule estimates; program planning documents, such as program management plans and test and evaluation master plans; and the results of oversight reviews of both programs from July 2011 to June 2013. To assess the scope of each program, we determined what functionality each program had committed to provide and analyzed pertinent documentation, such as program management plans, mission needs statements, concept of operations documents, and operational requirements documents (among others), to determine whether those commitments had changed over time. We also compared the schedule and cost commitments listed in the programs’ initial documentation with subsequent baselines to establish the degree to which each program and its component subprojects had experienced changes in their start dates, completion dates, and estimated costs. Further, we corroborated CBP officials’ statements regarding the incompleteness of the program master schedule by reviewing the schedule ourselves. Specifically, we examined the relationships that CBP documented (defined) between work activities within its master schedule for each project and for the program overall. We used spreadsheet formulas to calculate what percentage of work activities were linked to other work activities and what percentage were not. We also interviewed relevant DHS officials to clarify and/or confirm information in the documents we reviewed and to more fully understand each program’s scope and status. To address our second objective, we examined program documentation, such as risk management and requirements management plans and processes, and compared them to relevant guidance from leading practitioners. Risk management: We compared relevant documentation, such as the CBP TECS Modernization Risk Management Plan and the ICE TECS Modernization Risk Management Plan, to relevant risk management guidance to identify any variances. We focused on the extent to which (1) a risk management strategy had been established, (2) roles and responsibilities for risk management activities had been defined and assigned, (3) a risk inventory had been created that includes plans for mitigating risks, and (4) the status of risks and mitigation efforts was regularly tracked. We also reviewed lists of identified risks found in the risk inventories, and minutes from meetings at which risks were identified, monitored, and closed. We compared risks we identified during the course of our work to those in the risk inventories to determine the extent to which all key risks were being actively managed. We also reviewed briefings provided at executive steering committee meetings to ascertain the extent to which program risks were disclosed at these reviews. Further, we discussed actions recently taken and planned to improve risk management activities within both CBP and ICE.
To assess the reliability of the information in the risk inventories we used in this report, we interviewed knowledgeable agency officials about the nature and quality of controls over both the CBP and ICE inventories, and reviewed the information in the inventories to identify missing or invalid data entries. We found that sufficient controls were in place, and we therefore determined that the information was sufficiently reliable for our purposes. Requirements management: We compared relevant requirements management documentation, such as the CBP TECS Mod Requirements Management Plan, the Passenger Systems Program Office's Change Management Process and Procedure, the ICE TECS Modernization Requirements Management Plan, and the ICE Change Control Board Standard Operating Procedure, to relevant requirements development and management guidance to identify any variances. We focused on the extent to which: (1) a process for developing and managing requirements had been established; (2) roles and responsibilities for requirements management practices had been defined and assigned; (3) user needs had been elicited, translated into requirements, and then analyzed to ensure that each requirement was unique, unambiguous, and testable; and (4) a change control process had been defined. We analyzed agency documentation showing the implementation of these activities, including evidence of requirements elicitation, analyses, review, and approval, as well as examples of change request documents. We interviewed program officials regarding the reasons for variances between the guidance and documentation and the status of actions recently taken and planned to improve requirements management activities within both CBP and ICE. To address our third objective, we analyzed documentation, including executive steering committee meeting results; reviewed program assessments from DHS's Office of the Chief Information Officer and DHS's Program Accountability and Risk Management office; and compared the results to relevant guidance, such as our Information Technology Investment Management Framework, to determine the extent to which DHS is providing effective executive oversight and guidance to the two TECS Mod programs. In addition, we compared the outputs of these governance structures (such as briefing slides, meeting minutes, and action items) to the executive steering committee charters, and compared reports and assessments prepared by DHS governance bodies to DHS's guidance for conducting such assessments. We also interviewed relevant officials from CBP, ICE, and DHS, as appropriate. We conducted this performance audit from December 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. TECS is a border enforcement system that supports the sharing of information about people seeking entry into the country. The system interfaces with several law enforcement systems and federal agencies, and supports the screening of people who are inadmissible or may pose a threat to the country, as well as their conveyances.
In addition, it provides an investigative case management function for activities including money-laundering tracking and reporting; telephone data analysis; and intelligence reporting and dissemination. The following table provides a description of key systems and data associated with the passenger screening processes within TECS. CBP plans to deliver the following capabilities incrementally across five projects by September 2015.

Secondary Inspection: This project is to support processing of travelers referred from primary inspection for either enforcement or administrative reasons. The modernized version of Secondary Inspection, according to CBP, is to streamline the processing of encounters by eliminating the need for users to navigate through complex system menus to perform tasks, minimize redundant data entry, and simplify the interface so that all of the information is presented on a single screen. This project is also to provide web-based access to information such as relevant laws, policies, and forensics. In addition, this project is to provide a means to record the outcome of each inspection. According to CBP, Secondary Inspection is currently operational at all air, land, and sea ports of entry.

High Performance Primary Query and Manifest Processing: This project is intended to improve TECS data search results in order to expedite the processing of manifests from individuals traveling to the United States on commercial or private aircraft, and commercial vessels. CBP plans to migrate the mainframe-based lookout records and other data to the modernized infrastructure, and replace the 1980s-era databases and queries with modernized tools for the primary inspection process. It is to be fully operational by March 2015.

Travel Document and Encounter Data: This project is intended to improve CBP's ability to query and validate travel documentation for both passengers and their means of conveyance (whether people enter the country by air, sea, or land, on foot or in a vehicle). It is intended to modernize existing travel document data presented during primary and secondary inspections. It will also provide web-based interfaces intended to allow quick access to a passenger's complete travel history, while also implementing appropriate data access restrictions and privacy protections in compliance with agencies' data policies. It is to be fully operational by March 2015.

Lookout Record Data and Screening Services: This project is intended to improve the efficiency of existing data screening and analysis capabilities by providing a means to quickly create, update, and exchange lookout record data with external agencies, such as the law enforcement community. It is to be fully operational by March 2015.

Primary Inspection Processes: This project is intended to modernize the overall inspection process and provide support for additional or random screening and communication functions. CBP states that this project will upgrade lookout record alarms and alerts sent to air, sea, and land primary and secondary workstations to ensure the safety of inspection officers. In addition, the project will modernize the user interfaces for alternate inspections, that is, any inspection that is not conducted at an air, sea, vehicle, or pedestrian primary inspection location. It is to be fully operational by March 2015.
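To illustrate the kind of screening these projects modernize, the sketch below shows a purely notional lookout-record check: a traveler's document number or name is matched against a set of lookout records, and any hit triggers a referral to secondary inspection. Every name, field, and matching rule here is hypothetical; none of it is drawn from TECS design documentation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LookoutRecord:
        name: str              # stored here as "LAST, FIRST" in upper case
        document_number: str
        reason: str

    def screen_traveler(name, document_number, lookout_records):
        # Return any lookout records matching the traveler's document or name.
        return [
            rec for rec in lookout_records
            if rec.document_number == document_number or rec.name == name.upper()
        ]

    # Hypothetical data and usage:
    records = [LookoutRecord("DOE, JOHN", "P1234567", "prior overstay")]
    hits = screen_traveler("Doe, John", "P1234567", records)
    print("Refer to secondary inspection" if hits else "No lookout match")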
In addition to the contact named above, individuals making contributions to this report included Deborah Davis (Assistant Director), Kara Epperson, Rebecca Eyler, Daniel Gordon, Dave Hinchman (Assistant Director), Sandra Kerr, Jamelyn Payan, and Jessica Waselkow.
DHS's border enforcement system, known as TECS, is the primary system available for determining admissibility of persons to the United States. It is used to prevent terrorism and to provide border security, law enforcement, case management, and intelligence functions for multiple federal, state, and local agencies. It has become increasingly difficult and expensive to maintain because of technology obsolescence and its inability to support new mission requirements. Accordingly, in 2008, DHS began an effort to modernize the system. The modernization is being managed as two separate programs, run in parallel by CBP and ICE. GAO's objectives were to (1) determine the scope and status of the two TECS Mod programs, (2) assess selected CBP and ICE program management practices for TECS Mod, and (3) assess the extent to which DHS is executing effective executive oversight and governance of the two TECS Mod programs. To do so, GAO reviewed requirements documents and cost and schedule estimates, and determined the current scope, completion dates, and life-cycle expenditures. GAO also reviewed risk management and requirements management plans, as well as governance bodies' meeting minutes. Customs and Border Protection (CBP) has defined the scope for its TECS (not an acronym) modernization (TECS Mod) program, but its schedule and cost continue to change, while Immigration and Customs Enforcement (ICE) is overhauling the scope, schedule, and cost of its program after discovering that its initial solution is not technically viable. CBP's $724 million program intends to modernize the functionality, data, and aging infrastructure of legacy TECS and move it to DHS's data centers. CBP plans to develop, deploy, and implement these capabilities between 2008 and 2015. To date, CBP has deployed functionality to improve its secondary inspection processes to air and sea ports of entry and, more recently, to land ports of entry in 2013. However, CBP is in the process of revising its schedule baseline for the second time in under a year. Further, portions of CBP's schedule remain undefined, and the program does not have a fully developed master schedule. These factors increase the risk of CBP not delivering TECS Mod by its 2015 deadline. Regarding ICE's $818 million TECS Mod program, ICE is redesigning and replanning its program, having determined in June 2013 that its initial solution was not viable and could not support ICE's needs. As a result, ICE halted development and is now assessing design alternatives and will revise its schedule and cost estimates. Program officials stated the revisions will be complete in December 2013. Until ICE completes the replanning effort, it is unclear what functionality it will deliver, when it will deliver it, or what it will cost to do so, thus putting ICE in jeopardy of not completing the modernization by its 2015 deadline. CBP and ICE have managed many risks in accordance with some leading practices, but they have had mixed results in managing requirements for their programs. In particular, neither program identified all known risks or escalated them for timely management review. Further, CBP's guidance defines key practices associated with effectively managing requirements, but important requirements development activities were underway before these practices were established. ICE, meanwhile, operated without requirements management guidance for years, and its requirements activities were mismanaged as a result.
For example, ICE did not complete work on 2,600 requirements in its initial release, which caused testing failures and the deferral and deletion of about 70 percent of its original requirements. ICE issued requirements guidance in March 2013 that is consistent with leading practices, but it has not yet been implemented. The Department of Homeland Security's (DHS) governance bodies have taken actions to oversee the two TECS Mod programs that are generally aligned with leading practices. Specifically, DHS's governance bodies have monitored TECS Mod performance and progress and have ensured that corrective actions have been identified and tracked. However, the governance bodies' oversight has been based on sometimes incomplete or inaccurate data, and therefore the effectiveness of these efforts is limited. For example, one oversight body rated CBP's program as moderately low risk, based partially on the program's use of earned value management, even though program officials stated that neither they nor their contractor had this capability. Until these governance bodies base their performance reviews on timely, complete, and accurate data, they will be constrained in their ability to provide effective oversight. GAO is recommending that DHS improve its efforts to manage requirements and risks, as well as its governance of the TECS Mod programs. DHS agreed with all but one of GAO's eight recommendations and described actions planned and underway to address them.
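The earned value management capability mentioned above reduces, at its core, to a pair of standard ratios comparing the budgeted value of completed work (earned value, EV) with actual cost (AC) and planned value (PV). The sketch below uses hypothetical dollar figures purely to illustrate why a program without this capability cannot credibly report cost and schedule performance.

    def evm_ratios(ev, ac, pv):
        # Standard earned value management indices.
        cpi = ev / ac  # cost performance index: < 1 means over cost
        spi = ev / pv  # schedule performance index: < 1 means behind schedule
        return cpi, spi

    # Hypothetical figures, for illustration only:
    cpi, spi = evm_ratios(ev=4_000_000, ac=5_000_000, pv=4_500_000)
    print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # CPI = 0.80, SPI = 0.89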
DOD submitted the first version of its long-term corrosion strategy to Congress in December 2003. DOD developed this long-term strategy in response to direction in the Bob Stump National Defense Authorization Act for Fiscal Year 2003. In November 2004, DOD revised its long-term corrosion strategy and issued its DOD Corrosion Prevention and Mitigation Strategic Plan. DOD updates its strategic plan periodically, most recently in February 2011, and officials stated that the next update is planned for 2013. The purpose of DOD's strategic plan is to articulate policies, strategies, objectives, and plans that will ensure an effective, standardized, affordable DOD-wide approach to prevent, detect, and treat corrosion and its effects on military equipment and infrastructure. In January 2008, the department first issued DOD Instruction 5000.67, Prevention and Mitigation of Corrosion on DOD Military Equipment and Infrastructure, which was revised and reissued with the same title in February 2010. The stated purpose of the instruction is to establish policy, assign responsibilities, and provide guidance for the establishment and management of programs to prevent or mitigate corrosion of DOD's military equipment and infrastructure. This instruction assigns the military departments' Corrosion Executives responsibility for certain corrosion-prevention and control activities in their respective military departments. It requires the Corrosion Executives to submit information on proposed corrosion projects to the Corrosion Office with coordination through the proper military department chain of command, as well as to develop, support, and provide the rationale for resources to initiate and sustain effective corrosion-prevention and mitigation programs in each military department. According to statute and DOD guidance, the Director of the Corrosion Office is responsible for the prevention and mitigation of corrosion of DOD equipment and infrastructure. The Director's duties include developing and recommending policy guidance on corrosion control, reviewing the corrosion-control programs and funding levels proposed by the Secretary of each military department during DOD's annual internal budget review process, and submitting recommendations to the Secretary of Defense regarding those programs and proposed funding levels. To accomplish its oversight and coordination responsibilities, the Corrosion Office has ongoing efforts to improve the awareness, prevention, and mitigation of corrosion of military equipment and infrastructure, including (1) hosting triannual corrosion forums; (2) conducting cost-of-corrosion studies; (3) operating two corrosion websites; (4) publishing an electronic newsletter; (5) working with industry and academia to develop training courses and new corrosion technologies; and (6) providing funding for corrosion-control demonstration projects proposed and implemented by the military departments. According to the Corrosion Office, these corrosion activities enhance and institutionalize the corrosion-prevention and mitigation program within DOD. In addition, the Director of the Corrosion Office periodically holds meetings with the DOD Corrosion Board of Directors and serves as the lead on the Corrosion Prevention and Control Integrated Product Team. The Corrosion Prevention and Control Integrated Product Team includes representatives from the military departments, the Joint Staff, and other stakeholders who help accomplish the various corrosion-control goals and objectives.
This team also includes the seven Working Integrated Product Teams, which implement corrosion prevention and control activities. These seven product teams are organized to address the following areas: corrosion policy, processes, procedures, and oversight; metrics, impact, and sustainment; specifications, standards, and qualification process; training and certification; communications and outreach; science and technology; and facilities. Appendix A of DOD's strategic plan contains action plans for each product team, including policies, objectives, strategies, planned actions, and results to date. The Corrosion Office began funding military-equipment and infrastructure corrosion-prevention projects in fiscal year 2005. Projects, including equipment-related projects, are specific corrosion-prevention and mitigation efforts, funded jointly by the Corrosion Office and the military departments, with the objective of developing and testing new technologies. To propose a project for Corrosion Office funding, the military departments first refer to requirements in DOD's strategic plan. The requirements include initial submission of a project plan and, if approved, future submissions of final and follow-on reports. The military departments' proposals are evaluated by a panel of experts assembled by the Director of the Corrosion Office. The Corrosion Office generally funds up to $500,000 per project, and the military departments generally pledge matching or complementary funding for each project that they propose. The level of funding by each military department and the estimated return on investment (ROI) are two of the criteria used to evaluate the proposed projects. For the project-selection process, the military departments submit preliminary project proposals in the fall and final project proposals in the spring, and the Corrosion Office considers the final proposals for funding. Projects that meet the Corrosion Office's criteria for funding are announced at the end of each fiscal year. Figure 1 provides an overview of DOD's process for corrosion projects and notes which reports are required in each period. Specifically, project plans include several elements to be considered for funding by the Corrosion Office, according to DOD's strategic plan. The project plans include a statement of need, a proposed solution, assumptions used to estimate the initial ROI, and a cost-benefit analysis of the project's initial estimate of ROI. DOD's strategic plan describes estimation steps for the cost-benefit analysis: (1) calculating the project costs, such as up-front investment costs and operating and support costs; (2) calculating the benefits that are expected to result from the project, such as reductions in costs like maintenance hours and inventory costs; and (3) calculating the net present value of the annual costs and benefits over the projected service life of the proposed technology (a simplified worked example follows this discussion). According to Corrosion Office officials, once a project is approved and funded, project managers are typically responsible for overseeing the project and completing the reporting requirements. First, the project manager begins the research and development phase, also known as the demonstration phase. During this phase, project managers and project personnel test new technology in both military laboratory and real-world settings. Typically, the demonstration phase takes 1 to 2 years, and the Corrosion Office requires submission of a final report upon completion of the demonstration.
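The three estimation steps above amount to a standard discounted cost-benefit calculation. The worked example below is illustrative only: the formula is the generic NPV-based ROI ratio, and every figure (up-front investment, annual savings, discount rate, service life) is an assumption, not drawn from any DOD project plan.

    \[
      \mathrm{ROI} \;=\; \frac{\text{NPV of benefits}}{\text{NPV of costs}}
      \;=\; \frac{\sum_{t=1}^{T} B_t/(1+r)^t}{C_0 + \sum_{t=1}^{T} C_t/(1+r)^t}
    \]
    \[
      \mathrm{ROI} \;=\; \frac{300{,}000 \times \bigl(1-(1.03)^{-5}\bigr)/0.03}{500{,}000}
      \;\approx\; \frac{1{,}374{,}000}{500{,}000} \;\approx\; 2.75
    \]

Under these assumed figures (a $500,000 up-front investment, no recurring costs, $300,000 in annual maintenance savings over a 5-year service life, and a 3 percent discount rate), the project would return roughly $2.75 in discounted benefits per dollar invested, which is the form in which the project plans discussed later express their estimated ROIs.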
In the final report, project managers document test conditions, performance of the new technology, lessons learned, and their recommendations on whether the new technology should be transitioned to a military service's use. Finally, project managers submit a follow-on report, which is a checklist, to evaluate a project within 2 years after the project is completed and the technology has transitioned to use within the military department. The purpose of the follow-on report is to inform the Corrosion Office of the overall outcome of the project and to reassess the ROI. The strategic plan provides detailed instructions on how to reassess the ROI. For example, the ROI reassessments consist of reviewing assumptions used earlier in computing the estimated ROI; updating the costs and benefits associated with the new technology resulting from the project; recalculating the ROI based on reassessed data; and providing an assessment of the difference, if any, between the estimated ROI and the reassessed ROI. Figure 2 provides a breakout of the number of projects that have reached various reporting milestones, as of May 2013. Of the 128 equipment-related corrosion projects funded from fiscal years 2005 through 2012, 88 projects had reached the milestone for submitting final reports; 41 of those projects had also reached the milestone for submitting follow-on reports, including ROI reassessments; and 40 projects were not yet complete, and thus had not reached the milestone for submitting final or follow-on reports. In December 2010, we analyzed the extent to which the military departments have reassessed the ROI for funded corrosion-prevention projects. We found that the military departments did not complete required validations of ROI estimates and were unable to fully demonstrate the costs and benefits of their corrosion-prevention and control projects. We recommended, in part, that DOD fund and complete ROI validations. DOD concurred and noted that plans were already underway to address this requirement within the Corrosion Office and with the Corrosion Executives. Also, in September 2012, we reported that the Corrosion Office performs an analysis to determine the average ROI estimates for projects that it cites in its annual corrosion-control budget report to Congress. Additionally, we reported that the Corrosion Office did not use the most up-to-date data for the projects' ROIs or provide support for the projects' average ROI cited in its fiscal year 2013 corrosion-control budget report to Congress. We recommended that DOD provide an explanation of its ROI methodology and analysis, including the initial and, to the extent available, the reassessed ROI estimates. However, DOD did not agree with our recommendation. In its written comments, DOD generally restated the methodology included in DOD's strategic plan, which the military departments use to estimate the projected ROI of each project. DOD did not provide any additional reasons why it did not use current ROI estimates in its report to Congress. We reported in April 2013 that DOD had made some progress in completing the ROI validations but needed to continue to follow through on completing the validations to fully demonstrate the costs and benefits of the corrosion projects. In May 2013, we reported that the Corrosion Office had not ensured that all reports on the results of its infrastructure-related corrosion projects were submitted.
We recommended four actions to improve DOD’s project reporting and tracking, and the accuracy of its ROI data. However, DOD partially agreed with our recommendation to take steps to enhance the tracking and reporting of its infrastructure-related corrosion projects. In written comments, DOD stated it is developing a web-based tracking tool for the Corrosion Office, Corrosion Executives, and project managers to input and extract project-related data. In regard to the recommendation that DOD take action to ensure that its records reflect complete, timely, and accurate data on the projects’ ROI, DOD partially agreed with the recommendation and stated the web-based system would provide data including ROI estimates. While DOD cited the web-based system to address our recommendations, DOD did not state when the new system would be available for use. Further, DOD did not agree with our recommendation that the Corrosion Office use its existing authority to identify and implement possible options or incentives for addressing reasons cited by project-management offices for not meeting reporting milestones. In written comments, DOD did not state what actions it would take to improve submission of completed reports from the military services that DOD’s strategic plan requires for infrastructure-related corrosion projects. Also, DOD did not agree with our recommendation to revise guidance to clearly define the role of Corrosion Executives to assist the Corrosion Office in holding departments’ project-management offices accountable for submitting reports in accordance with DOD’s strategic plan. DOD stated that further guidance is not necessary as the requirements are clearly stated in the strategic plan. All the related GAO products are listed at the end of this report. DOD’s Corrosion Office has collected a majority of required final and follow-on reports from project managers on the results of equipment- related corrosion projects and is taking steps to obtain outstanding reports. As of May 2013, our review found that the military services submitted the majority of the required reports. Project managers had submitted the required final reports for 55 of the 88 projects (about 63 percent) funded from fiscal years 2005 through 2010. Also, for 27 of the 41 projects (about 66 percent) that were funded from 2005 through 2007, we found that the project managers had submitted the required follow-on reports on whether the corrosion-control technologies were effective and the overall effect of the projects. Military departments’ Corrosion Executives and project managers described various reasons for not meeting milestones for all reports, such as personnel turnover, funding, and demonstration phases lasting longer than anticipated. To improve the collection of reports, DOD is taking steps to obtain outstanding reports. DOD has invested more than $63 million in 88 equipment-related corrosion projects funded from fiscal years 2005 through 2010. Project managers submitted a majority, but not all, of the required reports on whether the corrosion-control technologies were effective and the overall effect of the projects. The DOD Corrosion Prevention and Mitigation Strategic Plan states that project plans should include a schedule milestone for reporting, including final reports and follow-on reports. The DOD strategic plan requires a final report at project completion, and requires a follow-on report 2 years after project completion and transition to use within the military departments. 
According to Corrosion Office officials, these reports provide valuable information on the results of corrosion projects and in planning future projects. Corrosion Office officials stated that project managers must submit final reports at project completion, which is typically within 2 years after receipt of funding for each project. As stipulated in DOD's strategic plan, final reports should include certain content, such as an executive summary, lessons learned, recommendations, and conclusions. We found that 55 of the 88 required final reports (63 percent) for projects funded in fiscal years 2005 through 2010 had been submitted. There was variation, by military service, in the number of submitted final reports. For example, the Marine Corps had not submitted three-quarters of its final reports. The Air Force, in contrast, had submitted all but one final report. Table 1 shows the status of final reports submitted by each service for equipment-related projects. We found that project managers submitted 27 of the 41 required follow-on reports (66 percent). The military services varied in the number of outstanding follow-on reports. For example, the Navy had not submitted half of its follow-on reports. In contrast, the Army, Marine Corps, and Air Force had only one outstanding follow-on report. DOD's strategic plan requires the submission of follow-on reports within 2 years after a project is completed and transitioned to use in the military department. According to Corrosion Office officials, this transition period includes up to 1 year to implement the technology in a military department. Corrosion Office officials also told us that they expected the follow-on reports to be submitted within 5 years of initial funding. Therefore, follow-on reports for the 41 completed projects funded in fiscal years 2005 through 2007 were due on or before the end of fiscal year 2012. DOD's strategic plan states that the follow-on reports should include an assessment of the following areas: project documentation, project assumptions, responses to mission requirements, performance expectations, and a comparison between the initial ROI estimate included in the project plan and the new estimate. Table 2 shows the status of follow-on reports submitted by each service. According to officials in the Corrosion Office, final and follow-on reports are used to assess the effectiveness of the corrosion projects and determine whether continued implementation of the technology is useful. Corrosion Office officials stated that, as they review project managers' final reports, they focus on any lessons learned, technical findings, conclusions and recommendations, and whether the results from the report should trigger follow-on investigations of specific technology and a review for broader applications of the technology. Officials stated that they review follow-on reports to ensure that necessary implementation actions have been taken and to review changes in the ROI estimates. Corrosion Office officials stated that they are taking steps to obtain the completion and submission of all outstanding reports. For example, according to the Corrosion Office, its officials regularly send the military departments' Corrosion Executives a report listing final and follow-on reports that have not yet been completed and submitted, and request that the Corrosion Executives follow up with project managers to complete the reports.
According to Corrosion Executives, they coordinate through their departments and, if the reports have not yet been completed, they obtain an explanation and expected completion date and provide the information to the Corrosion Office. Finally, according to Corrosion Executives, they communicate any delays to the Corrosion Office verbally and by e-mail to ensure the Corrosion Office is aware if a demonstration period takes longer than originally anticipated or if a project has been delayed due to unexpected laboratory or field testing issues. Corrosion officials in the military departments described various reasons why project managers did not complete and submit mandatory final and follow-on reports within expected time frames, including personnel turnover, funding delays, and demonstration phases lasting longer than anticipated, all of which delayed the completion and submission of the reports. For example, Air Force and Marine Corps corrosion officials stated that most teams retain key personnel throughout each project, but at times turnover results in teams delaying completion of their reports. Additionally, Army corrosion officials stated that while their project was approved by the Corrosion Office to start its demonstration at the beginning of the fiscal year, the demonstration started much later than expected because funding from the Corrosion Office for the project was delayed due to the use of continuing resolutions to fund government operations. Finally, the Navy's Corrosion Executive stated that some demonstrations last at least 3 years because the new technology or method is tested on at least two carrier deployments, and each deployment cycle can last 18 months. DOD requires the military departments to collect and report to the Corrosion Office key information from corrosion projects about new technologies and methods to prevent and mitigate corrosion in military equipment; however, DOD does not have complete information about the benefits of all of its projects and is sometimes unable to determine whether projects achieve their estimated ROI. Specifically, the military departments are collecting and reporting some measures of achievement of the projects, including results, but do not always report details in follow-on reports about features and benefits of completed projects, such as when outcomes prompted changes to specifications, standards, and various reference and guidance documents. Further, the military departments are not collecting required information on the assumptions used to compute the estimated ROI in the project plan, and are unable to determine whether the projects are achieving the estimated ROI. The military departments have collected and reported measures of achievement of completed corrosion projects other than ROI, such as when outcomes prompt changes in specifications, standards, technical manuals, and other reference or guidance documents. However, the departments' follow-on reports do not always include details of the achievements, including specific benefits. DOD Instruction 5000.67 requires the military departments' Corrosion Executives to develop procedures for corrosion planning, process implementation, management, review, and documentation of results. Additionally, the DOD Corrosion Prevention and Mitigation Strategic Plan requires the submission of a checklist, which the department refers to as a follow-on report, to note specific information about the corrosion project.
The follow-on report, which consists of a checklist, shows items to be reviewed on the status and the results of corrosion projects that have completed research and development, transitioned to service use, and been in use for 2 years. Project managers have the option to include comments on details about items on the checklist. Appendix II shows a copy of the checklist used for project review. According to the strategic plan, the checklist is to focus not only on reassessing the ROI, but also on examining and assessing other benefits of the project. Project managers are required to review documentation, such as specifications, technical manuals, and other guidance; implementation, maintenance, and other sustainability costs; and actual or intended application of the technology by others. Then, project managers are to check "yes" or "no" for each item, but are not required to write details about any benefits of the project. DOD's strategic plan allows the project managers the option to provide detailed comments in the follow-on report, but does not provide specific guidance requiring them to document benefits. Finally, according to Standards for Internal Control in the Federal Government, control activities, including appropriate documentation of transactions that is clearly recorded and readily available for examination, are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and for achieving effective results. During our review of all available follow-on reports, we found that nearly three-quarters (22 of the 30 follow-on reports) contained information on some measures of achievement, such as whether new technology or methods were incorporated in maintenance manuals, technical orders, or engineering change proposals. The project managers for these reports modified the follow-on report to include additional details that clearly acknowledge the benefits of the project, such as incorporation into specifications, technical manuals, and other guidance. For example, a joint Army and Navy project in our sample examined aircraft corrosion prevention and control by testing gaskets to prevent corrosion of antenna wiring. The project resulted in the Army communicating the benefit of the antenna gasket by authorizing its use, giving it a part number, and revising a technical manual. Also, the Navy assigned the gasket a part number, authorized its use, and revised a maintenance manual. However, we also found that one-quarter (8 of 30) of the follow-on reports contained little to no narrative detail and did not document the benefits of the project. For example, an Army project's follow-on report contained no information about achievements, and a Navy project's follow-on report provided few details about the project's outcomes that could reduce cost and reinforce mission readiness. Without specific guidance to require that follow-on reports include details of measures of achievement other than ROI, including benefits, the Corrosion Office will miss the opportunity to know whether equipment-related corrosion projects have achieved outcomes to prevent or mitigate corrosion. The military departments' project plans include an initial estimated ROI for each equipment-related corrosion project that is based on specific assumptions, but the departments' project managers and project personnel have not collected data to determine whether each project achieved its estimated ROI.
DOD’s strategic plan provides guidance on estimating the ROI, collecting information to verify the ROI, and achieving the ROI. First, the strategic plan states that project plans include assumptions that are used to initially estimate the ROI, and provides a list of assumptions that includes: replacement costs and intervals; maintenance costs, including unscheduled maintenance and repair cost; labor and other operating costs; and readiness savings. Second, the strategic plan provides guidance on collecting information on the estimated ROI for corrosion projects that have completed research and development and transitioned to service use (i.e., whether a service implemented the demonstrated technology or method). Specifically, project managers are required to collect information to check on any changes to the assumptions used in the initial estimated ROI in order to compare, or recompute, the ROI and determine if the ROI is higher than, lower than, or as originally estimated. Finally, the strategic plan identifies a strategy to justify funding for corrosion projects by verifying the initial investment of corrosion projects and cites a long-term objective to achieve ROI for equipment-related corrosion projects, thus providing a metric to assess progress. During our review, we found that all project plans in our sample included required assumptions as well as plans and methods to collect information on those assumptions. Our sample included the following examples in which the project managers and project personnel estimated the ROI in the project plan based on certain assumptions and indicated they would collect information when the technology or method was transitioned to service use. Army—Officials projected an ROI (i.e., benefit) of $46.75 for every dollar invested in this project to prevent corrosion. The project, funded in fiscal year 2008, tested a commercially available dehumidification technology to protect the radar system on Patriot missile systems, whose internal components generate extreme amounts of heat. According to the project plan, the ROI was based on assumptions including reduced labor and material maintenance costs. The project plan stated that staff would collect ROI-related data by tracking the rate of corrosion, including visual inspections of units with and without the technology and by an examination of maintenance logs. However, according to project personnel, they reassessed only some of the original assumptions—such as the annual cost of corrosion maintenance costs for the Patriot radar system—and did not track or collect data to verify the assumptions used for the estimated ROI in the project plan. Thus, they will be unable to compare or recompute the ROI as required by the strategic plan. Marine Corps—Officials projected an ROI of $189.74 for every dollar invested in this project to prevent corrosion. The project, funded in fiscal year 2010, tested supplemental coatings to protect tactical and armored ground weapon systems against corrosion. According to the project plan, the ROI was based on assumptions including testing on the Mine Resistant Ambush Protected (MRAP) vehicle system, a 50 percent reduction in annual maintenance costs, and a 15-year service life. The project plan stated that data would be collected by annually monitoring weapon systems with and without these particular coatings to verify the estimated ROI in the project plan. However, according to project personnel, they plan only to provide an update of the original assumptions. 
Thus, they will be unable to compare or recompute the ROI as required by the strategic plan. Air Force and Navy: Officials of the joint project projected an ROI of $61.32 for every dollar invested in this project to prevent corrosion. The project, funded in fiscal year 2005, tested the use of aerosol paint cans to address potential corrosion of aircraft coatings and meet the requirements of rapid cure and rapid application in austere environments when spray-application equipment is not available. According to the project plan, the ROI was based on assumptions including the estimated cost of paint and repair, the expectation of saving approximately 5 percent of the paint cost through reductions in material preparation and clean-up, and decreased manpower requirements associated with applying paints and repairing corrosion. According to the project manager, he could collect certain data, such as how many cans were ordered through the supply system, but could not determine if personnel purchased aerosol paint cans from other sources to estimate savings. Thus, they will be unable to compare or recompute the ROI as required by the strategic plan. Additionally, project managers and project personnel in our sample stated that they have not collected information on the assumptions used in the initial estimated ROI to compare or recompute the ROI, such as information on the quantity of military equipment that has transitioned to service use. Rather, the Corrosion Executives and the majority of project managers and project personnel whom we interviewed stated that their procedure has been to reassess only the accuracy of the assumptions of the estimated ROIs. Further, Corrosion Executives as well as project managers and project personnel for 40 of the 43 projects in our sample (93 percent) stated that they have not collected information to verify the initial investment and determine if a project is achieving the estimated ROI stated in each project plan because of the difficulties in doing so. For example, some project managers and project personnel explained that they rely on repair personnel to collect and record data on the performance of a new technology or method, which would provide data to verify the initial investment in corrosion projects. However, according to Corrosion Executives, project managers, and project personnel, the repair personnel do not have a consistent way to collect and record the data. Also, some project managers and project personnel stated it is difficult to monitor the progress of a new corrosion-related technology or method because the maintenance and repair community does not always note in maintenance records the reason for a repair or replacement. For example, officials on an Air Force project noted that when electronic circuit cards failed, repair personnel removed them from the aircraft and inserted new ones, but did not take the time to determine why the cards had failed (such as whether sand and salt had corroded them). In some cases, the new technology or method affects more than one military service; effective recordkeeping would then involve the other services tracking, collecting, and reporting back information on their use of the new technology or method, but we found that such recordkeeping is not done consistently. Further, some equipment-related projects are driven by environmental concerns, such as those aimed at finding an alternative chemical to use to prevent corrosion.
According to officials, the effects associated with these concerns, such as a technology's contribution to reducing pollution, are difficult to measure. Consequently, the military departments and DOD management have been unable to determine whether the projects are achieving their estimated financial benefits. Officials from the Corrosion Office acknowledged that project managers have not followed DOD's strategic plan regarding collecting information to verify whether projects are achieving the benefits initially estimated in project plans because of challenges in collecting and monitoring relevant data. Corrosion Office officials stated that their original intent was for the project managers to monitor the assumptions and collect updated information, but the officials now recognize that project managers did not always collect all the needed data. On the basis of the identified challenges, Corrosion Office officials stated that they plan to revise the strategic plan to eliminate the guidance on validating the ROI and to provide revised guidance on how the project managers should reassess the ROI. They stated that the revision is planned for late 2013. DOD has taken steps to improve oversight of its equipment-related corrosion projects, such as revising its DOD Corrosion Prevention and Mitigation Strategic Plan to provide additional guidance on reporting requirements. However, DOD does not have a comprehensive overview of the status of all equipment-related corrosion projects. While the reports provide the status of each project, we found that the Corrosion Office does not consolidate information to monitor the status of all these projects, such as whether a project has not transitioned to service use or has been discontinued. Further, we found that project managers varied in how they reported the ROI for discontinued projects. DOD's Corrosion Office has taken steps to develop and revise policies and guidance to help improve the management and oversight of equipment-related corrosion projects. For example, the Corrosion Office developed, and has subsequently revised, the DOD Corrosion Prevention and Mitigation Strategic Plan. Also, officials from the Corrosion Office stated that they have updated reporting requirements to include quarterly status reports on the technical, programmatic, and financial status of the projects. Further, Corrosion Office officials explained that the establishment of the military department Corrosion Executives has helped improve their management of corrosion programs. Starting in 2009, each military department designated a Corrosion Executive to be the senior official in the department with responsibility for coordinating corrosion-prevention and control program activities. For example, the Corrosion Executives and the Corrosion Office conduct an annual review of equipment-related corrosion projects to review project status and transition as well as deadlines for final reports, follow-on reports, and ROI reassessments. According to the Corrosion Office, each Corrosion Executive coordinates through the respective military department's chain of command to provide information on corrosion projects to the Director of the Corrosion Office. Further, quarterly status reports are required starting the first week of the fiscal quarter after the contract award and every 3 months thereafter until the final report is submitted, and officials from the Corrosion Office also conduct an annual review of each project.
Finally, the military departments have developed and implemented service-specific strategic plans for corrosion prevention. The Corrosion Office collects reported details of individual corrosion projects, including some status information, but does not consolidate the information for an overview of the status of all its projects, which is a key part of its oversight role. Project managers submit many project details in their reports to the Corrosion Office, such as whether a project has been recommended for transition to service use and the status of the transition; whether a project has been recommended for transition to service use but did not transition; and whether a project has not been recommended for transition or has been discontinued. Corrosion Office officials stated that they maintain some consolidated data in a spreadsheet, such as each project's identification number, fiscal year, funded amount, and ROI. However, the office has not consolidated all key information about the projects so that officials can regularly monitor their status and plan the implementation of new corrosion-prevention technology or methods in the military departments' operations. Instead, most key information on status is found in individual final and follow-on reports. DOD Instruction 5000.67 requires that the Corrosion Office develop an overarching, long-term corrosion-prevention and mitigation strategy. The instruction also requires that the Corrosion Office implement programs to ensure that military departments throughout DOD take a focused and coordinated approach to collect, review, reassess, and distribute information on proven methods and products that are relevant to prevent corrosion of military equipment. Also, the instruction requires Corrosion Executives to develop procedures for corrosion planning and implementation, and to review, manage, and document results. During our review of the 43 projects in our sample, we found that 14 (approximately 33 percent) of the projects performed well, and each one's technology or method was implemented for use by a military department; 7 (approximately 16 percent) of the projects performed well and were recommended for use by a military department, but the technology or method was not being used. We found varying reasons for military departments not using a proven technology or method, such as the need for additional field testing. Finally, we found that 4 (approximately 9 percent) of the projects did not perform as expected during the demonstration phase and were discontinued. The remaining 18 projects in our sample (approximately 42 percent) were still in the demonstration phase. However, Corrosion Office officials stated that they could not readily provide information on the status of the projects' implementation, including whether projects were demonstrated successfully; were recommended for a military department's use but are not yet in use; or had been discontinued. To provide an overview of the detailed status of all projects, the officials stated that they would have to review each final report and compile a list because the Corrosion Office does not use a tool or method to consolidate such information when the office receives each project's report.
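A consolidation tool of the kind the office lacks need not be elaborate. The sketch below rolls per-project status records up into counts by status; the record fields and the status vocabulary are hypothetical, meant only to show how little machinery an overview of all projects would require.

    from collections import Counter

    # Hypothetical per-project records, as they might be extracted from
    # final and follow-on reports:
    projects = [
        {"id": "FY08-ARMY-01", "status": "implemented"},
        {"id": "FY10-USMC-03", "status": "recommended, not in use"},
        {"id": "FY05-AF-02", "status": "discontinued"},
        {"id": "FY12-NAVY-07", "status": "in demonstration"},
    ]

    def status_rollup(projects):
        # Count projects in each status so oversight staff can see at a
        # glance how many demonstrations transitioned to service use.
        return Counter(p["status"] for p in projects)

    for status, count in sorted(status_rollup(projects).items()):
        print(f"{status}: {count}")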
Without a mechanism to consolidate projects' status to facilitate monitoring of whether the projects' demonstrated technology or methods are being used by the military departments, the Corrosion Office and the Corrosion Executives may not have timely information to know whether the technology demonstrations produced proven methods and products to prevent the corrosion of military equipment. During our review, we found that project managers varied in how they reported discontinued projects and how they reported reassessed ROIs for projects that had technology or methods recommended for a military department's use but not in use. According to the DOD strategic plan, a final report is required at project completion and is to include certain content, such as recommendations on whether to transition the technology or method to use in the military department. The plan also requires the submission of a follow-on report, which is to include a reassessed ROI, within 2 years after a project is completed and transitioned to use in the military department. In reviewing project reports, we found seven instances of projects that had technology or methods recommended for a military department's use but not being used; however, the Corrosion Office provided documentation that the ROIs were reassessed for three of the projects. In one example, a follow-on report showed that one project's results were awaiting validated data on benefits, so the technology had not yet been implemented by a military department. By contrast, Corrosion Office records showed that the project's ROI was verified as a benefit of $141.30 for every dollar invested in the project, which suggested that the office considered the project to be implemented. Additionally, we identified four projects that did not perform as expected during the demonstration phase and were discontinued. According to Corrosion Office officials, project managers still needed to submit follow-on reports for these projects, including verifying the estimated ROI. In interviews with project personnel, we found differences in how the military departments reported reassessed ROIs for discontinued projects in the follow-on reports. For example, the Air Force reported the reassessed ROI for discontinued projects as zero, while the Army reported the reassessed ROI for discontinued projects to be the same as in the initial project plan. Army officials stated that they believed they were following DOD guidance in how they report ROI for discontinued projects. However, we found no guidance in DOD's strategic plan about how to report the ROI when a project is discontinued, and Corrosion Office officials confirmed that they have not provided such guidance. Corrosion Office officials were unaware of specific discontinued projects and were unable to readily provide us a list of these projects. Without guidance specifying how project managers should report the ROI for discontinued projects, the Corrosion Office may receive varying reports about ROIs and have an incomplete picture of the success of projects. The military departments have identified lessons learned from their equipment-related corrosion projects and shared some lessons with corrosion-related personnel; however, DOD has no centralized and secure database or other source to share lessons from all project final and follow-on reports, including those with sensitive information. The military departments have incorporated some lessons from proven technologies or methods into maintenance guidance and repair procedures for military equipment.
DOD uses both formal and informal methods to share lessons learned from corrosion projects, and is in the early stages of developing a single database that can share the lessons from final and follow-on reports and do so in a secure system that can archive sensitive information about projects. The military departments have identified lessons learned from their projects to prevent or mitigate corrosion of military equipment. These lessons are described in the projects' final reports. Further, military departments have used the lessons learned to change maintenance guidance and repair procedures in some cases. The military departments have followed guidance in DOD's strategic plan to include lessons learned in the final report for each corrosion project. We found that project managers and project personnel were identifying lessons learned in the demonstration phase in lab books, journals, and final reports. Also, during our review of all submitted final reports, we found that every final report included lessons learned. Additionally, project managers and project personnel stated that they identify lessons learned through examination of testing conditions, observation and analysis of successful and unsuccessful trials, and examination of problems. For example, one Navy project was successful in the lab, but the project manager found that personnel in the field were not completing all the steps necessary to make a particular protective paint coating effective in preventing corrosion. The project was discontinued because the corrosion method would not be successful in the field. Further, project managers told us that they collect these lessons learned throughout project demonstration, by recording analysis in lab books, which become part of the laboratory record, as well as by collecting data in the field. For example, the project manager and project personnel who examined corrosion of electronic circuit cards due to sand and salt stated that they collected lessons learned while the weapon system was deployed. Also, they collected lessons learned as the weapon systems were returned to their home station, and found that humidity at the home station greatly increased corrosion, an unexpected result that was documented in the final report. The military departments have incorporated lessons learned in guidance or other information that will allow them to use the proven methods and products. All project plans in our sample included plans or methods to transition projects to military department use and incorporate what they learned to change maintenance and repair procedures or allow the use of new technology. During our review of final and follow-on reports, we found that lessons learned from equipment-related corrosion projects were incorporated primarily through military performance specifications, proposals for engineering changes, the services' technical orders, or DOD-wide military specifications. For example, a Navy project developed cost-effective, corrosion-resistant boxes to protect electrical equipment, indicator lights, and connectors used on Navy ships. The Navy issued a message, established stock numbers, made drawings, and changed specifications to replace the boxes. An Army project tested a protective covering for cable connectors on the Patriot missile system. As a result of the demonstration, the Army Aviation and Missile Command's corrosion officials recommended that the covers be part of repair kits and installed during scheduled depot overhauls.
Project personnel are working to develop an Engineering Change Proposal to incorporate kits for these protective coverings into repair procedures at an Army depot and plan to coordinate the assignment of National Stock Numbers for the kits when the Engineering Change Proposal is approved. An Air Force project tested and evaluated several rapid-cure roller/brush and aerosol-applied coating systems for airplanes. The final report recommended the aerosol system for implementation. As a result of the project, the Air Force modified a technical order to authorize the use of aerosol cans to apply protective coatings to an aircraft. Finally, changes to DOD-wide military specifications are another way to incorporate lessons learned. For example, the Air Force created a DOD standard to be used by industry and DOD for screening new material technologies. Similarly, the Marine Corps created a DOD standard to provide protective coatings for tactical and armored vehicles. In addition, some lessons learned were incorporated into planning for future projects. For example, a Marine Corps project was examining improved methods to remove specialty coatings on vehicles in a corrosion repair facility or depot. The process can take 32 hours to complete, during which time the vehicle is unavailable for other repair activities. According to the project manager, the project benefited from lessons learned during a prior Marine Corps project examining coating repairs. In another example, the Air Force established a requirement for outdoor testing of protective coatings for aircraft after several project managers found that some protective coatings yielded contradictory results in the laboratory as compared with outdoor exposure. DOD has several methods for informally or formally sharing some lessons learned from corrosion projects. Most lessons learned are shared informally through conferences, working groups, and personal contacts, according to Corrosion Executives, project managers, and project personnel. While DOD has taken steps toward a structured, formal process to share information, such as by establishing a DOD corrosion website and archiving final reports in the Defense Technical Information Center (DTIC) database, neither the website nor the database has all lessons learned from equipment-related corrosion projects. Military departments’ Corrosion Executives, project managers, and project personnel stated that lessons learned are shared in specific ways, such as through conferences, working groups, and personal contacts. For example: Conferences: The Corrosion Office has hosted the triannual DOD Corrosion Forum—involving the military departments, private industry, academia, and other government agencies—to share information on the negative effects of corrosion on readiness and safety. Corrosion officials whom we interviewed emphasized the importance of sharing lessons learned at past conferences. Conferences have also included briefings on project ideas and project submissions. For example, the Air Force Corrosion Managers Conference included a briefing on the results of a project on rapid-cure coating for aircraft. However, according to a 2013 DOD budget memorandum, conferences have been curtailed, except for mission-critical activities, and must be approved by component heads or senior officials designated by the component head. As a result, DOD plans to hold the 2013 DOD Corrosion Conference as a webinar for the first time.
Working Groups: The Corrosion Office has a number of Working Integrated Product Teams to discuss and share corrosion information, such as the Corrosion Policy, Processes, Procedures, and Oversight; Communications and Outreach; and Science and Technology teams. Further, officials supporting weapon systems have working groups examining corrosion for their specific systems. For example, Air Force officials examining the use of specific gaskets on C-17 aircraft presented the project’s results, including lessons learned, to the C-17 Corrosion Prevention Advisory Board. Furthermore, these Air Force officials explained that most major weapon systems have a Corrosion Prevention Advisory Board—a team of engineers, depot personnel, and industry officials—that serves as a best-practice forum for discussing corrosion-related technology issues and corrosion management for the weapon system. Personal Contacts: During interviews with project managers and project personnel, we found examples of sharing corrosion information through emails, phone calls, and coordination on joint projects. For example, Marine Corps corrosion officials stated that because they share equipment with the Army through acquisition and other processes, they are knowledgeable of Army lessons learned from corrosion projects. DOD has established formal methods to share many lessons learned with officials working to prevent or mitigate corrosion of military equipment, such as through websites or databases. In 2003, the department established a DOD corrosion website that includes an online library, information on submitting project plans, some nonsensitive final reports, and a members-only section for sharing working-group findings. Additionally, project managers and project personnel stated that they post some information on lessons learned on service-specific corrosion websites, including the Air Force Corrosion Prevention and Control Office website and the Army Aviation and Missile Command Corrosion Program Office website. Further, according to corrosion officials and project managers, the final reports are being archived, as required, at DTIC. We also found lessons learned are shared in departmental databases, such as the Naval Surface Warfare Center database. DOD officials have methods to share some lessons from projects, such as information in final reports, but do not have a centralized and secure database in which corrosion personnel across DOD can access lessons from reports about all completed corrosion projects, including projects involving sensitive information. DOD has archives of final reports in DTIC, but the DTIC system does not include other information about corrosion projects, such as follow-on reports that contain information on the implementation of the projects. The DOD website has some final reports, but it does not post other information that is considered sensitive. The establishment of the website is cited as an accomplishment for one of the goals in the DOD strategic plan. However, DOD has not yet consolidated all project data and outcomes in a way that is available and accessible to all relevant personnel. DOD’s strategic plan states that DOD and the military departments should use rapid and effective web-based strategies for communicating and sharing best practices, including a centralized database to capture corrosion-related technical information across the services to enhance communication, leverage solutions to common problems, and minimize duplication.
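Purely as an illustration of the kind of consolidated record such a centralized database might hold, the sketch below defines a minimal project entry. The field names, values, and access flag are hypothetical and do not reflect DOD’s actual design.

```python
# Hypothetical sketch of a minimal record for a centralized, secure
# corrosion lessons-learned database; field names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CorrosionProjectRecord:
    project_id: str
    service: str                     # e.g., "Army", "Navy", "Air Force"
    fiscal_year: int
    status: str                      # "demonstration", "transitioned", or "discontinued"
    estimated_roi: float             # from the initial project plan
    reassessed_roi: Optional[float]  # from the follow-on report, if submitted
    lessons_learned: list[str] = field(default_factory=list)
    sensitive: bool = False          # would drive access control in a secure portal

records = [
    CorrosionProjectRecord(
        "FY07-001", "Air Force", 2007, "transitioned",
        estimated_roi=12.0, reassessed_roi=9.5,
        lessons_learned=["Lab and outdoor exposure results differed for some coatings"],
    ),
    CorrosionProjectRecord(
        "FY06-002", "Army", 2006, "discontinued",
        estimated_roi=8.0, reassessed_roi=None,
        sensitive=True,
    ),
]

# A consolidated store would let the corrosion community answer questions
# the report says DOD could not, e.g., which projects were discontinued.
print([r.project_id for r in records if r.status == "discontinued"])
```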
Also, DOD Instruction 5000.67 requires that the Corrosion Office’s long-term strategy for corrosion prevention and mitigation of military equipment provide for the implementation of programs, including supporting databases, to ensure a focused and coordinated approach throughout DOD to collect, review, reassess, and distribute information on relevant proven methods and products. Finally, Standards for Internal Control in the Federal Government states that federal program managers should have pertinent information distributed in a form and time frame that permits them to perform their duties efficiently. According to Corrosion Executives, project managers, and project personnel, DOD and the military departments could benefit from a coordinated, centralized approach to archiving all relevant information, including sensitive information that should not be disclosed to the general public, on methods and products proven to prevent or mitigate corrosion of military equipment. Also, a Defense Science Board report on corrosion control stated that “when properly implemented, lessons learned from the corrosion program will drive future design, acquisition, and performance specifications.” To meet its goal to share lessons throughout the department, DOD has begun work to develop a database that would contain relevant information, including lessons learned, on all projects and their outcomes—including sensitive or proprietary information. However, officials at the Corrosion Office stated they are in the early stages of developing the database and are unsure when it will be completed. For example, they are still considering how the information would be made accessible in a secure way, such as through a nonpublic portal of the DOD corrosion website or through another DOD portal. Until a comprehensive, centralized, and secure database is developed that includes lessons learned from all completed corrosion projects, including those with sensitive information, officials from DOD’s corrosion community will not have full and complete information on lessons learned, including proven methods or products to prevent or mitigate corrosion of military equipment. DOD relies on the outcomes of its corrosion projects to reduce the life-cycle costs of its military equipment through the timely sharing of information about successful projects with all relevant officials in DOD’s corrosion community. Corrosion Office officials have provided assistance to project managers for the submission of required reports on whether specific corrosion-control technologies are effective; however, project managers have not consistently followed DOD’s strategic plan in collecting and reporting information to verify whether all projects are achieving benefits other than the ROIs estimated in project plans. Without specific guidance to require that follow-on reports include details of measures of achievement other than ROI, including benefits, the Corrosion Office will be missing the opportunity to know whether equipment-related corrosion projects have achieved outcomes to prevent or mitigate corrosion. Further, the Corrosion Office has not consolidated information on projects’ status, such as whether a project was recommended for transition to military departments’ use or has been discontinued, and was unaware of which projects were discontinued.
Without a mechanism or tool to assist in monitoring and consolidating status information about whether the technology or method demonstrated by each equipment-related corrosion project has transitioned to the military departments’ use, the Corrosion Office and the Corrosion Executives may not have timely information about whether the corrosion projects produced proven methods and products to prevent the corrosion of military equipment. Also, the Corrosion Office may not have a complete understanding of the success of projects if the military departments do not have specific guidance for reporting the ROIs of discontinued projects and therefore report the ROIs in varying ways. Finally, DOD has not consolidated all lessons learned in a way that is available and accessible to all relevant personnel. Until a comprehensive, centralized, and secure database is developed that includes lessons learned from all completed corrosion projects, officials from DOD’s corrosion community will not have full and complete information on lessons learned, including proven methods or products to prevent or mitigate corrosion of military equipment. We are making four recommendations to improve DOD’s corrosion-prevention and control program: To enhance DOD’s oversight of the status and potential benefits of its equipment-related corrosion projects, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to revise the DOD Corrosion Prevention and Mitigation Strategic Plan or other guidance to require that the military departments include in all follow-on reports the details of measures of achievement other than ROI, such as the features, results, and potential benefits of the project. To enhance tracking of DOD’s equipment-related corrosion projects, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to develop a tool or mechanism to assist in monitoring and consolidating the status information for each equipment-related corrosion project about whether the demonstrated technology or method has transitioned to military departments’ use. To ensure consistent reporting for all equipment-related corrosion projects, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to revise guidance to specify how project managers should report the ROI for discontinued projects. To enhance planning for corrosion prevention and mitigation, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to establish a time frame for completing the comprehensive and secure database so that all relevant officials of DOD’s corrosion community have access to the proven technology methods, products, and other lessons learned from all corrosion projects to prevent or mitigate corrosion of military equipment. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix III, DOD concurred with two of our four recommendations. DOD partially concurred with one recommendation, and based on additional information provided in its comments, we revised that recommendation. Finally, DOD did not concur with one recommendation.
DOD concurred with our second recommendation that the Director, Corrosion Policy and Oversight Office, enhance tracking of DOD’s equipment-related corrosion projects by developing a tool or mechanism to assist in monitoring and consolidating the status information for each equipment-related corrosion project about whether the demonstrated technology or method has transitioned to military departments’ use. As DOD notes in its comments, the Corrosion Policy and Oversight Office will monitor transition status using the corrosion Engineering Resource Data Management (ERDM2) database program currently under development. According to DOD, ERDM2 is designed to collect, classify, and file data on all aspects of corrosion projects and to provide the DOD corrosion community access to information and tailored status reports. DOD concurred with our fourth recommendation that the Director, Corrosion Policy and Oversight Office, enhance planning for corrosion prevention and mitigation by establishing a time frame for completing the comprehensive and secure database so that all relevant officials of DOD’s corrosion community have access to the proven technology methods, products, and other lessons learned from all corrosion projects to prevent or mitigate corrosion of military equipment. DOD stated in its comments that the development of the comprehensive and secure ERDM2 data-management tool is underway and is a high priority. According to DOD, development and deployment will occur incrementally and simultaneously to ensure that the needs of all stakeholders are met. DOD anticipates that the initial phase of ERDM2 will contain data from completed projects and will be in place by December 31, 2013. DOD partially concurred with our third recommendation in the draft report that the Director, Corrosion Policy and Oversight Office, revise guidance to specify how the military departments’ Corrosion Executives and project managers should report the ROI for discontinued projects to ensure consistent reporting for all equipment-related corrosion projects. In partially concurring with this recommendation, DOD stated that the military departments’ Corrosion Executives do not actively execute projects or engage in the calculation of the ROI, so the next revision of DOD’s Corrosion Prevention and Mitigation Strategic Plan will address only how project managers will calculate and report ROI on discontinued projects to the Director, Corrosion Policy and Oversight. While we found that the military departments’ Corrosion Executives review and coordinate through their respective chains of command to provide information on corrosion projects to the Director of the Corrosion Office, we agree that the Corrosion Executives do not actively execute the corrosion projects or engage in the calculation of the ROI. Thus, we have revised the recommendation to include only the project managers. DOD did not concur with our first recommendation that the Director, Corrosion Policy and Oversight Office, revise the DOD Corrosion Prevention and Mitigation Strategic Plan or other guidance to require that the military departments include in all follow-on reports the details of measures of achievement other than ROI, such as the features, results, and potential benefits of the project. In its response, DOD stated that the DOD Corrosion Prevention and Mitigation Strategic Plan currently provides sufficient guidance in this regard and that it is not necessary to revise this guidance.
DOD cited instructions in section 3, appendix D, of the strategic plan about the 2-year follow-on reporting, which is to include a focus on assessing the ROI computed at project completion, as well as other features and benefits of the projects. Additionally, this appendix accompanying the strategic plan includes instructions on completing and submitting a checklist, also regarded as the follow-on report, to fulfill the requirements. We noted in our report that the checklist for the follow-on report, which shows items to be reviewed on the status of projects, allows project managers to check “yes” or “no” for each item but does not require them to write details about any benefits of the project. During our review, we found that about three-fourths of the completed checklists for the follow-on reports were modified by project managers of their own accord to include some measures of achievement of completed projects, such as when outcomes prompted changes to military equipment specifications and standards. However, one-fourth of the follow-on reports did not include information about features and benefits of completed projects. Specifically, we found that 8 of 30 follow-on reports contained little to no narrative detail because there was no requirement to do so. While DOD’s strategic plan provides instructions for the 2-year follow-on reporting, the plan with its accompanying instructions for completing the follow-on reports does not require that project managers include details about any benefits of the project. We maintain that DOD could enhance its oversight of corrosion projects by providing additional, specific guidance to require that follow-on reports include details of measures of achievement other than ROI, including project benefits, to allow the Corrosion Office to have additional information about whether equipment-related corrosion projects have achieved outcomes to prevent or mitigate corrosion. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps; the Director of the DOD Office of Corrosion Policy and Oversight; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has ensured the submission of required reports for equipment-related corrosion projects, we reviewed the DOD Corrosion Prevention and Mitigation Strategic Plan and its revised versions, and we used the reporting milestones outlined in the plan to identify the types of reports required for each project. We originally received project documentation for 129 projects, from which we selected our sample. However, one project was eliminated because it was funded in fiscal year 2013. As a result, we obtained project information for 128 equipment-related corrosion demonstration projects funded by the DOD Corrosion Policy and Oversight Office (hereafter referred to as the Corrosion Office) for fiscal years 2005 through 2012.
We requested and reviewed the project documentation—project proposals, final reports, and follow-on reports—to determine whether the data and related reports met the Corrosion Office’s reporting requirements. For the purposes of our work in reviewing projects funded in fiscal years 2005 through 2010, we considered a final or follow-on report to be submitted as required if the Corrosion Office had a copy of the report in its records system, and we confirmed the accuracy of this information with the Corrosion Control and Prevention Executives (hereafter referred to as Corrosion Executives). We did not consider the timeliness of the submitted reports. We received project documentation through May 15, 2013. Additionally, for follow-on reports, we could assess only the projects funded in fiscal years 2005 through 2007 because the DOD strategic plan’s milestone requires submission of follow-on reports for completed projects within 2 years after the projects have been completed and transitioned to use within the military departments. We determined that the project-reporting data were sufficiently reliable for the purposes of determining the extent to which the military departments met the Corrosion Office’s reporting requirements. We did not assess the content of the reports themselves. We interviewed officials from the Corrosion Office, as well as the Army, Navy, and Air Force Corrosion Executives, to understand the process of what reports are required and when; challenges and limitations, if any, in completing the reports; and how projects are tracked if required reports have not been submitted. Further, we interviewed these officials to determine why the required reports were not submitted. Also, we determined what actions, if any, they planned to take to complete the reports. Moreover, we selected a nongeneralizable sample of 43 projects for further review and conducted an in-depth analysis of the projects selected. We selected the sample using a systematic random approach, as illustrated in the sketch following this passage. We ordered the population first by service, then by fiscal year, location, and project manager. Next we selected a random starting point and then selected every third project. Our nongeneralizable sample-selection methodology ensured the selection of a variety of projects across all fiscal years, locations, and services. We used a semistructured interview tool to obtain information from project managers and project personnel to understand reporting requirements and time frames as well as challenges and limitations, if any, that they had in completing the reports. We also reviewed prior GAO work on DOD’s corrosion-prevention and mitigation program. To determine the extent to which DOD has collected the information needed to determine whether benefits and other measures have been achieved from equipment-related corrosion projects, we reviewed key documents, including DOD Instruction 5000.67 and DOD’s strategic plan. We examined DOD Instruction 5000.67 to gain an understanding of the roles and responsibilities to develop procedures for corrosion planning and implementation and to review, manage, and document project results.
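The following sketch illustrates the systematic selection just described. The project data are hypothetical stand-ins (128 records, matching the population size), and the ordering keys mirror those named in the text.

```python
# Minimal sketch of the systematic sample selection described above,
# using hypothetical project data: order by service, fiscal year,
# location, and project manager; pick a random start; take every third.
import random

projects = [
    {"service": s, "fiscal_year": fy, "location": f"Site {i % 4}", "manager": f"PM {i:03d}"}
    for i, (s, fy) in enumerate(
        (s, fy)
        for s in ("Air Force", "Army", "Marine Corps", "Navy")
        for fy in range(2005, 2013)
        for _ in range(4)  # 4 services x 8 years x 4 = 128 hypothetical projects
    )
]

# Order the population by the keys named in the text.
ordered = sorted(
    projects,
    key=lambda p: (p["service"], p["fiscal_year"], p["location"], p["manager"]),
)

interval = 3
start = random.randrange(interval)   # random starting point
sample = ordered[start::interval]    # every third project thereafter

# With a population of 128 and an interval of 3, the sample size is 42 or 43,
# consistent with the 43-project sample described in the text.
print(f"population = {len(ordered)}, start = {start}, sample size = {len(sample)}")
```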
We examined DOD’s strategic plan to gain an understanding of the department’s strategy for justifying funding for corrosion projects by verifying the initial investment. The plan provides guidance on collecting information about any changes to the assumptions used in the initial estimated return on investment (ROI) in order to recompute the ROI and determine whether it is lower than expected, as expected, or better than expected. Finally, we examined guidance on internal controls to identify relevant responsibilities and practices that could be used as criteria. We reviewed all follow-on reports provided by the Corrosion Office and the military departments, which included 30 follow-on reports on projects funded in fiscal years 2005 through 2008, to determine whether the military departments have collected and reported measures of achievement of their completed corrosion projects other than ROI, such as when outcomes prompt changes in specifications, standards, technical manuals, and other reference or guidance documents. We compared the amount of detail provided in the follow-on reports. Additionally, we interviewed officials from the Corrosion Office as well as the military departments’ Corrosion Executives to understand whether and how they collect data in order to determine whether the estimated ROIs have been achieved. Additionally, from our nongeneralizable systematic random sample of 43 projects, we interviewed project managers and project personnel to gain an understanding of how they provide information on the status and results of corrosion projects that have completed research and development, transitioned to a service’s use, and been in use for 2 years. Specifically, we interviewed these officials to understand how they verify the initial investment of corrosion projects, including what the projects’ assumptions were, how the assumptions were tracked during the first few years of the project, and the extent to which implementation affected the ROI recomputation. For projects that were still in the demonstration phase, or had just been transitioned to a service’s use, we interviewed the officials to understand their plans to collect information to verify the initial investment. To determine the extent to which DOD has tracked the status of equipment-related corrosion projects, we reviewed relevant law to understand legislative requirements, including a long-term strategy and a coordinated research and development program for the prevention and mitigation of corrosion for new and existing military equipment, which includes a plan to transition new corrosion-prevention technologies into operational systems. Further, we examined DOD Instruction 5000.67 to gain an understanding of the department’s policy on the prevention and mitigation of corrosion on DOD military equipment as well as the roles and responsibilities of the Corrosion Office and Corrosion Executives to collect, review, reassess, and distribute information on proven methods and products that are relevant to preventing corrosion of military equipment. We analyzed documentation for each of the 43 projects in our sample, specifically reviewing the project plans, final reports, and follow-on reports, to analyze variables including assumptions, the initial estimated ROI, the reassessed ROI, recommendations to transition to service use, project status, and benefits and outcomes other than the ROI.
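The ROI comparison described above—recomputing the ROI and judging it against the initial estimate—can be sketched as follows. The classification thresholds and figures are hypothetical; the strategic plan does not specify a numeric tolerance.

```python
# Hypothetical sketch of comparing a recomputed ROI with the initial
# estimate and classifying it as lower than expected, as expected, or
# better than expected. The 10 percent tolerance is illustrative only.

def classify_roi(estimated: float, recomputed: float, tolerance: float = 0.10) -> str:
    """Classify a recomputed ROI relative to the initial estimate."""
    if recomputed < estimated * (1 - tolerance):
        return "lower than expected"
    if recomputed > estimated * (1 + tolerance):
        return "better than expected"
    return "as expected"

# Hypothetical project figures: (initial estimate, recomputed value).
for estimated, recomputed in [(14.0, 141.3), (12.0, 11.5), (20.0, 6.0)]:
    print(f"estimated {estimated:6.1f}, recomputed {recomputed:6.1f} "
          f"-> {classify_roi(estimated, recomputed)}")
```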
We interviewed Corrosion Office officials to determine what status information is collected for each project, how such information is consolidated, and what analysis is done to oversee the status and outcomes of each project. Likewise, we interviewed Corrosion Executives to determine their approach to collecting, reviewing, reassessing, and distributing information on proven methods and products that are relevant to preventing corrosion of military equipment. Specifically, we interviewed these officials to gain an understanding of how project results were reviewed, managed, and documented. To determine the extent to which DOD has identified, shared, and incorporated lessons learned from equipment-related corrosion projects into future planning to prevent or mitigate corrosion, we reviewed key documents, including relevant law to understand legislative requirements, and DOD policy and guidance. For example, we examined DOD Instruction 5000.67 to understand the department’s policy to ensure a focused and coordinated approach throughout DOD to collect, review, reassess, and distribute information on relevant proven methods and products. We also examined DOD’s strategic plan to understand the department’s guidance on using rapid and effective web-based strategies for communicating and sharing best practices and capturing corrosion-related technical information across the services, and to determine the requirements for lessons learned to be incorporated into project documentation, specifically the final report. Finally, we examined guidance on internal controls to identify relevant responsibilities and practices that could be used as criteria. We analyzed all final reports to determine whether lessons learned were being included and the extent to which they were being incorporated into future planning and guidance. We interviewed Corrosion Office officials to learn about their efforts to develop a centralized database for project information that included lessons learned. We interviewed Corrosion Executives and their staffs to learn about how lessons learned are shared and incorporated. Additionally, from our nongeneralizable systematic random sample of 43 projects, we interviewed project managers and project personnel to gain an understanding of how lessons learned are collected, documented, shared, and incorporated into future corrosion planning. Specifically, we interviewed these officials to gain an understanding of what data are collected and how they are analyzed, archived, and disseminated across the department. We visited or contacted the following offices during our review: Office of Corrosion Policy and Oversight; Air Force Corrosion Control and Prevention Executive; Air Force Corrosion Prevention and Control Office, Robins Air Force Base, Georgia; Air Force Materiel Command, Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio; Army Corrosion Control and Prevention Executive; Army Research Lab, Aberdeen Proving Ground, Maryland; Aviation and Missile Command Corrosion Program Office, Redstone Arsenal, Alabama; Corpus Christi Army Depot, Texas; Office of the Assistant Secretary of the Army, Acquisition, Logistics and Technology; Tobyhanna Army Depot, Pennsylvania; U.S. Army Armament Research, Development and Engineering Center, Picatinny Arsenal, New Jersey; U.S. Army Tank-Automotive Research, Development and Engineering Center; Navy Corrosion Control and Prevention Executive; Naval Surface Warfare Center, Carderock Division; Naval Air Systems Command, Patuxent River Naval Air Station, Maryland; and Naval Sea Systems Command. We conducted this performance audit from July 2012 through September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Defense Corrosion Prevention and Mitigation Strategic Plan includes the template of the follow-on or project review checklist for project managers to document the reassessed return on investment and other features and benefits of the equipment-related corrosion projects. In addition to the contact named above, Carleen Bennett, Assistant Director; Clarine Allen; James Ashley; Laura Czohara; Mark Dowling; Linda Keefer; Charles Perdue; Carol Petersen; Richard Powelson; Amie Steele; and John Van Schaik made key contributions to this report. Defense Infrastructure: DOD Should Improve Reporting and Communication on Its Corrosion Prevention and Control Activities. GAO-13-270. Washington, D.C.: May 31, 2013. Defense Management: Additional Information Needed to Improve Military Departments’ Corrosion Prevention Strategies. GAO-13-379. Washington, D.C.: May 16, 2013. Defense Management: The Department of Defense’s Annual Corrosion Budget Report Does Not Include Some Required Information. GAO-12-823R. Washington, D.C.: September 10, 2012. Defense Management: The Department of Defense’s Fiscal Year 2012 Corrosion Prevention and Control Budget Request. GAO-11-490R. Washington, D.C.: April 13, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010. Defense Management: DOD Has a Rigorous Process to Select Corrosion Prevention Projects, but Would Benefit from Clearer Guidance and Validation of Returns on Investment. GAO-11-84. Washington, D.C.: December 8, 2010. Defense Management: Observations on Department of Defense and Military Service Fiscal Year 2011 Requirements for Corrosion Prevention and Control. GAO-10-608R. Washington, D.C.: April 15, 2010. Defense Management: Observations on the Department of Defense’s Fiscal Year 2011 Budget Request for Corrosion Prevention and Control. GAO-10-607R. Washington, D.C.: April 15, 2010. Defense Management: Observations on DOD’s Fiscal Year 2010 Budget Request for Corrosion Prevention and Control. GAO-09-732R. Washington, D.C.: June 1, 2009. Defense Management: Observations on DOD’s Analysis of Options for Improving Corrosion Prevention and Control through Earlier Planning in the Requirements and Acquisition Processes. GAO-09-694R. Washington, D.C.: May 29, 2009. Defense Management: Observations on DOD’s FY 2009 Budget Request for Corrosion Prevention and Control. GAO-08-663R. Washington, D.C.: April 15, 2008.
Defense Management: High-Level Leadership Commitment and Actions Are Needed to Address Corrosion Issues. GAO-07-618. Washington, D.C.: April 30, 2007. Defense Management: Additional Measures to Reduce Corrosion of Prepositioned Military Assets Could Achieve Cost Savings. GAO-06-709. Washington, D.C.: June 14, 2006. Defense Management: Opportunities Exist to Improve Implementation of DOD’s Long-Term Corrosion Strategy. GAO-04-640. Washington, D.C.: June 23, 2004. Defense Management: Opportunities to Reduce Corrosion Costs and Increase Readiness. GAO-03-753. Washington, D.C.: July 7, 2003. Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.
According to DOD, corrosion can significantly affect the cost of equipment maintenance and the expected service life of equipment. Corrosion affects military readiness by taking critical systems out of action and creating safety hazards. GAO was asked to review DOD's military-equipment corrosion-prevention and mitigation projects. In this report, GAO addressed the extent to which DOD has (1) ensured the submission of required reports for equipment-related corrosion projects; (2) collected the information needed to determine whether benefits and other measures have been achieved from equipment-related corrosion projects; (3) tracked the status of equipment-related corrosion projects; and (4) identified, shared, and incorporated lessons learned from equipment-related corrosion projects into future planning to prevent or mitigate corrosion. To conduct this work, GAO reviewed DOD policies and plans and met with DOD corrosion officials. The Department of Defense (DOD) has invested more than $63 million in 88 projects in fiscal years 2005 through 2010 to demonstrate new technology or methods addressing equipment-related corrosion. DOD's Office of Corrosion Policy and Oversight (Corrosion Office) has collected a majority of required final and follow-on reports on the results of equipment-related corrosion projects and is taking steps to obtain outstanding reports. As of May 2013, GAO found project managers had submitted final reports for 55 of the 88 projects (about 63 percent) funded in fiscal years 2005 through 2010 and follow-on reports for 27 of the 41 projects (about 66 percent) funded from 2005 through 2007. DOD requires the military departments to collect and report to the Corrosion Office key information from equipment-related corrosion projects about new technologies or methods; however, DOD does not have complete information about the benefits of all projects. GAO found that the military departments inconsistently reported measures of achievement other than the return on investment (ROI), such as when outcomes prompted changes to military equipment specifications. Further, the military departments did not always collect the required information needed to recompute the estimated ROI and were unable to determine whether projects had achieved their estimated ROI. Corrosion Office officials plan to revise guidance on how project managers should reassess the ROI. Without specific guidance to require that follow-on reports include details of measures of achievement other than ROI, the Corrosion Office will be missing the opportunity to know whether equipment-related corrosion projects have achieved outcomes to prevent corrosion. DOD has taken steps to improve oversight of its equipment-related corrosion projects, such as revising its DOD Corrosion Prevention and Mitigation Strategic Plan to provide additional guidance on reporting requirements. However, DOD does not have a comprehensive overview of the status of all equipment-related corrosion projects. While the reports provide the status of each project, GAO found that the Corrosion Office does not consolidate information to monitor the status of all these projects, such as whether a project has not transitioned to service use or has been discontinued. Further, GAO found that project managers varied in how they reported the ROI for discontinued projects.
Without a mechanism to consolidate projects' status to facilitate monitoring and guidance for reporting ROIs for discontinued projects, the Corrosion Office and the military departments may not have timely information about whether the corrosion projects produced proven methods and products to prevent the corrosion of military equipment. DOD has identified and incorporated lessons learned from equipment-related corrosion projects and shared some lessons with the corrosion community; however, DOD has no centralized and secure database or other source to share lessons from all project reports, including those with sensitive information. While DOD has begun to develop a database that would contain lessons learned on all projects, development is in the early stages, and DOD is unsure when it will be completed. Until a comprehensive, centralized, and secure database is developed that includes lessons learned from all completed projects, officials from DOD's corrosion community will not have full and complete information on lessons learned, including proven methods or products to prevent or mitigate corrosion of military equipment. GAO recommends four actions to improve the oversight of DOD’s corrosion-prevention and control program. DOD concurred with two recommendations, partially concurred with one, and did not concur with one. DOD plans to develop a database to collect data and lessons learned on corrosion projects and to revise guidance on how to report the ROI for discontinued projects. DOD did not agree that guidance should be revised to ensure that the military departments consistently report projects’ benefits. GAO maintains that this recommendation is warranted for project oversight.
Foreign language needs have significantly increased throughout DOD and the federal government with the presence of a wider range of security threats, the emergence of new nation states, and the globalization of the U.S. economy. The difficulties in maintaining sufficient foreign language capabilities among federal agencies and departments have been identified as a serious human capital problem for some time. The entire military has faced shortfalls in language capability in recent operations, such as difficulties in finding sufficient numbers of qualified language speakers during peacekeeping operations in the Balkans and combat actions in Afghanistan. In recent reports, we have stated that shortages of staff with foreign language skills have affected agency operations and have hindered U.S. military, law enforcement, intelligence, counterterrorism, and diplomatic efforts. The U.S. Special Operations Command faces similar challenges in managing its SOF language training to maintain sufficient language capability to support its missions. For example, (1) it is common for SOF personnel to have received language training in more than three languages during their careers; (2) SOF units often operate in geographic regions where there are numerous languages; (3) high operational demands and force structure limitations often require SOF personnel to operate in areas where their specific foreign language(s) are not spoken; and (4) it is difficult to determine the right languages and personnel mix to address a wide variety of unknown and hard-to-forecast small-scale conflicts. The U.S. Special Operations Command established its SOF Foreign Language Program in 1993 to provide combatant commanders with SOF individuals and units that have the required foreign language proficiency to meet current and future operational requirements. The command designated the U.S. Army Special Operations Command, at Fort Bragg, North Carolina, as the proponent in all matters related to training, policies, programs, and procedures for SOF language requirements and capabilities. In 1998, the Army Command established the Special Operations Forces Language Office at Fort Bragg. Currently located in the command’s training directorate, the office is responsible for providing technical oversight and developing, coordinating, and executing foreign-language-training strategies for active-duty, reserve, and National Guard SOF personnel within the three service components: the U.S. Army Special Operations Command, the U.S. Naval Special Warfare Command, and the U.S. Air Force Special Operations Command. The office is also responsible for running the Army’s SOF foreign language program. The Navy and Air Force SOF components are responsible for managing their own language-training programs. The foreign language program provides training for more than 12,000 SOF military personnel (about 28 percent of all 43,671 SOF personnel) who are required to acquire some level of proficiency in one or more foreign languages. Of these, about 90 percent (10,833) are in the U.S. Army Special Operations Command; more than half of them are in Army Reserve or National Guard units. (See table 1.) The remaining 10 percent of SOF personnel with language needs are in the U.S. Naval Special Warfare Command (1,128) and the U.S. Air Force Special Operations Command (155). The training consists of initial acquisition (becoming proficient in a new language), sustainment (maintaining a proficiency), and enhancement (raising a proficiency).
It also includes a basic orientation to the customs and cultures of the world regions where the languages are used. SOF personnel require foreign language skills in most of the special operations forces’ core tasks, such as unconventional warfare, counterterrorism, counterproliferation of weapons of mass destruction, civil affairs, psychological operations, information operations, and foreign internal defense. The command, in coordination with the organizations for which it provides forces, determines the languages, levels of proficiency, and number of language-qualified personnel needed in its units through an assessment of the operational needs of the geographic unified commands. Currently, SOF has requirements in more than 30 foreign languages, such as Chinese Mandarin, Modern Arabic, Indonesian, Korean, Persian-Farsi, Russian, and Spanish. In contrast with other intelligence or diplomatic foreign language training, SOF training places greater emphasis on oral communication skills (speaking and listening) than on nonverbal skills (reading and writing) in order to give SOF personnel the ability to communicate during operations in the field. The level of proficiency that needs to be achieved varies by unit and mission and can range from the limited skills necessary to understand and utter certain memorized phrases for immediate survival to the more intermediate skills (e.g., the ability to deal with concrete topics in past, present, and future tenses) necessary to meet routine social demands and limited job requirements. For example, the Army’s Special Forces units (active-duty and National Guard), which account for about half of the Army personnel with a language requirement, generally need only a limited command of the language for immediate survival needs. Personnel who conduct psychological operations, foreign internal defense, and civil affairs missions generally need higher proficiency skills because of their greater contact and interaction with local civilians and military personnel. Although higher proficiency levels are desired, language is only one of the many skills that SOF personnel must acquire and maintain to effectively conduct their missions, and often not the highest priority. Appendix II provides information on language proficiency levels and requirements. The special operations forces foreign language program is funded directly through the command’s annual budget. Funding for the program amounted to $9.5 million and $10.2 million in fiscal years 2002 and 2003, respectively, and it is projected to be $11.1 million in fiscal year 2004. The command provides portions of the program’s funding to each service component command to pay for its own respective foreign-language-training activities and to SOFLO to manage the program. The program’s funding constitutes a very small portion of the command’s annual budget, which is projected to be about $6.7 billion in fiscal year 2004. The command and SOFLO have taken several recent actions to begin addressing a number of long-standing problems in delivering and managing foreign language training for special operations forces. However, these actions are being taken without the benefit of a cohesive management framework that incorporates strategic planning (a strategy and strategic plan with associated performance plans and reports) and that would guide the program, integrate its activities, and monitor its performance.
Such an approach would help the program maintain its present momentum, better manage its human capital challenges, and meet the language-training needs of SOF personnel as they take on new roles and responsibilities. The command and SOFLO are taking several actions that begin to strengthen the foreign-language-training program for SOF forces. These actions include consolidating all language training under a single contractor, completing a long-overdue assessment of language requirements, improving communication and coordination with all program stakeholders, developing a database to monitor language proficiencies and training, and looking for ways to make use of other foreign-language-training assets. According to a SOFLO official, these actions have been initiated in part because of the command’s increased attention since September 11, 2001, to issues involving the SOF language capabilities necessary to carry out core missions. For many years, the SOF foreign-language-training program’s service components and their units acquired language training through multiple contractors, encompassing a variety of private companies and universities. According to command officials, this practice led to inconsistencies in the type and quality of training, the response to meeting new or changing language requirements, and the way language training was acquired by individual service components. Various contractors used different instruction methods, and their training materials varied in quality. In September 2002, the command awarded all of its commercial language training to a single contractor, B.I.B. Consultants. Command officials told us that the new 5-year contract provides for greater standardization and a more consistent approach to language training and improves the way language-training services are acquired throughout the command. Specifically, the new contract offers a universal, standardized training curriculum; an ability to customize instruction to meet specific needs; a way to attain language proficiencies faster; and consistent monitoring of instruction and individual performance. The contractor, a business franchise of Berlitz International, plans to use its parent company’s worldwide resources to provide SOF personnel with a variety of instruction services (such as classroom instruction, tutoring, and total immersion training in a live or virtual environment). Command officials also believe that the instruction method used by the contractor offers a way for SOF personnel to attain proficiency faster. To fully realize the benefits of the new contract, the command has required each of its service components and their units to use the contract to meet all their language-training needs, except when they take advantage of other government language resources, such as the Defense Language Institute. Some of the B.I.B. contract costs are higher than those in previous contracts because the command awarded the new contract on the basis of “best value” and gave management and technical factors higher consideration than price. A SOFLO official estimated that the annual contract cost is currently about $5.5 million to $6 million. If this figure remains the same each year, the total cost of the 5-year contract is projected to be about $30 million. A SOFLO official said that the total amount could be higher if SOF service components utilize more of the contract’s language services.
This could happen as the service components and their units become more familiar with the contract services and as more SOF personnel return from current deployments and are able to access language training. The official also said that some costs are higher than those in prior contracts for such language-training services as total immersion, in which students practice a language while living in another country or in a language-controlled isolated environment. Command officials believe the improved quality and delivery of language training outweigh any increased cost. B.I.B. Consultants appears to be meeting the expectations set out in its contract with the command, including having its beginning language students meet their proficiency goals. At the command’s initial quarterly contract review in March 2003, which covered the first 5 months of implementation, command and contractor officials focused on provisions in the contract and on procedural aspects, such as scheduling training, providing materials, and developing contacts. Command officials raised several issues largely related to the cost and implementation of immersion training, classroom requirements for instructors and materials, and the delivery of tactical language training. On the basis of discussions among attendees and our observations at the review, none of the issues discussed appeared irresolvable, and most of them could be addressed by improved communications and more experience in understanding and executing the contract. For example, B.I.B. officials agreed to work with the service components to find ways to reduce some immersion training costs. A second contract review was held in August 2003. According to SOFLO, each of the command’s service components is using the language services provided under the B.I.B. contract, and the results from some initial acquisition classes indicate that students are achieving most of the proficiency goals. A B.I.B. contract manager told us that the company believes it is successfully implementing the provisions of its contract. The official said that B.I.B. Consultants and Berlitz International had formed a joint team in October 2002 to manage all contract operations necessary to provide the full range of training services requested by the government. The official said that B.I.B. had successfully delivered the services requested through July 2003 and had promptly addressed the few issues (e.g., higher costs for immersion training and the quality of some materials) that arose. Appendix III provides additional information on the status of the contract’s implementation at the command’s service components and our analysis of the preliminary results of the students’ performance under the new contract. In another action, the command is nearing the completion of a long-overdue assessment of its SOF foreign language requirements. The assessment is based on the operational requirements identified by the command in conjunction with the geographic unified commanders. It validates the languages, proficiency levels, and number of positions in each SOF unit that are needed to conduct special operations missions. The assessment is used by the SOF service components and SOFLO to determine future language-training requirements. Although such assessments are supposed to be conducted at least every 2 years, this is the first commandwide assessment since 1997. Command officials expect the assessment to be approved by the fall of 2003.
SOFLO is in the process of expanding its communications and coordination with all of the stakeholders involved in delivering language training to SOF personnel. According to officials at the Navy and Air Force SOF components, the Defense Language Institute, and DOD headquarters, SOFLO officials have recently increased their contacts and visits with them to discuss language issues and ways to improve coordination. In addition, in December 2002, SOFLO reinstituted an annual language conference, which had not been held since 1997 and is designed to serve as a forum where SOF language issues can be discussed and resolved. Conference attendees included command representatives from headquarters and the service components, as well as guests from the intelligence, academic, and other language-using communities who were invited to gain an appreciation of the differences between SOF requirements and those of other DOD language organizations and to share their perspectives. SOFLO held another conference in August 2003. SOFLO also has recently developed an Internet-based Web site to provide information on SOF language training, including schedules of courses and other training opportunities; links to the latest directives, policies, and procedures; training help-aids; points of contact; upcoming events; and information about the B.I.B. contract and other language resources. Although some difficulties remain with providing all SOF personnel with full access to the Web site, a SOFLO official told us that the Web site should help increase the program’s visibility and provide information about the command’s language training. Several Navy, Air Force, and command officials we talked with said that, over the years, SOFLO has focused largely on Army SOF language issues and has paid less attention to the Navy and Air Force language programs. These officials said that SOFLO’s recent efforts to increase its visits and contacts, hold an annual conference, and develop other communication tools should help to bring more balance and an increased “joint” focus to the program. Also, Defense Language Institute officials stated that the increased contacts between their organization and SOFLO would allow the institute to better understand SOF language needs and determine how it could best support the program. SOFLO is developing a central, standardized database to capture information on the language training and proficiency status of SOF personnel and to assess language capabilities across the services. A SOFLO official said that full implementation of the database is critical because there is currently no centralized commandwide system to track or access information related to language readiness or training. Service components and their units will be responsible for updating their portion of the data each quarter. In the future, SOFLO plans to develop a Web-based data-entry capability to make updating easier and more user friendly. While most language-training needs are met by the new B.I.B. contract, SOFLO is exploring ways to expand its use of other national language resources to complement and provide additional support for its program. Such language assets can offer training and technology capabilities that are not available in the SOF program and include the following: The Defense Language Institute, which is DOD’s primary source of language instruction, has developed tactical language help-aids (e.g., pocket cards with key phrases and words) that can be used to support language needs during military operations.
The institute also provides real-time video language instruction for many military facilities around the world and is developing other distance/distributive-learning capabilities. Several SOF unit personnel told us that they value the institute's resident training and would attend if their time allowed it.

The Satellite Communications for Learning (SCOLA) broadcast network's programming provides access to most world languages, including less common languages that are not often taught in the United States. By watching and listening, students are able to actually experience the foreign culture and develop their language skills in a native, real-life environment. The broadcasts also provide significant insight into the internal events of the various countries. The SOF unit personnel we spoke with said that the network helps students sustain language skills, learn dialects, and improve cross-cultural understanding. SCOLA officials told us that over the next 5 years, they plan to increase the programming, provide Internet delivery of services, improve their infrastructure to better respond to special program requests, and develop on-demand digital video archiving of past programs.

The Defense Advanced Research Projects Agency is developing new technologies to improve language translation capabilities. These include hand-held devices that provide limited real-time, face-to-face speech translation in the field. These devices initially were developed for users involved in medical first-response, force-protection, and refugee-reunification missions. SOF personnel used some of these devices during the recent Afghanistan operations. While not a substitute for individual language skills, these new technologies help bridge some language gaps in the field.

While these ongoing actions begin to improve and strengthen the foreign language program, SOFLO is implementing them without the benefit of a cohesive management framework that incorporates strategic planning (a strategy and strategic plan with an associated performance plan and reports). According to a command directive, SOFLO is responsible for developing a long-range SOF language acquisition strategy. Although SOFLO has drafted a document outlining a strategy, it has not yet been approved. A SOFLO official told us that the strategy is expected to be issued by the end of 2003.

Strategic planning is essential for this type of program because it provides the tools for applying good management practices. Such tools include a statement of the program's results-oriented goals and objectives; the strategy the program will use to achieve those goals and objectives, including key milestones and priorities; and the measurements (both quantitative and qualitative) that it will use to monitor and report on its progress, identify necessary corrective actions, and better manage risk. These tools also provide a mechanism to better align the organizational structure, establish clear linkages, assign roles and responsibilities, and determine the program resources needed. Such planning requires top leadership support and, if done well, is continuous, involves all program stakeholders, and provides the basis for everything an organization does each day to support the achievement of its goals and objectives.
Using strategic planning for SOF's foreign language program would also be consistent with the general management principles set forth in the Government Performance and Results Act of 1993, which is the primary legislative framework for strategic planning in the federal government. In our prior reports and guidance, we have also emphasized the importance of integrating human capital considerations into strategic planning to plan for and manage workforce needs more effectively and to address future workforce challenges, such as investments in training and developing people. We recently released an exposure draft that outlines a framework consisting of a set of principles and key questions that federal agencies can use to ensure that their training and development investments are targeted strategically. Additionally, the Office of the Secretary of Defense, in recognition of the need for a more strategic approach to human capital planning, published the Military Personnel Human Resources Strategic Plan in April 2002 to establish military personnel priorities for the next several years.

Strategic planning (a strategy and strategic plan with an associated performance plan and reports) would help ensure that good management principles are used to manage the program and to achieve the results-oriented goals and objectives established for it. Aligning this planning with DOD's overall human capital strategy would further ensure that the pervasive human capital challenges facing the SOF foreign language program are considered in the broader context of overall DOD military personnel priorities. Without such a cohesive management framework, the program may lose its current momentum, and it may be unable to meet the new language-training needs that SOF personnel are likely to have as they take on expanded roles and responsibilities in counterterrorism and other military operations.

The SOF foreign language-training program continues to face ongoing challenges that limit special operations forces' access to language-training opportunities. These challenges include more frequent and longer deployments for active-duty, reserve, and guard units. In addition, Army Reserve and National Guard members face further hurdles in getting access to training because of their geographic dispersion and part-time status. These members also receive lower monetary incentives for achieving required proficiencies and have fewer training opportunities than active-duty members. Greater reliance on SOF personnel in combating terrorism may increase these challenges. Recognizing the underlying problems of access, SOFLO has begun looking into nontraditional training methods, such as distance/distributive-learning tools, including tools that provide on-demand "anytime, anywhere" language training. But program officials are still at an early stage in their evaluations.

Acquiring and maintaining proficiency in a foreign language takes continuous practice and, because language is a highly perishable skill, proficiency can deteriorate rapidly without such practice. As a result, SOF personnel need a wide range of options for gaining access to language-training resources at any time and anywhere they are stationed or deployed. However, the SOF language program is facing several challenges that affect accessibility to language training. In recent years, both active-duty and reserve/guard SOF personnel have had less time for overall training because they have been deployed more frequently and for longer periods of time.
In addition, when they have had time to train, their language training has often competed with other higher-priority training needs, such as marksmanship or nuclear-biological-chemical training. As a result, they have often been unable to complete the necessary language training to reach required proficiencies and to take the necessary tests to qualify in their respective language(s). Furthermore, Army Reserve and National Guard soldiers, who make up more than half of the total number of SOF personnel requiring language proficiency, face additional hurdles in finding time and gaining access to language training. These soldiers are spread across 28 states and are often located at long distances from their unit's facilities, making it difficult to get to centrally located training resources. In addition, they have fewer days available for training because of their part-time status.

Moreover, because of their part-time status, Army Reserve and National Guard soldiers have lower monetary incentives to undertake language training than do active-duty personnel. According to SOFLO, for example, active-duty Army SOF personnel receive foreign language proficiency pay of $100 each month if they attain a language proficiency level of 2. By contrast, Army Reserve and National Guard personnel get $13.33 each month if they attain the same proficiency because their proficiency pay is prorated according to the number of days they train. Many of the more than 50 Army Reserve and National Guard soldiers we spoke with said that, despite the hurdles, they often undertake language training on their own time because of the value they place on foreign language skills in conducting their missions. They added that higher proficiency pay allowances would give them more incentive to study their languages and improve their proficiencies. In its May 2002 report, DOD's Ninth Quadrennial Review of Military Compensation recommended that the services be authorized to pay their reserve and guard members the same monthly amount as active-duty members for maintaining proficiency in designated critical languages in order to provide consistency in the application of special pay between reserve and active-duty members.

Additionally, a SOFLO official told us that current pay and allowance funding levels for Army Reserve and National Guard units do not allow units to send more soldiers to language courses at the command's language schools and unit programs and at the Defense Language Institute. The official said that this issue may become more of a concern in fiscal year 2004, when the U.S. Army Recruiting Command will no longer fund the pay and allowances for initial-entry reserve soldiers going into civil affairs and psychological operations positions to attend the Defense Language Institute. The official said, however, that these proficiency pay and funding issues are not limited to foreign language training but are broader DOD issues that affect reserve and guard personnel throughout the military.

These access constraints have prevented large numbers of SOF personnel from getting the necessary training (both initial and sustainment training) and taking the annual tests that are necessary to qualify in their language(s). As table 2 shows, for the quarter ending in March 2003, more than 11,200 SOF personnel, or 93 percent of the 12,116 personnel who had a language requirement, needed to take either initial or sustainment training.
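To make the proration arithmetic concrete, the following minimal Python sketch computes the monthly proficiency pay under an assumed rule of 1/30 of the active-duty rate per paid training day, with four paid drill days in a typical reserve month. These assumptions are ours, chosen only because they reproduce the $100 and $13.33 figures reported to us; they are not drawn from DOD pay regulations.

    # Illustrative only: prorated foreign language proficiency pay.
    # Assumes proration at 1/30 of the active-duty monthly rate per paid
    # training day; actual DOD pay rules are more detailed.
    ACTIVE_DUTY_MONTHLY_RATE = 100.00  # level 2 proficiency, per SOFLO

    def prorated_monthly_pay(training_days: int) -> float:
        """Prorate the monthly rate by paid training days (hypothetical rule)."""
        return round(ACTIVE_DUTY_MONTHLY_RATE * training_days / 30, 2)

    print(prorated_monthly_pay(30))  # active duty: 100.0
    print(prorated_monthly_pay(4))   # reserve/guard drill month: 13.33

Under this assumed rule, a reserve soldier would need a full 30 paid days in a month to match the active-duty rate, which underscores the disparity the soldiers described to us.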
According to a SOFLO official, the statistics in table 2 may be higher than usual because of recent deployments to the Middle East and because of some administrative underreporting. Earlier quarters in 2002 show that about 75 percent of SOF personnel required training. As table 2 also indicates, most of the training needs for Navy SOF personnel were for initial language acquisition (83 percent of 1,128), while for Army and Air Force SOF members, the training needs were primarily for sustainment (85 and 64 percent, respectively).

Reflecting this trend, the number of SOF personnel who have taken a proficiency test and have qualified in their respective language(s) within the last 12 months is low. As table 3 shows, in each quarter since the quarter ending September 2002, fewer than 25 percent of all Army, Navy, and Air Force SOF personnel with language requirements had been tested within the last 12 months and had met or exceeded the required proficiency to qualify in their respective language(s), and the percentage decreased in each subsequent quarter. While acknowledging some administrative underreporting of data, a SOFLO official attributed the low qualification levels to the longer and more frequent deployments that hinder SOF personnel from getting the training they need to take and pass the language tests. The official said that the proficiency goal varies by unit. In the command's draft foreign language strategy, a unit's goal is expressed as the percentage of its personnel meeting the language requirement; for the largest groups of SOF personnel requiring language skills, the goals are 80 and 50 percent, respectively, for U.S. Army Special Operations Command active-duty and reserve component units. The proficiency goal for U.S. Naval Special Warfare Command and U.S. Air Force Special Operations Command units is 50 percent.

According to a SOFLO official, the number of SOF personnel annually tested in their respective language(s) could be increased if more certified oral testers were available to administer the Oral Proficiency Interview, if the scheduling of these tests were more flexible, and if the services allowed greater use of these tests for language qualification. While most SOF personnel qualify in their languages by taking the Defense Language Proficiency Test, an Oral Proficiency Interview can also be used when the Defense Language Proficiency Test is not available in a given language. The SOFLO official stated that SOF prefers the oral test when it can be used because of the importance placed on verbal skills in conducting SOF missions. However, the certified oral testers, who are normally members of the Defense Language Institute's teaching staff, are sometimes unavailable because they are teaching or performing other primary duties. Coordinating the schedules of the institute's staff and the SOF members to conduct the tests is also difficult. For example, while reserve and guard members are primarily available to take the tests on weekends during their unit's drill time, it is not always possible for the institute to schedule the two testers that are required to administer the test in a given language during that same time. Additionally, the SOFLO official stated that a draft Department of the Army language regulation would allow use of the oral test even if a Defense Language Proficiency Test exists for a given language. The official said that SOFLO is working with the Navy and the Air Force to make similar changes to their language regulations.
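As a rough illustration of how a unit's standing against these qualification goals might be computed from personnel records, the short Python sketch below derives the share of language-coded members who tested within the last 12 months and met the required proficiency, and compares it with a goal. The record layout, field names, and sample figures are hypothetical and are not drawn from SOFLO's database design.

    # Hypothetical check of a unit's language qualification rate against
    # the draft-strategy goals described above (80 or 50 percent).
    GOALS = {"USASOC active": 0.80, "USASOC reserve": 0.50,
             "NAVSPECWARCOM": 0.50, "AFSOC": 0.50}

    def qualification_rate(personnel):
        """Share of language-coded members tested in the last 12 months
        who met or exceeded their required proficiency."""
        coded = [p for p in personnel if p["language_required"]]
        qualified = [p for p in coded
                     if p["tested_within_12_months"] and p["met_proficiency"]]
        return len(qualified) / len(coded) if coded else 0.0

    # Invented three-member unit: only one member is fully qualified.
    unit = [
        {"language_required": True, "tested_within_12_months": True, "met_proficiency": True},
        {"language_required": True, "tested_within_12_months": False, "met_proficiency": False},
        {"language_required": True, "tested_within_12_months": True, "met_proficiency": False},
    ]
    rate = qualification_rate(unit)
    print(f"{rate:.0%} qualified; goal met: {rate >= GOALS['USASOC reserve']}")

A commandwide database of the kind SOFLO is developing would make this sort of quarterly roll-up straightforward across the service components.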
As DOD places greater emphasis on the capabilities of special operations forces, especially those related to counterterrorism, command officials told us that these forces are unlikely to experience any change in the frequency or length of their deployments. Although command officials said they are still unsure about the impact of these changes on SOF language needs, the problems of access are likely to continue.

According to SOFLO officials, some of the accessibility challenges may be addressed by the development or expanded use of distance/distributive-training tools, such as Internet-based training, multimedia technologies, and SCOLA foreign language broadcasts. While the new B.I.B. contract provides additional flexibility and training options, it focuses primarily on traditional methods of delivering language training, such as classroom training, one-on-one tutoring, and total-immersion training. This type of live, person-to-person instruction is the preferred method for most language learning. However, distance/distributive-learning tools, particularly those that deliver on-demand "anytime, anywhere" training, offer options that can be effectively adapted to the training needs of SOF personnel.

Distance/distributive learning encompasses a wide range of delivery methods, including video tele-training, computer conferencing, and correspondence courses. In recent years, DOD has sought to develop the next generation of distance/distributive learning—advanced distributed learning—which expands the range of options for providing DOD personnel with access to high-quality education and training, tailored to individual needs and delivered cost-effectively, whenever and wherever it is required. Advanced distributed learning includes Internet-based instruction, simulation, integrated networked systems, and digital knowledge repositories. DOD's March 2002 Training Transformation Strategy emphasizes the use of such learning methodologies to ensure that training is readily available to both active and reserve military personnel, regardless of time and place. Table 4 shows the continuum of learning delivery methods from classroom to advanced distributed learning.

SOFLO officials have begun evaluating some of the distance/distributive-learning options for language training that DOD has been developing for its own language-training programs. They told us that some of these efforts might be adaptable to the SOF program, as shown in the following:

The Defense Language Institute, in collaboration with the National Cryptologic School, the Foreign Service Institute, and the National Foreign Language Center, is developing an Internet-based learning support system, called LangNet, which provides language learners and teachers with access to on-line language materials. The Defense Language Institute is also expanding its video tele-training capabilities to provide students located throughout the world with real-time language instruction.

The U.S. Army Intelligence Center at Fort Huachuca, Arizona, is leading an initiative called the Broadband Intelligence Training System, or BITS, to use commercial broadband technology as a way to provide individuals with Internet-based tele-training at the unit or at home. SOFLO officials believe that this distance-learning tool shows promise for delivering on-demand courseware in various languages with minimal technology requirements and for being effective for initial acquisition training.
The Defense Advanced Research Projects Agency is developing a language-training simulation, which may be useful when speech-recognition software hurdles are resolved. SOFLO also wants to expand the availability of individual multimedia tools, e.g., CD-ROM and DVD media and players, so that SOF personnel can use such tools at any location. Additionally, the Army's John F. Kennedy Special Warfare Center and School at Fort Bragg, North Carolina, is developing computer-based language courses that can be accessed through an Army learning site or through correspondence.

Distributive learning was the principal theme of the command's annual SOF language conference in August 2003, and SOFLO provided attendees with information on various language-oriented initiatives. A SOFLO official told us that distance/distributive-learning approaches are most beneficial for providing individuals who already have some language proficiency with sustainment or enhancement training. While useful, these approaches are often not considered the best options for individuals who need initial acquisition language training, for which person-to-person interaction is most desired. The official said that SOFLO is still in the early stages of evaluating and determining which distance/distributive-learning options are best suited to its program and what resources it will need to incorporate them.

While the U.S. Special Operations Command has taken several recent actions to begin improving the delivery of language training and the management of its foreign language program, these actions have been taken without the benefit of a cohesive management framework combined with strategic-planning tools. At the forefront of the recent actions is a major shift in the way that the program provides language training for active-duty, reserve, and guard SOF personnel in the Army, Navy, and Air Force. Rather than using multiple contractors, the command has consolidated all of the training under a single contractor to provide a standardized curriculum and standardized training materials, more flexible delivery mechanisms, and consistent monitoring of student and teacher performance. These ongoing management actions address a wide range of issues, including the need for more coordination and communication within the program, the creation of a database to track language proficiencies and training requirements, and better utilization of other national language assets. However, because the program has not yet issued a strategy and developed the necessary strategic-planning tools (a strategic plan with an associated performance plan and reports) to carry it out, the value and impact of these disparate actions on the program as a whole are difficult to evaluate.

As a first step, the command could issue a strategy for meeting SOF language requirements to establish its vision for language training across the command. As a second step, the command could use the strategic vision to develop the necessary strategic-planning tools to guide the program in the future. Such strategic planning, with the support of top leadership, would allow the program to determine what actions are needed to meet its overall goals and objectives; ensure that these actions are well integrated with each other; identify key target dates, priorities, and the resources needed to undertake them; develop performance measures to assess their progress and effectiveness; identify corrective actions; and better manage risk.
This planning also should be aligned with DOD's overall human capital efforts to more effectively address its personnel challenges. Without a cohesive management framework based on strategic planning, the program risks losing the momentum it has achieved so far and failing to meet the growing needs of special operations forces for increasingly critical foreign language skills.

Despite continuing challenges in accessing training, the development of distance/distributive learning promises to offer SOF personnel greater access to language resources. While SOF personnel are often unable to take advantage of traditional, instructor-based language training because of long deployments and geographical dispersion, they could benefit from distance/distributive-training approaches that offer more flexibility and accessibility, including on-demand, "anytime, anywhere" options. The use of distance/distributive learning would also provide a good complement to the training services offered by the command's new contract. The command has an opportunity to support, with participation and resources, several promising DOD distance/distributive-learning initiatives now under way.

Also, DOD could consider expanding the use and availability of oral proficiency interview testing to provide additional opportunities for SOF personnel to test and qualify each year in their respective language(s). DOD could also consider changing the amount paid to Army Reserve and National Guard soldiers for foreign language proficiency to give them more incentive to maintain and improve their language skills, and it could provide more pay and allowance funds so that more of these soldiers can attend language schools and pursue other venues for language training. Such changes might provide greater assurance that Army Reserve and National Guard soldiers take advantage of current language training and of training that becomes available through the use of distance/distributive learning.

To strengthen the management and delivery of foreign language training for special operations forces, we recommend that the Secretary of Defense direct the Commander of the U.S. Special Operations Command to (1) adopt a strategy for meeting special operations forces' foreign language requirements and develop the necessary strategic-planning tools (a strategic plan with an associated performance plan and reports) to use in managing and assessing the progress of its foreign language program and to better address future human capital challenges and (2) incorporate distance/distributive-learning approaches into the program to improve special operations forces' access to language training and, if additional resources are required, request them. In addition, the Secretary of Defense should evaluate current (1) foreign language proficiency pay rates and (2) pay and allowance funding levels for Army Reserve and National Guard personnel to determine whether changes are needed to give these personnel a greater incentive to undertake language study and to allow more personnel to attend language schools and other training venues. Furthermore, the Secretary of Defense should examine options for increasing the use and availability of oral proficiency foreign language testing to provide additional opportunities for SOF personnel to test and qualify in their respective languages.

In written comments on a draft of this report, DOD concurred with all but one of our recommendations. DOD's comments are reprinted in appendix IV.
DOD did not agree with our recommendation that the U.S. Special Operations Command adopt a strategy and develop strategic-planning tools to strengthen the management and delivery of foreign language training for special operations forces. DOD stated in its comments that the command's current draft of a SOF language strategy is in its infancy and needs to be properly reviewed by various DOD organizations before the Secretary of Defense could direct its adoption. Although nothing in our draft report was meant to suggest that the draft language strategy should be implemented without proper review, we clarified this recommendation to state that the command adopt "a strategy," rather than any particular draft of a strategy. While we recognize that it may take some time for the command to prepare and approve such a document, we would note that the command has had a longstanding internal requirement, dating to 1998, for the program to have such a strategy. In its comments, DOD did not address the second part of the recommendation, which called for the development, in tandem with a strategy, of strategic-planning tools to use in managing and assessing the program's progress and addressing future human capital challenges. We continue to believe that the timely adoption of both a strategy and planning tools is an essential step for ensuring the effective management of the SOF foreign language program.

DOD concurred with our other recommendations, specifically that the command incorporate distributed-learning approaches into its SOF foreign language training; that the Secretary of Defense evaluate the current foreign language proficiency pay rates and pay and allowance funding levels for Army Reserve and National Guard personnel; and that the Secretary examine options to increase the use and availability of oral proficiency testing.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Commander of the U.S. Special Operations Command; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (757) 552-8100. An additional GAO contact and other staff members who made key contributions to this report are listed in appendix V.

In conducting our review, we focused on the foreign language training that the U.S. Special Operations Command (the command) and its service component commands in the Army, Navy, and Air Force provide for special operations forces (SOF) personnel. This training is offered to active-duty, reserve, and National Guard SOF personnel who have foreign language proficiency requirements. We discussed SOF language issues with a variety of officials at the Department of Defense (DOD), service headquarters offices, the command's headquarters offices, the Special Operations Forces Language Office (SOFLO) and service component commands, the Defense Language Institute, and other stakeholders that provide or use the command's language training. The organizations and offices that we contacted during our review are listed in table 5. To assess the command's recent actions to improve the management and delivery of its SOF foreign language training, we obtained documents and spoke with various stakeholders who use or support the training.
In particular, we talked with officials at SOFLO about their responsibilities and the recent actions they have undertaken for the SOF language program. We reviewed DOD and command guidance, policies, speeches, reports, and other documents to increase our understanding of the program's history and issues. We spoke with individuals in active-duty, reserve, and National Guard SOF units to learn their perspectives on obtaining language training and on achieving and retaining language proficiencies. Specifically, we did the following:

We discussed the command's new language services contract with command contracting officials and officials at each of the service components. We visited the contractor, B.I.B. Consultants, to discuss its use of teaching methodologies and management strategies to implement the contract. To obtain information about the first 11 months of language training (October 2002-August 2003) under the new contract, we (1) attended the command's first quarterly contract reviews in March and August 2003; (2) discussed classes and other training activities with command and service component officials, B.I.B. Consultants and Berlitz International representatives, and language instructors and SOF students; and (3) conducted analyses of student end-of-course evaluations and proficiency results.

We talked with command headquarters and SOFLO officials about the command's progress in assessing the SOF language requirements and in changing the way it communicates and coordinates with its various stakeholders (e.g., via an annual conference and an Internet-based Web site). We attended the command's 2003 language conference. Although we reviewed the process for determining SOF language requirements, we did not examine the specific criteria and rationale for decisions made for those requirements (e.g., languages, number of personnel needed, and proficiency levels required for units) in its recent assessment.

To determine the extent to which the SOF language program uses other national language-training assets, we obtained information from and met with officials at the Defense Language Institute, Satellite Communications for Learning (SCOLA), the Defense Advanced Research Projects Agency, and the Foreign Service Institute. We also attended a SCOLA language conference that focused on the use of its broadcasts to support government language programs.

To understand the use and merits of strategic planning and how it could benefit the SOF language program, we reviewed our prior work on strategic planning and strategic human capital management and the general management principles laid out in the Government Performance and Results Act of 1993.

In conducting our review of student end-of-course evaluations to determine students' satisfaction with classes taught by B.I.B. under the new contract, we requested student evaluations from the Army's John F. Kennedy Special Warfare Center and School for the first quarter of fiscal year 2003 and from the Naval Special Warfare Command's Group 1 for the second quarter of fiscal year 2003. The Army's school and the Navy's Group 1 provided evaluations from 11 (out of 22) classes and 3 (out of 3) classes, respectively. An Army school official told us that the contractor could not provide the evaluations for the other 11 classes we requested because the evaluations had been misplaced.
As a result, our evaluation results may not be fully representative of the views of all students in all classes because the missing evaluations may contain responses that differ from those provided to GAO. In conducting our analysis, we selected three questions from the student end-of-course evaluations that, in our judgment, provided an indication of the overall effectiveness of the course, the instructor's performance, and the usefulness of course materials. We also reviewed individual student proficiency scores from 22 initial acquisition classes conducted at the Army's school at Fort Bragg, North Carolina, to determine the performance of students in reaching end-of-course proficiency goals.

In identifying ways for the command to deal with challenges that limit accessibility to its foreign language-training resources, we interviewed officials at SOFLO and the service component commands to understand the training requirements and resources and to determine the challenges SOF personnel face in gaining access to language training. We examined information from SOFLO's language database to assess the extent to which more frequent and longer deployments may affect SOF personnel's access to the training they need to pass exams and qualify in their particular languages. We also talked with more than 50 members of Army Reserve and National Guard units to better understand their particular difficulties and limitations in getting training. We spoke with officials at the Defense Language Institute and visited their facilities to obtain information about their ongoing efforts to develop distance/distributive-learning and advanced distributed-learning methods. We also met with Defense Advanced Research Projects Agency officials to discuss how their new technologies could support SOF language-training needs. We performed our review from October 2002 through July 2003 in accordance with generally accepted government auditing standards.

The special operations forces foreign language-training program uses the foreign language proficiency scale established by the federal Interagency Language Roundtable. The scale ranks individuals' language skills in terms of their ability to listen, read, speak, and write in a foreign language. The scale has six basic proficiency levels, ranging from zero to 5; level zero indicates no language capability, and level 5 indicates full proficiency in the language. A plus (+) designation is added if the proficiency substantially exceeds one skill level but does not fully meet the criteria for the next level. Table 6 shows the language capabilities required for each proficiency level.

Language proficiency levels are established for SOF personnel during the U.S. Special Operations Command's biennial assessment of language requirements, which is done in conjunction with geographic unified commanders. The assessment identifies the languages, the proficiency levels, and the number of individuals needed with these skills in the commanders' geographic regions. Table 7 shows the required (minimum) and the desired proficiency levels for each service component and specialty. For example, Army SOF members who work in civil affairs and psychological operations, where they frequently interact with local populations, require a proficiency level of 2 for listening, reading, and speaking. Army Special Forces, on the other hand, require only a level 0+ to perform their missions, although a higher standard is desired.
In accordance with its language services contract with the U.S. Special Operations Command, B.I.B. Consultants is providing various types of training for special operations forces personnel at each of the command's service components. As table 8 shows, this training ranges from language instruction for beginning students with no foreign language proficiency to instruction for students who have already acquired some proficiency. It consists of language study conducted in a traditional classroom setting; one-on-one instructor/student training; and total-immersion training, where students practice their language(s) in a live or virtual environment. The training also includes an orientation to the customs, culture, and common phrases of the area where the student's language is used.

During the first 9 months (October 2002 to July 2003) of the contract, B.I.B. training varied at each of the SOF service components. For example, from October 2002 to July 2003, B.I.B. conducted over 40 initial acquisition language classes for more than 500 students in 13 different languages at the Army's John F. Kennedy Special Warfare Center and School at Fort Bragg, North Carolina. From January through February 2003, B.I.B. also provided initial acquisition language training for 10 students in three languages (3 classes) at the Naval Special Warfare Command's Group 1 at Coronado, California. According to the Air Force command language program manager, B.I.B. is expected to start providing initial acquisition language training for Air Force SOF personnel at Hurlburt Field, Florida, where the Air Force recently established a language-training lab. According to a B.I.B. contract manager, B.I.B. had also provided 16 immersion sessions in various languages for students in each of the service components as of the end of July 2003 (9, 6, and 1, respectively, for the Navy, Army, and Air Force).

According to a Special Operations Forces Language Office official, students' proficiency scores after completing B.I.B.-taught classes at the Army's school are about the same as those achieved under prior contracts. Additionally, six students in an accelerated pilot class achieved scores that met or exceeded the minimum proficiency level. Our review of students' proficiency scores from all 22 initial acquisition classes, including the Spanish pilot course that began at the Army school during the first quarter of fiscal year 2003, showed that 11 of the 171 students (6 percent) did not meet the 0+ requirement for listening and 4 students (2 percent) did not meet the 0+ requirement for reading. (See fig. 1.) However, all of those students did meet the alternate goal, which is to attain at least a 0+ on an Oral Proficiency Interview. Although only a small number of Navy SOF personnel have received training under the B.I.B. contract, a Naval Special Warfare Command Group 1 official said that students' proficiency scores from the first three B.I.B. initial acquisition language classes (a total of 10 students), conducted from January through February 2003, exceeded the results of classes conducted under previous contracts.

We analyzed student end-of-course evaluations for about half of the initial acquisition classes offered at the Army's school during the first quarter of fiscal year 2003. The evaluations were designed and administered by B.I.B. Students were asked to rate their satisfaction with (1) their progress, (2) the instructor, and (3) the usefulness of the materials. As table 9 shows, most students said they were extremely or very satisfied with their instructor's performance.
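As a rough illustration of the pass-rate arithmetic behind proficiency results like those in figure 1, the short Python sketch below tallies, for each skill, how many students fell below the required level. The scores are invented, and the 0+ requirement is modeled as a simple numeric cutoff for illustration only; the actual test scales are more nuanced.

    # Illustrative pass-rate arithmetic for end-of-course proficiency goals.
    # Scores are invented; the "0+" requirement is modeled as a 0.5 cutoff.
    REQUIRED = 0.5  # stands in for the 0+ proficiency requirement

    # (listening, reading) score pairs for a hypothetical class.
    scores = [(1.0, 1.0), (0.5, 1.0), (0.0, 0.5), (1.0, 0.0), (0.5, 0.5)]

    for skill, idx in (("listening", 0), ("reading", 1)):
        misses = sum(1 for s in scores if s[idx] < REQUIRED)
        print(f"{misses} of {len(scores)} students "
              f"({misses / len(scores):.0%}) did not meet 0+ in {skill}")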
As table 9 also shows, most students expressed some satisfaction with their progress and with the usefulness of the course materials. However, 13 of the 77 students whose evaluations we reviewed indicated dissatisfaction with their progress, and 17 of the 77 indicated dissatisfaction with the usefulness of the course materials. At the Army school, the Army provides the course materials, as required under the B.I.B. contract.

We also analyzed student end-of-course evaluations for three classes taught by B.I.B. at the Naval Special Warfare Command's Group 1, Coronado, California, during the second quarter of fiscal year 2003. Unlike the Army, which used B.I.B.'s evaluation, the Navy designed and administered its own evaluation. In these evaluations, students were asked to evaluate their courses in three areas: the sufficiency of instruction time, the instructor's ability to teach effectively, and the quality of the instructional materials. As table 10 indicates, all responses rated the three areas as "excellent or good," with the exception of the Indonesian class, where two of the three students rated the quality of materials as "average." Although only one of the three classes used B.I.B. course materials as required by the contract, classes that started in July 2003 are using the B.I.B.-provided materials. We did not review student evaluations at the U.S. Air Force Special Operations Command because no classes were completed during the time we conducted our work.

In addition to the individual named above, Mark J. Wielgoszynski, Marie A. Mak, Corinna A. Wengryn, Nancy L. Benco, and Deborah Long made key contributions to this report.
Of the 44,000 special operations forces (SOF) personnel who perform difficult, complex, and sensitive military missions on short notice anytime and anywhere in the world, more than 12,000 (28 percent) have a foreign language requirement to operate in places where English is not spoken. In the Senate Report on the Fiscal Year 2003 National Defense Authorization Act, Congress mandated that GAO review SOF foreign language requirements and training. In this report, we (1) assess the U.S. Special Operations Command's recent actions to improve the management of the SOF foreign language program and the delivery of training and (2) identify ways for the command to deal with ongoing challenges that limit SOF personnel's access to language-training opportunities. Recent actions taken by the U.S. Special Operations Command are starting to address some long-standing problems with the management of the SOF foreign language program and the delivery of language training. In September 2002, the command consolidated all training under a single contractor to provide a universal, standardized curriculum and a range of delivery mechanisms for Army, Navy, and Air Force SOF components. Initial assessments suggest that the contractor's offerings are meeting contract expectations. In other actions, the program is completing an overdue assessment of SOF language requirements, developing a database of language proficiencies and training, and finding ways to take advantage of other national language-training assets. While promising, these ongoing actions are taking place without the benefit of a cohesive management framework that incorporates a strategy and strategic planning to guide, integrate, and monitor the program's activities. Without such a framework, the program risks losing its current momentum and failing to meet the new language-training needs that SOF personnel are likely to have as they take on expanded roles in combating terrorism and other military operations. The SOF foreign language program continues to face challenges, such as more frequent and longer deployments, that limit personnel's access to language training. Army Reserve and National Guard SOF members face additional difficulties in gaining access to centrally located training because of geographical dispersion and part-time status; they also have lower monetary incentives to acquire language proficiencies and fewer training opportunities. As a result, most SOF personnel have been unable to take needed training or required tests to qualify in their respective language(s). To address these challenges, program officials are looking into distance/distributive-learning approaches, which offer "anytime, anywhere" training that would be highly adaptable to SOF personnel needs, but they are still at an early stage in their evaluations.
In order to meet our mandate to conduct bimonthly reviews and prepare reports on selected states' and localities' use of funds, we have selected 16 states and the District of Columbia to track over the next few years to provide an ongoing longitudinal analysis of the use of funds under the Recovery Act. These states contain about 65 percent of the U.S. population and are estimated to receive about two-thirds of the intergovernmental grant funds available through the Recovery Act. In addition to reporting on the core group of 16 states, we will review the recipient reports from all 50 states. These recipient reports are to include information on funds received, the amount of Recovery Act funds obligated or expended on projects or activities, the projects or activities for which funds were obligated or expended, and the number of jobs created or preserved as a result of Recovery Act funds. The Recovery Act also included a number of specific mandates on which GAO must take action between April 2009 and February 2014.

Our first bimonthly report, issued two weeks ago, covers the actions of selected states and localities under the Recovery Act as of April 20, 2009. About 90 percent of the $49 billion in Recovery Act funding being provided to states and localities in fiscal year 2009 will flow through health, transportation, and education programs. (See app. I for federal programs that are receiving Recovery Act funding and are administered by states and localities.) Our first report focused particularly on Recovery Act funds for the three largest programs in these categories—Medicaid Federal Medical Assistance Percentage grant awards, highway infrastructure investment, and the Department of Education's State Fiscal Stabilization Fund. We reported on the status of states' activities related to these three programs. The report contains separate appendixes on each of the 16 states and the District of Columbia that discuss the plans and uses of funds in these three major programs as well as in selected other programs that are receiving Recovery Act funds.

The report also makes several recommendations to the Office of Management and Budget (OMB) directed toward improving accountability and transparency requirements; clarifying the Recovery Act funds that can be used to support state efforts to ensure accountability and oversight; and improving communications with Recovery Act fund recipients about when funds become available for their use and when federal guidance is modified or newly released. OMB concurred with the overall objectives of our recommendations and plans to work with us to further accountability for these funds. In consultation with the Congress, and in exercising our general statutory authority to evaluate the results of government programs and activities, we also will continue to target programs for additional review using a risk-based approach and will incorporate reviews of Recovery Act funding, where practicable, when we are examining base programs.

There are many implementation challenges to ensuring adequate accountability and efficient and effective implementation of the Recovery Act. Experience tells us that the risk of fraud, waste, and abuse grows when billions of dollars are going out quickly, eligibility requirements are being established or changed, new programs are being created, or a mix of these conditions exists.
This suggests the need for a risk-based approach that targets for early attention specific programs and funding structures based on known strengths, vulnerabilities, and weaknesses, such as a track record of improper payments or contracting problems. Of particular concern to this Subcommittee will be the extent to which Recovery Act R&D funding is effectively expended, and we discuss the initial implementation of R&D funding below.

Regular and frequent GAO coordination with federal IGs, the Board, and state and local government auditors is a critical component of our work to ensure effective and efficient oversight. With several early coordination meetings, we laid the foundation for this ongoing coordination soon after the act was passed. First, I reached out to the IG community and, with Ms. Phyllis Fong, the Chair of the Council of Inspectors General on Integrity and Efficiency, hosted an internal coordination meeting on February 25, 2009, with Inspectors General or their representatives from 17 agencies. It was a very productive discussion in which we outlined coordination approaches going forward. In addition, soon after the President appointed him as Chair of the Board on February 23, 2009, I talked with Mr. Earl Devaney, former Inspector General at the Department of the Interior, to begin coordinating such efforts as the audit of the U.S. government's consolidated financial statements, for which GAO relies on the IGs' financial audits of their departments and entities across the government. I am confident that we will coordinate our respective efforts well, both with the IG community and with the Board.

We also reached out to the state and local audit community and participated in initial coordination conference calls. The first call, on February 26, 2009, included state auditors or their representatives from 46 states and the District of Columbia. The next day, we held a similar discussion with auditors from many localities across the country. State and local auditors perform very important oversight functions within their jurisdictions and have unique knowledge about their governments; we are continuing to coordinate with them closely as we carry out our responsibilities.

It is also important for us to coordinate with OMB, especially with regard to the reporting requirements and other guidance to fund recipients and to what information is to be collected in order to adequately evaluate how well the Recovery Act achieves its objectives. We participate in weekly coordination conference calls with OMB, the Board, IGs, and state and local auditors. The impetus to schedule these calls was a letter that OMB Director Peter Orszag and I received from the National Association of State Auditors, Comptrollers, and Treasurers; the National Association of State Budget Officers; the National Association of State Chief Information Officers; and the National Association of State Procurement Officials. This letter expressed their strong interest in coordinating reporting and compliance aspects of the Recovery Act. During these calls, we provide updates on our Recovery Act activities, and OMB provides updates on its actions. One important outcome of these calls thus far has been to call OMB's and the Board's attention to the need to clarify certain reporting requirements. For example, the Recovery Act requires federal agencies to make information publicly available on the numbers of jobs created and retained as a result of Recovery Act-funded activities.
Our work in the states showed that local officials needed definitions of how to capture these data, and the state and local auditors were able to corroborate what we had heard. We included a recommendation to OMB in our first bimonthly report on the Recovery Act actions of selected states and localities to clarify this requirement, and OMB generally concurred with this recommendation.

In addition to these regular calls, we are actively participating in discussions with state and local organizations to further foster coordination within the accountability community. These organizations include the National Association of State Auditors, Comptrollers, and Treasurers; the National Association of State Budget Officers; the National Association of State Procurement Officials; the National Association of State Chief Information Officers; the National Governors Association; the National Conference of State Legislatures; and the National League of Cities. For example, in March 2009, we participated—along with a state auditor, a local auditor, and an inspector general—in a webinar hosted by the National Association of State Auditors, Comptrollers, and Treasurers for its members.

As Acting Comptroller General, I also serve as the Chairman of the National Intergovernmental Audit Forum (NIAF). The NIAF is an association that has existed for over 3 decades as a means for federal, state, and local audit executives to discuss issues of common interest and share best practices. NIAF's upcoming May meeting will bring together these executives, including OMB officials, to update them on the Recovery Act and provide another opportunity to discuss emerging issues and challenges. In addition, a number of Intergovernmental Audit Forum meetings that seek to do the same have been scheduled at the regional level, and this regional coordination is directly contributing to our work in the states. For example, GAO's western regional director recently made a presentation at the Pacific Northwest Audit Forum regarding GAO's efforts to coordinate with state and local officials in conducting Recovery Act oversight. In conjunction with that forum and at other related forums, she has regularly participated in meetings, panel discussions, and break-out discussions with the principals of state and local audit entities to coordinate efforts to provide oversight of Recovery Act spending.

The work of our 16 state teams that resulted in our first bimonthly report on the actions of selected states and localities under the Recovery Act also exemplifies the level of coordination we are undertaking with the accountability community. During the conduct of our work, we collected documents from and interviewed State Auditors, Controllers, and Treasurers; state Inspectors General; and other key audit community stakeholders to determine how they planned to conduct oversight of Recovery Act funds. We also coordinated as appropriate with legislative offices in the states concerning state legislatures' involvement with decisions on the use of Recovery Act funds. In addition, we relied on reporting and data collected from the Federal Audit Clearinghouse, which operates on behalf of OMB to assist oversight agencies in obtaining audit information on states, local governments, and nonprofit organizations.
Illustrative examples follow:

Our team working in Georgia coordinated closely with that state's State Accounting Office, State Auditor, and Inspector General, among others, to understand their plans for mitigating risks and overseeing Recovery Act funding. For example, the Inspector General developed a database specifically to track Recovery Act complaints and a public service announcement to inform the public how to report fraud, waste, and abuse.

Our team working in North Carolina coordinated with the State Auditor regarding that state's plans to ensure that Recovery Act funds are segregated from other federal funds coming through traditional funding streams to help ensure accountability and transparency.

Our team working in New Jersey coordinated with the state's new Recovery Accountability Task Force, which will review how state and local agencies spend Recovery Act funds as well as provide guidance and best practices on project selection and internal controls. As part of the Task Force, the state Comptroller has responsibility for coordinating all of the oversight agencies within the state.

Our team working in California is coordinating with the state's newly appointed Recovery Act Inspector General, who is seeking to make sure that Recovery Act funds are spent as intended and to identify instances of waste, fraud, and abuse. In addition, the team relied on the work of the State Auditor, whose most recent single audit identified numerous material weaknesses associated with programs included in GAO's review.

Provisions in GAO's authorizing statute, the Whistleblower Protection Act, and the Recovery Act, as well as a dedicated fraud-reporting hotline, facilitate our ability to evaluate allegations of waste, fraud, and abuse in the federal government. Under our authorizing statute, we have authority to access information needed for the effective and efficient performance of our reviews and evaluations. Subject to certain limited exceptions, all agencies must provide the Comptroller General access to information he requires about the duties, powers, activities, organization, and financial transactions of an agency, including for the purpose of evaluating whistleblower complaints. Moreover, the Recovery Act applies certain federal whistleblower protections to the employees of recipients of Recovery Act funds. The Whistleblower Protection Act prohibits personnel actions taken against federal employees in reprisal for the disclosure of evidence of a violation of any law, rule, or regulation; gross mismanagement; a gross waste of funds; an abuse of authority; or a substantial and specific danger to public health or safety. Similarly, the Recovery Act prohibits reprisals against employees of nonfederal recipients of Recovery Act funds, but its protections relate only to disclosures regarding the use of Recovery Act funds. The Recovery Act provides that employees of a nonfederal entity receiving a contract, grant, or other payment funded in whole or in part by Recovery Act funds may not be discharged, demoted, or otherwise subject to discrimination as a reprisal for disclosing to the Board, an IG, the Comptroller General, the Congress, a state or federal regulatory or law enforcement agency, the employee's supervisor, a court or grand jury, or a federal agency information about mismanagement, waste, danger to public health or safety, or a violation of law regarding the use of Recovery Act funds.
People who believe they have been subject to reprisal may submit a complaint to the appropriate inspector general for investigation and seek redress through the courts. Table 1 outlines the coverage of the Whistleblower Protection Act and Recovery Act provisions.

Section 902 of the Recovery Act gives us additional authority to examine the relevant records of contractors, subcontractors, or state or local agencies administering contracts that are awarded with Recovery Act funds. We may also interview officers and employees of such contractors or their subcontractors as well as officers or employees of any state or local agency administering such transactions. This additional authority could be applied to examining allegations made by whistleblowers.

As part of our normal operations, we maintain a fraud-reporting service. Anyone can report evidence of fraudulent activity to FraudNet through an automated answering system, a dedicated fax line, a dedicated e-mail address, a dedicated mailing address, or an online form accessible from our Web site at www.gao.gov. Information about how to provide evidence of fraud is available on our Web site at http://gao.gov/fraudnet.htm and on the last page of every GAO report. After the Recovery Act was passed, we coordinated with the IG community to publicize the use of FraudNet as a means to solicit public input and gather information on potential instances of waste, fraud, and abuse in the allocation and spending of Recovery Act funds. We also issued a press release on March 30, 2009, which was cited by the national news media in articles about the Recovery Act. Over the past few months, FraudNet has received more than 25 allegations related to the misuse of Recovery Act, Troubled Asset Relief Program, or other related funds. These allegations are currently under review by GAO's Forensic Audits and Special Investigations (FSI) unit, a specialized team with many years of experience conducting fraud investigations. FSI coordinates with the IG community as appropriate to ensure that there is no duplication of investigative efforts across the federal government. Further, in cases where GAO determines that another agency is better positioned to perform an investigation, FSI will refer relevant information to the appropriate agency. Although it is too soon to discuss details of the allegations we have received or the status of ongoing investigations, we will continue to work with our partners in the IG community, with the appropriate law enforcement agencies, and with the Congress to ensure that all allegations are reviewed and investigated.

On March 19, 2009, we testified before this Subcommittee on our role in helping to ensure accountability and transparency for Recovery Act science R&D funding. Our statement identified over $21 billion in related funding appropriated to DOE; the National Institute of Standards and Technology (NIST) and the National Oceanic and Atmospheric Administration (NOAA) within the Department of Commerce; NSF; and NASA. As initial implementation of the Recovery Act unfolds, we are tracking these agencies' activities to plan for science R&D expenditures. Table 2 provides information on the status of these agencies' R&D-related Recovery Act funds as of April 28, 2009. To collect this information, we worked with agency officials and coordinated with agencies' IGs. As implementation of the act progresses, further evaluations will continue to be coordinated with agencies' IGs to prevent duplication and minimize any overlap in our work.
As table 2 shows, the status of agencies' R&D-related funding varies. Officials from each agency told us about the controls in place to ensure that their program plans are approved before funds are either apportioned by OMB or allotted by their agencies' CFOs. For example, officials from each agency told us they are following OMB's April 3, 2009, guidance for implementing the Recovery Act. OMB's guidance requires that agencies submit program plans justifying Recovery Act expenditures that include a program's objectives, funding, activities, types of financial awards to be used, schedule, environmental review compliance, performance measures, description of plans to ensure accountability and transparency, and a plan for monitoring and evaluation. In addition, this guidance requires that agencies submit the program plans to OMB for approval by May 1, 2009, and states that OMB will approve these program plans by May 15, 2009. Officials from NIST, NOAA, and NSF told us that their agencies' CFOs will not allot funds for obligation until the House Appropriations Subcommittee on Commerce, Justice, and Science has reviewed their program plans. DOE CFO officials told us that the CFO will allot apportioned funds after an internal DOE approval process, even if OMB has not yet approved program plans; however, officials said DOE programs cannot obligate funds until OMB program plan approval is complete. As of April 28, 2009, only DOE's Office of Science had obligated any funds for R&D project expenditures. These obligations, totaling $342 million, will support various construction, facilities disposition, and general plant projects at national laboratories, as well as procurement and installation of experimental equipment and instrumentation. (See app. II for additional details on each agency's planned uses of funds.) Related to the efforts of the four federal agencies to obligate the R&D funds, our April 29, 2009, report discussed our initial observations on improving grant submission policies that could help minimize disruptions to the grants application process during the Recovery Act's peak filing period. Our report was requested in response to two OMB memoranda to federal agencies stating that the existing Grants.gov infrastructure would not be able to handle the influx of applications expected as key Recovery Act deadlines approached. We found that at least 10 agencies will accept some or all applications outside of Grants.gov during the Recovery Act's peak filing period. For example, NSF and NASA are only accepting applications through their own existing electronic systems for some grants. We recommended that the Director of OMB take actions to increase the likelihood that applicants can successfully apply for grants during the Recovery Act's peak application filing period. Specifically, we recommended that OMB (1) ensure that an announcement discussing agency alternate submission methods, similar to that recently posted on Grants.gov, is posted in a prominent location on Recovery.gov and on all federal Web sites or in all documents where instructions for applying to Recovery Act grants are presented and (2) prominently post certain government policies for all grant applications submitted during the peak filing period for Recovery Act grants, notifying applicants that, among other things, if an application is deemed late, they will be notified of that outcome and given an opportunity to provide supporting documentation demonstrating that they attempted to submit the application on time.
OMB generally concurred with these recommendations. In addition to direct expenditures, the Recovery Act also includes tax provisions that benefit individuals and businesses. The Internal Revenue Service (IRS) recently published a fact sheet on 12 different tax credits available under the Recovery Act for various energy efficiency measures taken by homeowners and businesses as well as for qualified renewable energy producers. Some of these credits are new, and others are modifications of existing tax credits previously included in the tax code. As I testified in March 2009, one particular area that needs additional early attention is identifying the data to be collected concerning the use and results of the Recovery Act's various tax provisions. Accountability and transparency are perhaps easier to envision for the outlay portions of the stimulus package because the billions of dollars in tax provisions in the Recovery Act are considerably different from outlay programs in their implementation, privacy protections, and oversight. Most tax benefits are entirely administered by IRS, and all taxpayer information, including the identity of those using the benefits, is protected by law from disclosure. Further, unlike most outlay programs, IRS does not know who makes use of the tax benefit until after the fact, if then. While IRS previously collected information that may have been sufficient to evaluate the benefits of energy tax credits, IRS has not yet announced what information it will collect for the credits as revised or added by the Recovery Act. In closing, I want to underscore that we welcome the responsibility that the Congress has placed on us to assist in the oversight, accountability, and transparency of the Recovery Act. We will continue to coordinate closely with the rest of the accountability community and honor our ongoing commitment to promptly address information provided by whistleblowers. We are committed to completing our Recovery Act work on the timetable envisioned by the act and will keep the Congress fully informed as our plans evolve. Mr. Chairman, Representative Broun, and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions you may have. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Patricia Dalton, Managing Director, Natural Resources and Environment, at (202) 512-3841 or [email protected]. Key contributors to this testimony were Richard Cheston (Assistant Director), Divya Bali, Allison Bawden, Karen Keegan, Michelle Munn, and Barbara Timmerman. To update information on Recovery Act funding for R&D-related activities, we met with and interviewed Department of Energy (DOE), National Institute of Standards and Technology (NIST), National Oceanic and Atmospheric Administration (NOAA), National Science Foundation (NSF), and National Aeronautics and Space Administration (NASA) officials, and analyzed documentation they provided. We also reviewed publicly available information provided by the Office of Management and Budget (OMB), through the recovery.gov Web site, and agencies' own recovery Web sites. Finally, we coordinated with each agency's Inspector General (IG) to discuss the data we collected. We conducted this work in accordance with generally accepted government auditing standards.
DOE's program offices vary in the extent to which they have funds available to obligate for expenditure. A little more than 40 percent of DOE's R&D-related Recovery Act funding has been apportioned by OMB, and only DOE's Office of Science has obligated R&D project funds. OMB has not apportioned any funds to DOE's Office of Fossil Energy and has only apportioned minimal funds to its Loan Guarantee Program.

Energy Efficiency and Renewable Energy (EERE). The Recovery Act appropriated $2.5 billion to EERE for R&D activities related to alternative and renewable energy sources, such as biomass and geothermal. An additional $2.4 billion was appropriated for advanced transportation research focused on next-generation plug-in hybrid electric vehicles, their advanced battery components, and transportation electrification. OMB has apportioned all of EERE's appropriation, and DOE's Office of the Chief Financial Officer (CFO) has generally allotted the funds to support the R&D activities associated with vehicle technologies and electrification. EERE has issued a solicitation for grants, which closes May 13, 2009, to establish development, demonstration, evaluation, and education projects to accelerate the market introduction and penetration of advanced electric drive vehicles. In addition, EERE has issued a solicitation for grant proposals supporting the construction of U.S.-based manufacturing plants to produce batteries and electric drive components, which closes May 19, 2009.

Fossil Energy (FE). The Recovery Act appropriated $3.4 billion to FE for R&D-related activities, including funds to support a third round of competition under the Clean Coal Power Initiative; fossil energy R&D programs, such as fuel and power systems research or FutureGen; and competitive grants for carbon capture and energy efficiency improvement projects. As of April 28, 2009, OMB had not apportioned any of these funds to DOE, and thus no funds have been allotted, obligated, or expended. According to an FE official, OMB is unlikely to apportion funds to FE until after May 15, 2009, when its program plans are expected to be approved.

Science. The Recovery Act included a $1.6 billion appropriation for DOE's Office of Science (Science). Nearly all $1.6 billion appropriated has been apportioned by OMB to DOE without restriction, and the Secretary of Energy has announced priorities for $1.2 billion of these funds, including:

$248 million for major construction, modernization, infrastructure improvements, and needed decommissioning of facilities at national laboratories;

$330 million for operations and equipment procurement and installation at major scientific user facilities;

$277 million for competitive research collaborations on transformational basic science needed to develop alternative energy sources;

$90 million for core scientific research grants to be awarded to graduate students, postdocs, and Ph.D. scientists across the nation for applications of nuclear science and technology, and for alternative isotope production techniques; and

$215 million to accelerate construction of two experimental facilities.

Science has obligated $342 million to support various approved construction, infrastructure improvement, and facility decommissioning projects at national laboratories, as well as procurement and installation of experimental equipment and instrumentation. Table 3 describes Science's Recovery Act projects at its national laboratories, including those for which funding has already been obligated.
Advanced Research Projects Agency–Energy (ARPA-E). The Congress authorized the establishment of ARPA-E within DOE in August 2007. ARPA-E supports transformational energy technology research projects with the goal of enhancing the nation's economic and energy security. ARPA-E received its first appropriation of $400 million in the Recovery Act, which was soon followed by an additional $15 million in the Omnibus Appropriations Act, 2009. According to a DOE official, the Secretary of Energy signed a memorandum formally creating the new office on April 22, 2009, and designated an Acting Deputy Director until a presidential appointee is confirmed by the Senate. As of April 28, 2009, DOE's CFO had allotted $2 million in program direction funds to ARPA-E to hire employees, set up office space, and support requirements necessary to implement the provisions of the Recovery Act. In addition, ARPA-E issued its first competitive solicitation on April 27, 2009, to fund up to $150 million of high-risk, high-potential projects focused on innovative energy technologies. Project proposals are due June 2, 2009, and awards are generally expected to range from $2 million to $5 million. According to a DOE official, ARPA-E anticipates issuing more targeted solicitations associated with the remaining Recovery Act funds; however, the official said these solicitations are not likely to be issued until a Senate-confirmed appointee is in place to lead the organization.

Loan Guarantee Program (LGP). The Recovery Act included appropriations totaling $6 billion to LGP, which could support $60 billion in new loan guarantees, depending on the credit subsidy rate. LGP officials told us the program plans that they submitted to OMB on May 1, 2009, support new loan guarantees for renewable energy systems, electric power transmission systems, and leading-edge biofuel projects that are performing at the pilot or demonstration stage and that the Secretary of Energy determines are likely to become commercial technologies. In addition, the Secretary of Energy has announced a number of restructuring initiatives for the program, which, as we reported in July 2008, faces a number of challenges. Officials have indicated that 6 of the 11 applicants who responded to DOE's August 2006 solicitation for various types of innovative technology loan guarantees could be eligible for loan guarantees under Recovery Act terms. We are currently examining the status of LGP's efforts to solicit and review loan guarantee applications, including its efforts to use Recovery Act funds, and its progress in implementing the recommendations in our July 2008 report.

As of April 28, 2009, OMB had apportioned all $1.41 billion directly appropriated to NIST and NOAA for Recovery Act R&D-related activities. According to agency officials, funds have not yet been made available for obligation pending OMB and Congressional approval of program plans.

NIST. NIST plans to spend the $580 million it was directly appropriated to support competitive research grants, fellowships, and procurement of advanced research and measurement equipment and supplies. These funds are also planned to support a construction grant program for research science buildings, construction of new NIST facilities, and the reduction of the backlog of deferred maintenance for existing NIST facilities.
In addition, NIST will receive $10 million appropriated to DOE under the Recovery Act for work on the electricity grid and $20 million appropriated to the Department of Health and Human Services to create and test standards related to health security. According to one official, NIST is working with OMB to prepare solicitations and other grant-related documents, so the agency can quickly issue Recovery Act grant solicitations once its program plans are approved.

NOAA. The Recovery Act appropriated $830 million to NOAA for R&D-related construction and procurement activities, including support for research operations and facilities; construction and repair of facilities, ships, and equipment; and research to address gaps in climate modeling and to establish climate data records for research into the causes and effects of climate change and ways to mitigate it. NOAA has issued a competitive solicitation for up to $170 million in grants for shovel-ready projects to restore marine and coastal habitats. Applications were due on April 6, 2009. A NOAA official told us that NOAA is working with OMB to draft solicitations and other contract-related documents so the agency can quickly issue Recovery Act contract solicitations once its program plans are approved.

The Recovery Act appropriated $3 billion to NSF for R&D-related activities, including competitive research grants; major research instrumentation and equipment procurement and facilities construction; academic research facilities modernization; and education and human resources. NSF officials believe their Recovery Act funds can be obligated quickly once program plans are approved because, for example, $2 billion of the $3 billion will fund proposals that NSF's independent expert review panels have already deemed of merit but that NSF was not previously able to fund. Specifically, NSF officials have stated that these grants will be awarded by September 30, 2009, and NSF expects its Recovery Act funds will allow the agency to support an additional 50,000 investigators, postdoctoral fellows, graduate and undergraduate students, and teachers throughout the nation.

The Recovery Act appropriated $1 billion to NASA for expenditures on space exploration; earth science and climate research missions; adding supercomputing capacity; aeronautics activities, including aviation safety research, environmental impact mitigation, and activities supporting the Next Generation Air Transportation System; and restoration of facilities at the Johnson Space Center in Houston, Texas, damaged during Hurricane Ike in 2008. OMB has apportioned $50 million to support restoration work at the Johnson Space Center, and NASA has begun to issue requests for proposals for this restoration work. According to a NASA official, OMB has agreed with NASA on the funding priorities for the remaining $950 million appropriated, and funds will be apportioned once OMB approves NASA's program plans.
This testimony discusses GAO's efforts to coordinate with the accountability community--the Recovery Accountability and Transparency Board (the Board), the Inspectors General (IGs), and state and local government auditors--to help ensure effective and efficient oversight of American Recovery and Reinvestment Act (Recovery Act) funds. The Recovery Act assigns GAO a range of responsibilities, including bimonthly reviews of the use of funds by selected states and localities. Because funding streams will flow from federal agencies to the states and localities, it is important for us to coordinate with the accountability community. Also, on March 19, 2009, GAO testified before this Subcommittee about the more than $21 billion in Recovery Act funds estimated to be spent for research and development (R&D) activities at four federal agencies. This statement discusses (1) GAO's efforts to fulfill its responsibilities under the Recovery Act; (2) GAO's coordination with others in the accountability community; (3) GAO's authorities to assist whistleblowers and elicit public concerns; and (4) updated information on the status of Recovery Act funds for R&D. It is based in part on GAO's first bimonthly Recovery Act report, Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential (GAO-09-580), and GAO's March 5, 2009, testimony, American Recovery and Reinvestment Act: GAO's Role in Helping to Ensure Accountability and Transparency (GAO-09-453T). GAO is carrying out its responsibilities to review the uses of Recovery Act funds and will also target certain areas for additional review using a risk-based approach. GAO's first bimonthly report examined the steps 16 states, the District of Columbia, and selected localities are taking to use and oversee Recovery Act funds. These states contain about 65 percent of the U.S. population and are estimated to receive about two-thirds of the intergovernmental grant funds available through the Recovery Act. GAO's report made several recommendations to the Office of Management and Budget (OMB) toward improving accountability and transparency requirements; clarifying the Recovery Act funds that can be used to support state efforts to ensure accountability and oversight; and improving communications with recipients of Recovery Act funds. Soon after the Recovery Act passed, GAO began to coordinate with the accountability community. By the end of February 2009, GAO had conducted initial outreach to IGs, the Board, OMB, and state and local auditors. Now, GAO participates in regular coordination conference calls with representatives of these constituencies to discuss Recovery Act efforts and regularly coordinates with individual IGs. GAO also participates in discussions with state and local organizations to further foster coordination. The work of GAO's 16 state and District of Columbia teams that resulted in the first bimonthly report on the actions of selected states and localities under the Recovery Act also exemplifies the level of coordination we are undertaking with the accountability community. For example, teams working in the states collected documents from and interviewed State Auditors, Controllers, and Treasurers; state IGs; and other key audit community stakeholders to determine how they planned to conduct oversight of Recovery Act funds. Provisions in statute as well as a fraud reporting hotline facilitate GAO's ability to evaluate allegations of waste, fraud, and abuse in the federal government.
Under GAO's authorizing statute, subject to certain limited exceptions, all agencies must provide the Comptroller General with access to information about the duties, powers, activities, organization, and financial transactions of that agency, including for the purpose of evaluating whistleblower complaints. The Whistleblower Protection Act and the Recovery Act provide additional authority for GAO to assist whistleblowers. GAO also maintains a fraud reporting service, which has recently generated more than 25 allegations of misuse of Recovery Act and other federal funds. These allegations are currently under review by our forensic audit team. Since GAO first provided this Subcommittee with an estimate of the Recovery Act R&D funds to be spent, agencies have submitted program plans to OMB that include, among other things, programs' objectives, schedules, and the types of financial awards to be used. OMB expects to approve these plans by May 15, 2009. As of April 28, 2009, only the Department of Energy's Office of Science had obligated Recovery Act R&D funds for project expenditures.
While the 75 percent rule has been in effect in one form or another for over two decades, the current payment system and review procedures for IRFs went into effect in recent years. The Social Security Amendments of 1983 changed the Medicare hospital payment system from a cost-based retrospective reimbursement system to a prospective system known as the inpatient prospective payment system (IPPS), under which hospitals receive a per-discharge payment for a diagnosis-related group (DRG). However, the amendments excluded "rehabilitation hospitals," and so IRFs continued to be paid under a reasonable-cost-based retrospective system. Before the IPPS was implemented, CMS consulted with the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and other accrediting organizations to determine how to classify IRFs, that is, distinguish them from other facilities for payment purposes. The 75 percent rule was established for that purpose in 1983. To develop the original list of conditions in the 75 percent rule, CMS relied, in part, on information from the American Academy of Physical Medicine and Rehabilitation, the American Congress of Rehabilitation Medicine, the National Association of Rehabilitation Facilities, and the American Hospital Association. According to CMS, the conditions on the list accounted for approximately 75 percent of the admissions to IRFs when the original list was developed. In January 2002, a prospective payment system (PPS), known as the inpatient rehabilitation facility prospective payment system (IRF PPS), was implemented for IRFs. On June 7, 2002, CMS suspended the enforcement of the 75 percent rule after its study of FIs, which have responsibility under contract with CMS for verifying compliance with the rule, revealed that they were using inconsistent methods to determine whether an IRF was in compliance and that in some cases IRFs were not being reviewed for compliance at all. Specifically, CMS found that only 20 of the 29 FIs conducted reviews for IRF compliance with the 75 percent rule and that the FIs that did these reviews used different methods and data sources. In 2004, CMS standardized the verification process that the FIs were to use to determine if an IRF met the classification criteria, including how to determine whether a patient is considered to have 1 of the 13 conditions. When the final rule was made effective on July 1, 2004, a transition period was established for IRFs to meet the requirements of the rule. In addition to lowering and then increasing the threshold, the transition period allows a patient to be counted toward the required threshold if the patient is admitted for either a primary or comorbid condition on the list in the rule. But at the end of the transition period, a patient cannot be counted toward the required threshold on the basis of a comorbidity on the list in the rule. The requirements of the transition period are as follows:

July 1, 2004, to June 30, 2005: 50 percent threshold, counting comorbidities
July 1, 2005, to June 30, 2006: 60 percent threshold, counting comorbidities
July 1, 2006, to June 30, 2007: 65 percent threshold, counting comorbidities
Effective July 1, 2007: 75 percent threshold, not counting comorbidities

During the 3-year transition period, CMS plans to analyze claims and patient assessment data to evaluate if and how the 75 percent threshold should be modified.
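To illustrate how the transition-period counting rules operate, the following is a minimal sketch, assuming hypothetical record fields primary_on_list and comorbid_on_list; it is not CMS's or the FIs' actual verification procedure, which works from claims and patient assessment data.

```python
from datetime import date

# Transition schedule described above: (start, end, threshold,
# whether listed comorbidities count toward the threshold).
TRANSITION_PHASES = [
    (date(2004, 7, 1), date(2005, 6, 30), 0.50, True),
    (date(2005, 7, 1), date(2006, 6, 30), 0.60, True),
    (date(2006, 7, 1), date(2007, 6, 30), 0.65, True),
    (date(2007, 7, 1), date.max,          0.75, False),  # full implementation
]

def compliance_share(patients, review_date):
    """Return (share of qualifying patients, applicable threshold).

    patients: list of dicts with hypothetical boolean fields
    'primary_on_list' and 'comorbid_on_list'.
    """
    for start, end, threshold, count_comorbid in TRANSITION_PHASES:
        if start <= review_date <= end:
            qualifying = sum(
                1 for p in patients
                if p["primary_on_list"]
                or (count_comorbid and p["comorbid_on_list"])
            )
            return qualifying / len(patients), threshold
    raise ValueError("date precedes the transition period")

# Example: 40 of 100 patients qualify on a primary condition and
# 20 more qualify only through a listed comorbidity.
patients = ([{"primary_on_list": True, "comorbid_on_list": False}] * 40
            + [{"primary_on_list": False, "comorbid_on_list": True}] * 20
            + [{"primary_on_list": False, "comorbid_on_list": False}] * 40)
share, threshold = compliance_share(patients, date(2005, 10, 1))
print(share >= threshold)  # True: 0.60 meets the 60 percent phase
```

Note that the same facility would fall to a 40 percent share once comorbidities stop counting at full implementation, which is the effect the transition period is designed to phase in.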
In addition, the agency has announced its willingness to consider alternative policy proposals to the 75 percent rule submitted during this period. In the past, CMS has declined requests to modify the rule's threshold or list of conditions, citing a lack of supporting or objective data from the clinical community. However, in the final rule, the agency solicited "objective data or evidence from well-designed research studies" that would support a change in the rule's 75 percent threshold or list of conditions. Also, because of the relative absence of clinical research studies in the peer-reviewed medical literature, CMS contracted with NIH to convene one meeting of a research panel to review the current medical literature and identify priorities for conducting studies on inpatient rehabilitation. Beginning in January 2002, CMS implemented the IRF PPS to pay IRFs on a per-discharge basis. Payment is contingent on an IRF's completing a patient assessment after admission and transmitting the resulting data to CMS. The Inpatient Rehabilitation Facility—Patient Assessment Instrument (IRF-PAI) includes an impairment group code that identifies the impairment group, that is, the condition that requires admission to rehabilitation. The patient's comorbidities are also recorded on the IRF-PAI. The impairment group code is combined with other information on the IRF-PAI to classify the patient into 1 of 100 case-mix groups (CMG). Patients are assigned to a CMG based on the impairment group code, age, and levels of functional and cognitive impairment. The CMG determines the payment the IRF will receive for a patient. Each CMG is weighted to account for the relative difference in resource use across all CMGs. Within each CMG, the weighting factors are "tiered" based on the estimated effect of comorbidities. Each CMG has four payment tiers reflecting the level of comorbidities. CMS contracts with FIs, the entities that conduct compliance reviews, to also conduct reviews for medical necessity, that is, to determine whether an individual admission to an IRF was covered under Medicare. FIs were specifically authorized to conduct reviews for medical necessity for inpatient rehabilitation services beginning in April 2002. According to the Medicare Benefit Policy Manual, two basic requirements must be met if inpatient hospital stays for rehabilitation services are to be covered: (1) the services must be reasonable and necessary, and (2) it must be reasonable and necessary to furnish the care on an inpatient hospital basis, rather than in a less intensive facility, such as a SNF, or on an outpatient basis. Determinations of whether hospital stays for rehabilitation services are reasonable and necessary must be based on an assessment of each beneficiary's individual care needs. Fewer than half of all IRF Medicare patients in fiscal year 2003 were admitted for conditions on the list in the 75 percent rule. The patients admitted in 2003 had a variety of conditions, not all of which were on the list in the rule. Nearly half of the patients admitted for conditions not on the list were admitted for orthopedic conditions. The largest group of patients admitted for orthopedic conditions not on the list were admitted for joint replacements that did not meet the list's specific criteria for joint replacement. Relatively few of these patients had comorbid conditions that suggested a possible need for the intensive level of rehabilitation provided in IRFs.
Additionally, we found that, based on the fiscal year 2003 data, few IRFs were able to meet a 75 percent threshold. Medicare patients were admitted to IRFs in fiscal year 2003 with a variety of conditions, as defined by the impairment group codes we analyzed. Forty-two percent of the 506,662 Medicare patients admitted to IRFs in 2003 were admitted with orthopedic conditions, representing the largest category of patients. Figure 1 shows the distribution of all the conditions, based on impairment group codes, for which patients were admitted to IRFs in fiscal year 2003. The largest impairment group consisted of patients admitted for joint replacement. Fewer than half of the Medicare patients (222,316 of the 506,662 patients) admitted in fiscal year 2003 were admitted for a primary condition that was on the list in the 75 percent rule. Using the impairment group codes assigned to these patients at the time of their admission, we determined that in fiscal year 2003 less than 44 percent of IRF admissions had a primary condition that was on the list in the rule. However, when comorbid conditions that were on the list were counted, as they would be during the transition period, the number of patients having a listed condition rose to 311,740 (62 percent) of IRF patients in that year. (See table 1.) The amount of increase that occurred when comorbid conditions were counted varied by impairment group. For some impairment groups, the percentage of patients who had a condition on the list in the rule increased substantially when comorbidities were counted. For example, the percentage of joint replacement patients having a listed condition increased from 13 percent to 51 percent by virtue of their comorbidities. The comorbidity that qualified over 90 percent of this group was some form of arthritis. In contrast, the increase was smaller for patients in the medically complex, cardiac, debility, pain syndrome, and pulmonary disorder impairment groups, ranging from 14 to 22 percentage points. The comorbidity that qualified about one-third of cardiac and debility patients was stroke, and the comorbidity that qualified over one-third of pulmonary patients was a neurological condition. Almost half of the 194,922 IRF Medicare patients who did not have a condition on the list in the rule, either as a primary condition or as a comorbid condition, were admitted for orthopedic conditions. (See fig. 2.) The single largest group of patients who did not have a condition on the list were the joint replacement patients whose condition did not meet the list's specific criteria for joint replacements. Over 30 percent of patients who did not have a condition on the list had been admitted to IRFs for joint replacement, with another 15 percent having been admitted for "other orthopedic," that is, any orthopedic condition other than hip fractures or joint replacements. The next largest group, cardiac patients, represented 12 percent. Although some joint replacement patients may need the level of services of an IRF, such as those who have a comorbid condition that significantly affects their level of function, our analysis of the case-mix groups used for payment purposes suggests that relatively few of the Medicare joint replacement patients currently admitted by IRFs fit this description.
In particular, 87 percent of joint replacement patients admitted in fiscal year 2003 had unilateral procedures and were less than 85 years of age, and thus did not fit the criteria for joint replacement on the list in the rule based on their primary condition. Of the joint replacement patients who did not fit the criteria based on their primary condition, over 84 percent were in a payment tier with no comorbidities that affected costs. Only 6 percent of IRFs were able to meet the requirements of full implementation of the rule that would be in place at the end of the transition period, that is, a 75 percent threshold not counting comorbidities. Our analysis of fiscal year 2003 data for Medicare patients admitted to IRFs, which used the current list of 13 conditions, showed that as the threshold level increased from 50 percent to 75 percent and both primary and comorbid conditions were counted, progressively fewer IRFs were able to meet the higher threshold levels. (See table 2.) In addition, when the count was based only on whether the patient's primary condition was on the list in the rule, as it would be after the transition period, even fewer IRFs met the requirements of the rule. However, many IRFs were able to meet the lower thresholds that would be in place earlier in the transition period. Over 80 percent of IRFs were able to meet a 50 percent threshold based on the primary conditions or comorbid conditions of the patients they admitted in 2003. Some IRF officials are concerned that they may have to limit admissions in order to comply with the rule and that some IRFs may have to close or reduce beds. Some of the IRF officials we interviewed reported that as the threshold of the rule increases they expect to limit admissions for patients with conditions not on the list in the rule. One IRF official estimated that the facility's revenues would decrease by 40 percent by the third year of the rule's transition period, severely harming the facility financially and affecting access to care, and another IRF official reported that the facility expected its census to drop by half, which would affect the number of beds it could operate and staff it could employ. An IRF official whose facility was meeting the 75 percent threshold said that if the facility fell below the threshold, it would limit admissions to remain in compliance. IRFs have not generally been declassified for failure to comply with the 75 percent rule, and CMS recently clarified instructions for FIs to use to conduct compliance assessments. Officials from CMS's 10 regional offices reported that no IRFs had been declassified in at least the past 5 years. When CMS found that FIs were using different approaches to conduct compliance assessments, it determined that one cause was that the CMS manuals did not detail the methodology FIs should use to perform the reviews. Following CMS's modifications of the rule, it issued new instructions in a program transmittal that defined and standardized the procedures that FIs are to use to conduct compliance assessments, and some FI officials we interviewed reported that the instructions were clearer and more detailed than in prior years. The criteria IRFs used to assess patients for admission varied by facility, and CMS has not routinely reviewed IRFs' admission decisions. In particular, IRFs used a range of criteria in making admission decisions, including patient characteristics such as function, in addition to condition.
Admission decisions may also be influenced by an IRF's level of compliance with the 75 percent rule's list of conditions. CMS, working through its FIs, has not routinely reviewed IRF admission decisions for medical necessity, although CMS officials reported that such reviews could be used as a means to target problems. The IRF officials we interviewed varied in the criteria they used to characterize the patients who were appropriate for admission. (See table 3.) The number of criteria they reported using ranged from two to six, with no IRF reporting that it relied on a single criterion for admission. Whereas some IRF officials reported that they used function to characterize patients who were appropriate for admission (e.g., patients with a potential for functional improvement), as shown in table 3, others said they used function to characterize patients not appropriate for admission (e.g., patients whose functional level was too high, indicating that they could go home, or too low, indicating that they needed to be in a SNF). In combination, all the IRF officials we interviewed evaluated a patient's function when assessing whether a patient needed the level of services of an IRF, and almost half of the IRF officials interviewed stated that function was the main factor that should be considered in assessing the need for IRF services. The IRF officials we interviewed reported that they did not admit all the patients they assessed. They estimated that the proportion of patients they assessed but did not admit ranged from 5 percent to 58 percent. Most patients were admitted to IRFs from an acute care hospital, and the IRF officials reported receiving referrals from as few as 1 hospital to as many as 55 hospitals. The IRF typically received a request from a physician in the acute care hospital for a medical consultation by an IRF physician, or a contact from a hospital discharge planner or social worker indicating that they had a potential patient. An IRF staff member, usually a physician, a nurse, or both, conducted an assessment prior to admission to determine whether to admit a patient. In addition to individual patient characteristics, admission decisions may also be influenced by an IRF's level of compliance with the 75 percent rule's list of conditions. All the IRF officials we interviewed tracked their own facility's compliance level regularly, generally on a daily, weekly, or monthly basis. Some IRF officials we interviewed reported that the admission decision for a given patient may be affected by the IRF's compliance level at that time. For example, on a day when the facility is at the required level of compliance, a patient with a certain condition that is not on the list in the rule may be admitted, but on another day when the facility is below its compliance level, a patient with the same condition might not be admitted. Half of the IRF officials said that when the rule is enforced they expect they will try to admit more patients with conditions on the list in the rule. CMS, working through its FIs, has not routinely reviewed IRF admission decisions for medical necessity. Among the 10 FI officials we interviewed, over half were not conducting reviews of patients admitted to IRFs. Those that were doing reviews used different approaches for selecting records to be reviewed, such as focusing only on the largest IRFs that failed to comply with the rule or requesting a few records from each IRF in its service area.
CMS officials estimated that less than 1 percent of admissions in facilities excluded from IPPS, such as IRFs, are reviewed, and reported that such reviews could be used as a means to target problems or vulnerabilities. Some of the experts IOM convened and other experts we interviewed stated that because there has been no routine review for medical necessity in IRFs, some IRFs have become "sloppy" in their admitting practices and have taken a "laissez-faire attitude" toward admitting patients. This perspective is borne out by ad hoc studies done by three FIs, which found inadequate justification for admission. For example, in one study an FI official reviewed about 3,000 medical charts and reported that the need for inpatient rehabilitation was unclear in about 30 percent to 40 percent of the IRF patients' charts reviewed. The other two FIs reviewed fewer cases but found a higher proportion of patients in IRFs who did not appear to need inpatient rehabilitation. In contrast to CMS's approach, private payers rely on individual preauthorization to ensure that the most appropriate patients are admitted to IRFs. All of the three major insurers and one managed care plan whose officials we interviewed required preauthorization for each admission to an IRF, judging each case individually when determining whether a specific patient should be admitted. In making their decisions, they relied on a variety of factors, which differed from payer to payer, including diagnosis, symptoms, treatment plan, the need for and the patient's ability to participate in 3 hours of daily therapy, the need for care by a physiatrist, and the potential for an IRF admission to provide an earlier discharge from the acute care hospital (compared to a possibly longer stay in the acute care hospital with discharge to home or a SNF). Three private payers we spoke with indicated that IRFs are generally paid on a per diem basis, and all said that patients are monitored by the insurer or health plan throughout their IRF stay. The experts IOM convened and other experts we interviewed differed on whether conditions should be added to the list in the 75 percent rule but agreed that condition alone does not provide sufficient criteria to identify the types of patients appropriate for IRFs. The experts IOM convened questioned the strength of the evidence for adding conditions to the list. They reported that the evidence on the benefits of IRF services, particularly for certain orthopedic conditions, is variable, and they called for further research. Other experts did not agree on whether conditions, including a broader category of joint replacements, should be added to the list. The experts IOM convened and other experts agreed that condition alone is insufficient for identifying appropriate patients and contended that functional status should also be considered. The experts IOM convened suggested factors to use in classifying IRFs, including both patient and facility characteristics. The experts IOM convened generally questioned the strength of the evidence for the conditions suggested for addition to the list in the rule. Some of them reported that there was little information available on the need for inpatient rehabilitation for cardiac, transplant, pulmonary, or oncology patients.
One of them stated that inpatient rehabilitation may be the best way of caring for patients who have weakened physically due to long hospital stays but added that "we simply do not know." The same expert also cited a study that showed that inpatient rehabilitation services made a difference for patients with metastatic spine cancer and noted that this result was unexpected and could indicate that "clinical intuition" on the benefits of inpatient rehabilitation may not always be reliable. For conditions currently on the list in the rule, the experts IOM convened reported varying degrees of strength in the evidence on the benefits of IRF services. Although the experts IOM convened did not comment on every condition on the list, the group generally agreed that the data on the benefits of intensive inpatient rehabilitation for stroke are "incontrovertible." For certain other conditions on the list, such as spinal cord injury and traumatic brain injury, they reported that it is reasonable to expect intensive inpatient rehabilitation to provide good outcomes because these patients need intensive training about self-care, and patients with traumatic brain injury may also require behavioral services. One expert questioned the strength of the evidence related to hip fractures, saying it was unclear whether patients with a hip fracture would be better served by sending them home right away, by putting them in an IRF, or by giving them some combination of intensive inpatient rehabilitation, home health care, or care in a SNF. The condition the experts IOM convened discussed most was joint replacement, which was the most common condition for patients admitted to IRFs and is included on the list of conditions in the rule, but only under certain circumstances. In general, they reported that, except for a few subpopulations, uncomplicated unilateral joint replacement patients rarely need to be admitted to an IRF. For example, one of the experts said that admission to an IRF of a healthy person with an uncomplicated joint replacement is an example of a practice that is not evidence-based, and others said that there are no data and little evidence on the effectiveness of intensive inpatient rehabilitation for elective joint replacement patients. Another expert stated that the evidence on the benefits of IRF services for hip fracture and joint replacement patients is "very, very weak," that orthopedics is the "heart of the issue" related to the list of conditions in the rule, and that a panel of clinicians should be convened to focus solely on the orthopedic conditions. Most of the experts IOM convened called for more research in several areas, including which types of patients can be treated best in IRFs and the effectiveness of IRFs in comparison with other settings of care. CMS has also identified questions for a future research agenda that can assess the efficacy of rehabilitation services in various settings. CMS may also undertake other activities, such as periodically holding additional meetings with researchers or encouraging observational studies, as well as soliciting comments from the public for additional studies. There was no general agreement among the IRF officials we consulted on whether conditions should be added to the list in the rule, and if so, which conditions. In our interviews with IRF officials, three-quarters identified various conditions that should be added.
Of these, all suggested the addition of cardiac conditions, and some identified other conditions, such as pulmonary conditions, transplants, and more joint replacements than are currently on the list. The reasons these IRF officials gave for adding these conditions included that these patients can become weakened physically during a hospital stay and need services in an IRF to regain their strength and also that their experience shows they can achieve good outcomes for these patients. The remaining IRF officials said no conditions should be added. Some reasons they cited were that these patients can be treated in a less intensive setting, that the conditions are too broad to be meaningful, and that using a list of conditions is the wrong approach. IRF officials differed regarding the addition of joint replacement patients. Half of them suggested that joint replacement be more broadly defined to include more patients, saying, for example, that the current requirements were too restrictive and arbitrary, and a couple of them said that unilateral joint replacement patients are not generally appropriate for IRFs. The experts IOM convened contended that condition alone was insufficient for identifying which patients, or types of patients, required the level of services available in an IRF and generally agreed that functional status should also be used. A patient's condition was perceived as an acceptable starting point to understanding patient needs and as a way to characterize the patients served by IRFs. But the experts IOM convened generally agreed that condition, by itself, was insufficient and that more information was needed. They said that condition alone fails to identify the subgroup within each condition that is most appropriate for intensive inpatient rehabilitation. For example, one of them noted that although an IRF could be filled with patients who have conditions on the list in the rule, the patients could be completely inappropriate for that setting. Another expert at the meeting reported general agreement among the group that using diagnosis alone is not sufficient. In addition to the experts convened by IOM, other experts we interviewed also said that condition alone was insufficient because having a condition on the list in the rule does not automatically indicate the need for intensive inpatient rehabilitation (e.g., even though stroke is on the list, only a subgroup of stroke patients require IRF services) and having a condition not on the list does not necessarily mean the patient does not need IRF services (e.g., although there is no cardiac condition on the list, a subgroup of cardiac patients need the level of services of an IRF). The FI and IRF officials we interviewed likewise generally reported that condition alone was insufficient. Over half the FI officials we interviewed said that condition is insufficient by itself to determine the need for intensive inpatient rehabilitation, and some said that diagnosis is only a starting point. As noted earlier, all the IRF officials reported using a variety of criteria, beyond condition, to assess patients for admission, including function. Among the experts convened by IOM, functional status was identified most frequently as the information required in addition to condition. Half of the experts IOM convened commented on the need to add information about functional status, such as functional need, functional decline, motor and cognitive function, and functional disability.
To measure both diagnosis and function, one of them suggested using the case-mix groups because they combine both dimensions. Experts we interviewed also raised some concerns, however, about using function as a measure of need for intensive inpatient rehabilitation. The concerns voiced by the FI officials we interviewed included the potential for abuse by qualifying more patients for admission and the potential for difficulty in adjudicating claims. One FI official said that moving toward an assessment of functional status would require a better instrument than currently exists. Another expert we interviewed said that using only functional status could lead to including custodial patients who are currently in SNFs. Officials at CMS also expressed concerns regarding how to measure the need for intensive inpatient rehabilitation based on functional status because a patient can have a low functional status but not need intensive inpatient rehabilitation. Almost all the experts IOM convened said that IRF classification should include characteristics of the patients served, but a couple said that IRF classification should not include patient characteristics. Among those expressing the need to use patient characteristics, function was identified most often, although some noted that it would be hard to operationalize. Some of the experts IOM convened also suggested that the percentage threshold be set at a lower level than 75 percent (for example, 60 percent or 65 percent) as a compromise until more information becomes available to modify the list in the rule. The experts IOM convened who opposed using patient characteristics to classify IRFs suggested that IRFs be classified with just the other six facility criteria, potentially looking at state licensure requirements for additional facility criteria that could be applied specifically to IRFs. These experts (as well as others we interviewed) said that no other facility is classified using both patient and facility characteristics and that IRFs are unique in being subjected to this approach. However, Medicare does classify other facilities that are exempt from IPPS using a characteristic of the patients served in those facilities. Furthermore, other experts at the meeting did not agree that the six certification criteria were sufficient for distinguishing IRFs, since long-term care hospitals could likely meet these criteria as well. Our analysis of Medicare data shows that there are Medicare patients in IRFs who may not need the intensive level of rehabilitation services these facilities offer. Just over half of all Medicare patients admitted to IRFs in fiscal year 2003 were admitted for a condition that was not on the list in the 75 percent rule. Of those patients whose primary or comorbid condition was not on the list, the largest group was joint replacement patients whose condition did not fit the list's specific criteria for joint replacement. The experts IOM convened and other experts we interviewed reported that unilateral, uncomplicated joint replacement patients rarely need to be in an IRF. These experts also reported that patients who may not need to be in an IRF may have been admitted because CMS has not been routinely reviewing the IRFs' admission decisions to determine whether they were medically justified.
Increased scrutiny of individual admissions through routine reviews for medical necessity following patient discharge could be used to target problems and vulnerabilities and thereby reduce the number of inappropriate admissions in the future. While some patients do not need to be in an IRF, the need for IRF services may be more difficult to determine for other patients. The experts convened by IOM called for more research to understand the effectiveness of intensive inpatient rehabilitation, reporting that the evidence for the effectiveness of IRF services varied in strength for conditions on the list and was particularly weak for certain orthopedic conditions. CMS has also recognized the need for more research in this area and asked NIH to convene one meeting to help identify research priorities for inpatient rehabilitation. Research studies that can produce information on a timely basis, such as observational studies or meetings of clinical experts with specialized expertise, would be especially helpful in this effort. The presence of patients in IRFs who may not need that level of services and the calls for more research on the effectiveness of inpatient rehabilitation lead us to conclude that greater clarity is needed in the rule about what types of patients are most appropriate for rehabilitation in an IRF. There was general agreement among the experts we interviewed, including the experts convened by IOM, that condition alone is not sufficient to identify the most appropriate types of patients, since within any condition only a subgroup of patients require the level of services of an IRF. We believe that if condition alone is not sufficient to identify the most appropriate types of patients, it would not be useful to add more conditions to the list at the present time. There was also general agreement among the experts that more information is needed to characterize appropriate types of patients, and the most commonly identified factor was functional status. However, some of the experts convened by IOM recognized the challenge of operationalizing a measure of function, and some experts questioned the ability of current assessment tools to predict which types of patients will improve if treated in an IRF. Despite the challenge, more clearly delineating the most appropriate types of patients would offer more direction to IRFs, and to the health professionals who refer patients to them, about which types of patients can be treated in IRFs. We believe that action to conduct reviews for medical necessity and to produce more information about the effectiveness of inpatient rehabilitation could support future efforts to refine the rule over time to increase its clarity about which types of patients are most appropriate for IRFs. These actions could help to ensure that Medicare does not pay IRFs for patients who could be treated in a less intensive setting and does not misclassify facilities for payment. To help ensure that IRFs can be classified appropriately and that only patients needing intensive inpatient rehabilitation are admitted to IRFs, we recommend that the CMS Administrator take three actions:

CMS should ensure that FIs routinely conduct targeted reviews for medical necessity for IRF admissions.

CMS should conduct additional activities to encourage research on the effectiveness of intensive inpatient rehabilitation and the factors that predict patient need for intensive inpatient rehabilitation.
CMS should use the information obtained from reviews for medical necessity, research activities, and other sources to refine the rule to describe more thoroughly the subgroups of patients within a condition that are appropriate for IRFs rather than other settings, and may consider using other factors in the descriptions, such as functional status.

In commenting on a draft of this report, CMS stated that our work would be of assistance to the agency in examining issues related to patient coverage and the classification of inpatient rehabilitation facilities. CMS generally agreed with our recommendations and provided technical comments, which were incorporated as appropriate. CMS agreed that targeted reviews for medical necessity are necessary and said that it expected its contractors to direct their scarce resources toward areas of risk. CMS said that it has expanded its efforts to provide greater oversight of IRF admissions through local policies that have been implemented or are being developed by the FIs. CMS also agreed with our recommendation to encourage additional research and noted that it has expanded its activities to guide future research efforts by encouraging government research organizations, academic institutions, and the rehabilitation industry to conduct both general and targeted research. CMS said that it would collaborate with NIH to determine how best to promote research. CMS also stated that, while it expected to follow our recommendation to describe subgroups of patients within a medical condition, it would need to give this action careful consideration because it could result in a more restrictive policy than the present regulations. CMS noted that future research could guide the agency's descriptions of subgroups. Although CMS indicated its intention to follow this recommendation, we clarified the language in the recommendation to encourage CMS to obtain research and other information to undertake this effort. CMS's written comments are reprinted in appendix IV. We also received oral comments on a draft of this report from representatives of the American Hospital Association, the American Medical Rehabilitation Providers Association, and the Federation of American Hospitals. All three groups noted that we applied the criteria for a rule that was effective July 1, 2004, to data from fiscal year 2003, when IRFs were operating under a different list of conditions. They stated that a difference between the lists of conditions in these 2 years was in the definition of polyarthritis, which affected the circumstances under which joint replacement patients were counted under the rule. They reported that in fiscal year 2003, IRFs admitted Medicare joint replacement patients who they believed were within the criteria of the rule in effect at that time, but may not have been within the criteria of the rule that took effect July 1, 2004. In its technical comments, CMS also raised concerns about our use of fiscal year 2003 data. We analyzed the admission of joint replacement patients to IRFs and found no material change between the same time periods in 2003 and 2004, as noted in the report. In addition, all three groups supported the call for more research. The three groups also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Administrator of CMS and other interested parties. We will also make copies available to others on request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7114 or Linda T. Kohn at (202) 512-4371. The names of other staff members who made contributions to this report are listed in appendix V.

A facility may be classified as an IRF if it can show that, during a 12-month period, at least 75 percent of all its patients, including its Medicare patients, required intensive rehabilitation services for the treatment of one or more of the following conditions:

1. Stroke.
2. Spinal cord injury.
3. Congenital deformity.
4. Amputation.
5. Major multiple trauma.
6. Fracture of femur (hip fracture).
7. Brain injury.
8. Neurological disorders (including multiple sclerosis, motor neuron diseases, polyneuropathy, muscular dystrophy, and Parkinson's disease).
9. Burns.
10. Active, polyarticular rheumatoid arthritis, psoriatic arthritis, and seronegative arthropathies resulting in significant functional impairment of ambulation and other activities of daily living that have not improved after an appropriate, aggressive, and sustained course of outpatient therapy services or services in other less intensive rehabilitation settings immediately preceding the inpatient rehabilitation admission or that result from a systemic disease activation immediately before admission, but have the potential to improve with more intensive rehabilitation.
11. Systemic vasculitides with joint inflammation, resulting in significant functional impairment of ambulation and other activities of daily living that have not improved after an appropriate, aggressive, and sustained course of outpatient therapy services or services in other less intensive rehabilitation settings immediately preceding the inpatient rehabilitation admission or that result from a systemic disease activation immediately before admission, but have the potential to improve with more intensive rehabilitation.
12. Severe or advanced osteoarthritis (osteoarthritis or degenerative joint disease) involving two or more major weight bearing joints (elbow, shoulders, hips, or knees, but not counting a joint with a prosthesis) with joint deformity and substantial loss of range of motion, atrophy of muscles surrounding the joint, and significant functional impairment of ambulation and other activities of daily living that have not improved after the patient has participated in an appropriate, aggressive, and sustained course of outpatient therapy services or services in other less intensive rehabilitation settings immediately preceding the inpatient rehabilitation admission but have the potential to improve with more intensive rehabilitation. (A joint replaced by a prosthesis is no longer considered to have osteoarthritis, or other arthritis, even though this condition was the reason for the joint replacement.)
13. Knee or hip joint replacement, or both, during an acute hospitalization immediately preceding the inpatient rehabilitation stay, where the patient also meets one or more of the following specific criteria:
a. The patient underwent bilateral knee or bilateral hip joint replacement surgery during the acute hospital admission immediately preceding the IRF admission.
b. The patient is extremely obese, with a body mass index of at least 50 at the time of admission to the IRF.
c. The patient is age 85 or older at the time of admission to the IRF.
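To illustrate how the three specific criteria under condition 13 combine, here is a minimal sketch in Python; the function name and parameters are our own illustrative assumptions, not CMS terminology or IRF-PAI data elements.

```python
# Hypothetical helper illustrating condition 13: a knee or hip joint
# replacement patient counts toward the rule if at least one of the three
# specific criteria (a, b, or c) is met.
def joint_replacement_qualifies(bilateral: bool, bmi: float, age: int) -> bool:
    """Return True if the patient meets criterion a (bilateral replacement),
    b (body mass index of at least 50), or c (age 85 or older)."""
    return bilateral or bmi >= 50 or age >= 85

# Example: an 86-year-old unilateral knee replacement patient qualifies on age.
print(joint_replacement_qualifies(bilateral=False, bmi=31.0, age=86))  # True
```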
In undertaking this work, we analyzed data on Medicare patients admitted to inpatient rehabilitation facilities (IRF) and also interviewed a wide variety of experts in the field to obtain various perspectives. We used several different sources of data, including data from the Centers for Medicare & Medicaid Services (CMS) about Medicare patients admitted to IRFs; interviews with officials at IRFs, fiscal intermediaries (FI), CMS regional offices, and private insurers; a 1-day meeting of clinical experts in the field of physical medicine and rehabilitation; and interviews with other clinical and nonclinical experts and researchers in the field of rehabilitation as well as officials from professional associations of various disciplines involved in inpatient rehabilitation. In total, during this engagement, we spoke with 106 individuals, of whom 65 were clinicians. We conducted our work from May 2004 through April 2005 in accordance with generally accepted government auditing standards. To identify the conditions that IRF patients have, we obtained from CMS the Inpatient Rehabilitation Facility—Patient Assessment Instrument (IRF-PAI) records for all IRF admissions of Medicare patients for fiscal year 2003 (October 1, 2002, to September 30, 2003), which contain data on patient age and sex, impairment group code and case-mix group (CMG) classification, and comorbid conditions. To assess whether individual patients were considered to have 1 of the 13 conditions defined by the list of conditions in CMS's 75 percent rule, we applied the criteria laid out in CMS's Medicare Claims Processing Manual. This document lists the specific impairment group codes and ICD-9-CM diagnostic codes for comorbid conditions entered into the patient's IRF-PAI record that were used to identify patients who belonged in the 13 conditions. We conducted our analyses on Medicare patients only because CMS records contained data on the largest number of IRFs and the majority of patients in IRFs are covered by Medicare. Prior work by RAND found that the percentage of Medicare patients with the conditions on the list in the rule was a good predictor of the percentage of all patients with those conditions. We analyzed these data at the patient level to compare compliance with the rule across impairment groups. To permit a discrete assignment of each patient to one impairment group, we gave priority to the impairment group code designated at admission. To assess the extent to which Medicare patients in IRFs with joint replacements had comorbidities, we examined their distribution among the four payment tiers assigned under the prospective payment system for IRFs. The assigned CMG in the IRF-PAI data set includes a letter prefix that indicates that the patient either had no comorbidities related to the cost of providing inpatient rehabilitation or had one or more comorbidities expected to have a low, medium, or high impact on those costs. We calculated the proportion of joint replacement patients that fell into the no-comorbidity group, both overall and within each of the six joint replacement CMGs. To do our supplementary analysis on a sample of 2004 data, we compared the proportion of Medicare patients admitted to an IRF whose primary condition was joint replacement from July through December 2003 to the proportion of such patients from July through December 2004, using data from IRF-PAI records.
We computed the proportion of Medicare patients admitted to IRFs that were joint replacement patients, ranked the facilities according to the proportion of Medicare joint replacement patients in 2003, and calculated the difference across the two time periods. To determine the number of IRFs that met the requirements of the 75 percent rule, we aggregated Medicare patients treated at the same IRF and calculated the total percentage of each IRF's patients that were admitted with a primary condition or a comorbid condition on the list in the rule. We examined the distribution of compliance levels across IRFs, applying the different thresholds that the rule phases in over several years, but we did not assess the appropriateness of any threshold level. To determine whether any IRFs had ever been declassified based on failure to comply with the 75 percent rule, we interviewed officials at CMS's 10 regional offices. Our analyses rely on Medicare billing information, and we determined that these data were sufficiently reliable for this analysis. We followed the instructions CMS provided to FIs to "presumptively verify compliance," using the list of codes in the Medicare Claims Processing Manual to estimate how many patients, as recorded on the IRF-PAI instrument, have one of the conditions on the list in the rule; FIs use the list of codes in this manual as a first step in making the same estimate. To assess the reliability of the IRF-PAI records for our data analyses, we interviewed two researchers who had experience using the IRF-PAI data set, and performed electronic testing of the required data elements, including impairment codes, comorbid conditions, and admission dates. We examined the IRF-PAI data set and found few missing or invalid entries for the variables we used. We did not compare the information entered on the IRF-PAI to medical records. All of these analyses encompassed services provided in facilities located in the 50 states and the District of Columbia. To determine how IRFs assess patients for admission and how CMS reviews admission decisions for medical necessity, we interviewed the medical directors at 12 IRFs and the medical director or designee at 10 FIs. We used data from the RAND Corporation's "Case Mix Certification Rule for Inpatient Rehabilitation Facilities" (2003), prepared under contract to CMS, to select our respondents out of a total of more than 1,200 IRFs. RAND had analyzed the level of compliance of each IRF with the rule using the 10 conditions on the list at that time. We used RAND data to create a sampling frame to select IRFs to interview, but we did not rely on RAND's data for any findings or conclusions. We matched facilities with data from the IRF-PAI to identify them and sorted them by zip code according to the Northeast, Midwest, South, and West regions as defined by the U.S. Census Bureau. Within each region, we selected IRFs with a high, median, and low level of compliance with the 75 percent rule. We identified the median complier in each region, and if necessary adjusted the selection of IRFs to (1) avoid interviewing more than one IRF in the same state and (2) provide a selection of for-profit, freestanding, and rural facilities. If a selected provider was unwilling or unable to participate in the interview, we substituted the IRF next on the list that was most similar in characteristics to the facility originally chosen.
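To make the facility-level compliance calculation described above concrete, the following is a simplified sketch in Python using pandas; the records, column names, and threshold values are illustrative assumptions rather than actual IRF-PAI fields or the rule's exact phase-in schedule.

```python
import pandas as pd

# Illustrative patient-level records; in the actual analysis, each row would be
# an IRF-PAI admission flagged (via impairment group and comorbidity codes) as
# having or not having a condition on the list in the rule.
records = pd.DataFrame({
    "facility_id": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "on_list":     [True, True, True, False, True, False, False, False],
})

# Aggregate patients treated at the same IRF and compute the percentage of
# each facility's patients admitted with a condition on the list.
compliance = records.groupby("facility_id")["on_list"].mean() * 100

# Apply a range of compliance thresholds, as the rule phases in over time.
for threshold in (50, 60, 65, 75):
    n_meeting = int((compliance >= threshold).sum())
    print(f"IRFs meeting a {threshold} percent threshold: {n_meeting}")
```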
We conducted a structured interview with the medical director of each facility, and provided unstructured time at the end of the interview for the respondent to raise other issues. For nonclinical questions that the medical directors were unable to answer, we spoke to a member of the administrative team. We identified the areas covered in the interviews through background interviews with professional associations, advocacy groups, CMS, and experts in inpatient rehabilitation and health policy research, and pretested the interview protocol with two IRFs not included in our sample. The FIs we selected to interview were those that serviced the states in which the IRFs we selected were located. Because some FIs serviced more than one state, our selection yielded 10 FIs (out of a total of 30). To facilitate our interviews, we spoke with the appropriate CMS regional office, which notified an official at each FI about this engagement. We conducted a structured interview with the medical director or designee regarding (1) appropriate patients for inpatient rehabilitation, (2) the list of conditions in the rule, (3) assessment for compliance, and (4) reviews for medical necessity. We pretested the interview protocol with three FIs that were not included in our sample. We also spoke with FI officials who had been identified as being interested in inpatient rehabilitation. All FI officials had the opportunity to discuss issues other than those we highlighted. To compare Medicare’s approach to the approaches of other payers, we selected a convenience sample of three insurers and one regional managed care organization to learn about their activities regarding inpatient rehabilitation. We interviewed officials from these payers, asking how they identified facilities for intensive inpatient rehabilitation, and how they identified appropriate patients for such services. Our interviews do not represent all concerns or experiences of inpatient rehabilitation facilities, FIs, or private payers, and the answers to the structured interviews were not restricted to Medicare patients. Because we were directed to examine the 75 percent rule and not directly to evaluate the relative value of inpatient rehabilitation, we did not ask questions about the full spectrum of postacute care. To evaluate the approach of using a list of conditions in the 75 percent rule to classify IRFs, we contracted with the Institute of Medicine (IOM) of The National Academies to convene a 1-day meeting of clinical experts broadly representative of the field of physical medicine and rehabilitation. We identified for IOM the categories of participants preferred at the meeting. To identify specific participants, IOM obtained input from us, IOM members, advocacy groups, and individual experts in the field. It identified a pool of participants according to the preferred categories. In total, 14 experts participated: 4 practicing physicians, 2 physical therapists, 2 occupational therapists, 1 speech therapist, 2 nurses, 1 physician/researcher in postacute care, 1 physician/researcher from a research institute, and 1 health services researcher. The meeting was facilitated by a physician/researcher with expertise in Medicare payment policy. Invitations to participate were issued by IOM. Participants were invited as individual experts, not as organizational representatives. The group was not asked to reach consensus on any issues, and IOM was not asked to produce or publish a report of the meeting. 
We observed the meeting and subsequently reviewed the transcript and audiotape of the meeting, listed the individual comments made during the meeting, and grouped the comments around a limited number of themes. The comments from the meeting of the experts IOM convened represent their individual statements and not a consensus of the group as a whole. In convening the meeting, IOM was unable to obtain participation from clinical experts who were not employed in IRFs (such as referring physicians or therapists in acute care settings) or from a private payer. The comments of participants should not be interpreted to represent the views of IOM or all clinical experts in the field of rehabilitation. To examine the proportion of Medicare patients discharged from hospitals with different diagnosis-related groups (DRG) who went to IRFs for postacute care, we obtained CMS's Medicare Provider Analysis and Review (MEDPAR) file that contained all Medicare inpatient discharges from both acute care hospitals and IRFs for fiscal year 2003. This file provided information on patient admission and discharge dates from acute care hospitals and rehabilitation facilities along with the DRG assigned for each acute care stay. We identified all the patients who entered IRFs within 30 days of their hospital discharge during fiscal year 2003 and calculated the frequencies for each DRG among them. We then selected the 19 DRGs that represented at least 1 percent of IRF admissions from acute care hospitals. Next we determined the total number of hospital discharges with those DRGs and computed the proportion of patients in each of these DRGs that were admitted to an IRF within 30 days (a simplified sketch of this computation appears at the end of this appendix). The analysis of acute hospital discharges required that we use the separate MEDPAR file that had information on inpatient DRGs and on patients who did not enter IRFs as well as those who did. The MEDPAR analysis may therefore reflect a slightly different IRF patient population from that reflected in the analyses conducted with the IRF-PAI data set. Apparent variations in the admission dates recorded for IRF patients in the two sets of data prevented us from combining data from each into one consolidated data set. Manuel Buentello, Behn Kelly, Ba Lin, Eric Peterson, Kristi Peterson, and Roseanne Price made key contributions to this report.
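The DRG analysis referenced above is essentially a link-and-aggregate computation. A simplified sketch in Python using pandas follows; the toy records and column names are our own assumptions, not actual MEDPAR field names.

```python
import pandas as pd

# Toy stand-ins for the MEDPAR extracts: acute care discharges and IRF
# admissions for the same beneficiaries.
hosp = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "drg": [209, 209, 14, 14, 210],
    "discharge_date": pd.to_datetime(
        ["2003-01-02", "2003-02-10", "2003-03-05", "2003-04-20", "2003-05-15"]),
})
irf = pd.DataFrame({
    "patient_id": [1, 3, 4],
    "admit_date": pd.to_datetime(["2003-01-10", "2003-03-08", "2003-06-30"]),
})

# Link IRF admissions to the preceding hospital discharge and keep those that
# occurred within 30 days of discharge (patient 4 is excluded: 71 days later).
linked = irf.merge(hosp, on="patient_id")
within_30 = (linked["admit_date"] >= linked["discharge_date"]) & (
    linked["admit_date"] - linked["discharge_date"] <= pd.Timedelta(days=30))
linked = linked[within_30]

# Keep DRGs accounting for at least 1 percent of IRF admissions, then compute
# the proportion of hospital discharges in each such DRG that entered an IRF.
drg_share = linked["drg"].value_counts(normalize=True)
top_drgs = drg_share[drg_share >= 0.01].index
went_to_irf = linked.groupby("drg")["patient_id"].nunique()
all_discharges = hosp[hosp["drg"].isin(top_drgs)].groupby("drg")["patient_id"].nunique()
print(went_to_irf.reindex(all_discharges.index, fill_value=0) / all_discharges)
```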
Medicare classifies inpatient rehabilitation facilities (IRF) using the "75 percent rule." If a facility can show that, during 1 year, at least 75 percent of its patients required intensive rehabilitation for 1 of 13 specified conditions, it may be classified as an IRF and paid at a higher rate than is paid for less intensive rehabilitation in other settings. Medicare payments to IRFs have grown steadily over the past decade. In this report, GAO (1) identifies the conditions—on and off the list—that IRF Medicare patients have and the number of IRFs that meet a 75 percent threshold, (2) describes IRF admission criteria and Centers for Medicare & Medicaid Services (CMS) review of admissions, and (3) evaluates use of a list of conditions in the rule. GAO analyzed data on Medicare patients (the majority of patients in IRFs) admitted to IRFs in FY 2003, spoke to IRF medical directors, and had the Institute of Medicine (IOM) convene a meeting of experts. In fiscal year 2003, fewer than half of all IRF Medicare patients were admitted for having a condition on the list in the 75 percent rule, and few IRFs admitted at least 75 percent of their patients for one of those conditions. The largest group of patients had orthopedic conditions, not all of which were on the list in the rule; enforcement of the rule had been suspended in 2002. Almost half of all patients with conditions not on the list were admitted for orthopedic conditions, and among those the largest group was joint replacement patients. Although some joint replacement patients may need admission to an IRF, GAO's analysis showed that few of these patients had comorbidities that suggested a possible need for the IRF level of services. Additionally, GAO found that only 6 percent of IRFs in fiscal year 2003 were able to meet a 75 percent threshold. IRFs varied in the criteria used to assess patients for admission, and CMS has not routinely reviewed IRF admission decisions. IRF officials reported that the criteria they used to make admission decisions included patient characteristics such as function, as well as condition. CMS, working through its fiscal intermediaries, has not routinely reviewed IRF admission decisions. The experts IOM convened and other clinical and nonclinical experts GAO interviewed differed on whether conditions should be added to the list in the 75 percent rule but agreed that condition alone does not provide sufficient criteria to identify the types of patients appropriate for IRFs. The experts IOM convened questioned the strength of the evidence for adding conditions to the list, finding the evidence for certain orthopedic conditions particularly weak, and they called for further research to identify the types of patients that need inpatient rehabilitation and to understand the effectiveness of IRFs. Other experts did not agree on whether conditions, including a broader category of joint replacements, should be added to the list. Experts, including those IOM convened, generally agreed that condition alone is insufficient for identifying appropriate types of patients for inpatient rehabilitation, since within any condition only a subgroup of patients require the level of services of an IRF, and contended that functional status should also be considered.
The Navy determines the number of sailors and the skills needed to operate its ships through a standardized manpower requirements process. The Navy then mans the ships by filling the required positions—to the extent that the number and type of positions are funded and the trained and qualified personnel are available to fill them—as summarized in figure 1. This manpower requirements process is based primarily on the documents that lay out a ship class's required operational capabilities and projected operational environment (i.e., the missions the ship will fulfill and how it will operate to carry them out). The Navy Manpower Analysis Center is the chief agent in determining manpower requirements by validating a ship's primary workload; applying allowances to account for working conditions, among other factors; and computing the manpower requirements—the number and mix of positions needed to meet the Navy's operational expectations. The Navy Manpower Analysis Center develops manpower requirements for new ship classes either after a ship's first deployment or about 1 year after the ship has become operational, and publishes the validated requirements in Ship Manpower Documents. Navy Manpower Analysis Center officials reassess a ship's manpower requirements to ensure that they are up to date every 5 years or after major capability upgrades, changes to allowances, or other changes. After the manpower requirements are determined for a ship, the Navy mans the ship by filling the required positions to the extent that the number and type of positions are funded, and the trained and qualified personnel are available to fill them. After the budgeting and sailor distribution process, a ship's manning level may be lower than the manpower level that the manpower requirements process has determined was needed. The process by which manpower requirements are determined for shore-based personnel is described in appendix II. The Navy has tried several ways to reduce the size of ship crews in order to reduce costs. The optimal manning initiative, introduced as a pilot program on a cruiser and destroyer in 2001 and implemented fleet-wide on other surface and amphibious ships beginning in 2003, was intended to improve efficiency. Initially, optimal manning levels were often derived by changing watchstanding requirements. As an example, the number of watchstanders required to serve as battle station phone operators and stretcher bearers was reduced, and, as a result, 10 positions were removed from ships with these positions. Other watchstations were consolidated or eliminated. Between 2003 and 2007, the Navy transferred some administrative workload from ship to shore personnel, which further reduced the size of ship crews. This corresponding effort, known as Pay and Personnel Ashore, had the effect of moving two-thirds of the personnel specialist positions responsible for these administrative functions from ship crews to shore support units. To further drive down ship crew sizes, the Navy changed workload assumptions and the equation used to determine manpower requirements in 2002. For example, it increased the Navy standard workweek from 67 to 70 productive hours per sailor, which further reduced shipboard manning by up to 4 percent. A time line of reduced manning initiatives that were implemented from 2001 to 2016 is included in figure 2.
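The approximate size of that manning reduction follows from the way the workweek enters the calculation: for a fixed workload, the required crew scales inversely with the productive hours available per sailor. A minimal illustration in Python follows; the weekly workload figure is hypothetical.

```python
# For a fixed weekly workload, required crew scales inversely with productive
# hours per sailor. The 67- and 70-hour values are the pre- and post-2002
# Navy standard workweek figures; the workload total is hypothetical.
weekly_workload_hours = 1000

crew_at_67 = weekly_workload_hours / 67
crew_at_70 = weekly_workload_hours / 70
reduction = 1 - crew_at_70 / crew_at_67
print(f"Implied manning reduction: {reduction:.1%}")  # roughly 4 percent
```

The simple ratio gives a reduction of roughly 4 percent; the exact effect on any one ship depends on how fractional positions are rounded and on which functions the workweek change applies to.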
In addition to reducing crew sizes on legacy ships through the means described, the Navy also designed its newest ship classes to operate with smaller crew sizes, relying on new technologies, automation, and shore support to enable these reductions. Profiles of new ship classes designed to operate with reduced crew sizes are included in appendixes III, IV, V, and VI. As noted in the Navy's 2010 Fleet Review Panel report, a primary lesson of the optimal manning period is that using unvalidated assumptions to reduce crews contributed to the erosion of the material condition of the fleet. In response to these findings, the Navy has partially restored crew sizes on its legacy ships and has increased the size of shore units to better support its ships (see app. VII for information on shore support personnel). In addition, the Navy took several other steps to address the declining material condition of the surface fleet, such as the following:

Establishing the Surface Maintenance Engineering Planning Program in 2010 to provide centralized life-cycle maintenance engineering for surface ships, maintenance and modernization planning, and management of maintenance strategies. The Navy also established the Commander, Navy Regional Maintenance Centers (CNRMC) in 2010, to coordinate the depot- and intermediate-level maintenance of its surface fleet. The goal of these efforts is to improve the material condition and readiness of the surface fleet and to adhere to a more disciplined deployment and maintenance schedule. Navy officials told us that, as a result of these initiatives, the Navy has developed a better understanding of its ships' material condition and maintenance needs, and maintenance requirements have generally increased.

Creating the Surface and Expeditionary Warfare Training Committee in 2013, which is to inform leadership of surface manpower and training investments, resourcing, acquisition, and execution. Officials said that program offices for new ships are now required to annually update manpower estimates and adjust manpower requirements based on lessons learned.

Introducing a revised operational schedule known as the Optimized Fleet Response Plan in 2014, which was intended, among other things, to provide for the predictable scheduling of ship maintenance tasks and ensure that ship crews were manned with a sufficient number of sailors with the right qualifications.

Ship operating and support costs—the total cost of operating, maintaining, and supporting a ship, including personnel, operations, maintenance, sustainment, and modernization—increased during the optimal manning period and have continued to increase for most ship classes, in part because increases in maintenance costs offset reductions in personnel costs. Since the end of the optimal manning period around 2010, the Navy has partially restored crew sizes, and personnel costs have increased for all ship classes. In addition, maintenance costs have increased for some ship classes and decreased for others, although maintenance costs are still above pre–optimal manning levels for all ship classes. Navy officials attributed maintenance cost increases to reduced crews, longer deployments, and other factors. Maintenance backlogs also increased during the optimal manning period for the same reasons and have continued to grow for most ship classes.
During the optimal manning period—which varied among ship classes but generally was around fiscal years 2004 to 2010—the Navy reduced average crew sizes, as shown in figure 3, resulting in reductions in personnel costs. Since the end of optimal manning, the Navy has increased crew sizes, leading to increases in associated personnel costs. However, the crews and associated personnel costs for all ship classes—with the exception of dock landing ships (LSD 41/49–classes)—are still smaller than they were before the optimal manning initiative, in part because the Navy has retained the longer 70-hour workweek component for productive work that it had adopted during the optimal manning period, which results in a requirement for fewer crew members. Our analysis found that, at the same time that the Navy reduced crew sizes and personnel costs, average maintenance costs per ship increased for all ship classes. These increases more than offset the decreases in personnel costs that were achieved during the optimal manning period. Since the Navy ended the optimal manning initiative, the change in maintenance costs has varied; maintenance costs continued to increase for cruisers and destroyers, but have decreased for aircraft carriers, amphibious assault ships, and dock landing ships. In all cases, maintenance costs are above pre–optimal manning levels, as shown in figure 4. Further, our analysis found that overall operating and support costs increased for all classes during the optimal manning period and have continued to increase for most ship classes since optimal manning ended. This increase was driven in part by increases in maintenance costs offsetting decreases in personnel costs. Navy officials acknowledged that the reduced crew sizes during the optimal manning period, along with reductions in shore support, may have yielded short-term cost savings, but also increased maintenance costs over the longer term, in part because reduced crew sizes resulted in maintenance being deferred, which developed into more costly issues that had to be addressed later. Navy officials also attributed increases in maintenance costs to increased deployment lengths, increased reliance on contractors to perform maintenance, and some class-specific maintenance and modernization efforts. Other factors, such as the age of a ship, may also affect maintenance costs. Our analysis does not isolate the effects of these factors from the effect of the optimal manning initiative. Navy officials told us that shifts from organizational- and intermediate-level to depot-level maintenance increased overall maintenance costs. As noted above, this change occurred in part because reduced crew sizes resulted in minor maintenance being deferred, which developed into more costly issues that had to be addressed later at the depot level. Our analysis of Navy maintenance costs found that intermediate-level maintenance costs increased for most classes during the optimal manning period, and depot-level maintenance costs increased for all classes, as shown in figures 5 and 6. Depot maintenance costs have continued to increase for most classes and are above pre–optimal manning levels for all classes as of fiscal year 2015. Navy officials also acknowledged that reduced manning is enabled by an increased reliance on outside entities, such as contractors, to perform maintenance.
Our analysis found that the cost of maintenance performed by contractors and in private shipyards increased for all ship classes during the optimal manning period and has continued to increase for most ship classes since crew sizes were restored. However, increases in contractor costs have been driven primarily by the increase in depot-level maintenance. The Navy generally contracts with private shipyards and other firms for the repair, maintenance, and modernization of nonnuclear surface ships. Private shipyards and contractors focus largely on depot-level maintenance and are responsible for most of it. As a result, increases in depot-level maintenance have driven increases in contractor maintenance costs. Contractor maintenance costs at the intermediate level decreased for most classes during the optimal manning period and have continued to decrease for most classes in the period since. The Navy's 2010 Fleet Review Panel found that reduced manning prevented ship crews from performing the minimum required level of preventive maintenance, resulting in a growing maintenance backlog—a measure of the deferred maintenance for a particular ship—as well as increased equipment malfunctions (i.e., casualty reports). Navy officials have also acknowledged that the reduced crew sizes during the optimal manning period, along with increased deployment lengths, contributed to decreases in the material condition and readiness of ships. Our analysis of Navy maintenance backlog data found that backlogs increased for all ship classes during the optimal manning period, as shown in figure 7. While increases in backlogs were occurring before the optimal manning initiative, these increases accelerated during the optimal manning period for most ship classes. Since optimal manning ended, backlogs have continued to increase for most ship classes, but the rate of increase has slowed for most classes. Although Navy officials told us that reductions in manning can affect maintenance backlogs, they have not quantified the magnitude of that relationship. The Fleet Review Panel also noted that casualty reports increased during the optimal manning initiative. Our previous work found that casualty reports continued to increase following the end of optimal manning. In 2015, we found that casualty reports had nearly doubled for cruisers, destroyers, and amphibious ships between January 2009 and July 2014. According to Navy officials, their initiatives to improve ship material condition are beginning to make progress, and Navy documentation we reviewed shows that the numbers of surface ship casualty reports decreased between July 2014 and December 2016. Another measure of ship material readiness, the Board of Inspection and Survey's Figure of Merit scores, has generally improved since the Navy ended the optimal manning initiative. The Navy has updated some of the factors it uses to determine the manpower required on its ships, but its process does not fully account for all ship workload. The Navy continues to use the workweek standard adopted during the optimal manning period, which increased the hours for productive work from 67 to 70 hours a week. This change was part of what enabled the Navy to reduce crew sizes. However, a 2014 Navy study indicated that this standard may be outdated.
Although the Navy has updated some manpower factors, its guidance on total force manpower policies and procedures, Office of the Chief of Naval Operations (OPNAV) Instruction 1000.16L, does not require that these factors be reassessed to ensure that they remain current and that ship crews are sized appropriately. Further, the Navy's manpower requirements process does not account for growing in-port workload, which is distributed among fewer crew members than when ships are at sea. Since it ended the optimal manning initiative, the Navy has updated or is in the process of updating several of the factors and allowances it uses to determine manpower requirements on all ships, but it has not updated the standard workweek. In 2012, the Navy Manpower Analysis Center studied the "make ready / put away" allowance, which accounts for the time needed to prepare for and close out a maintenance activity. The center recommended increasing the allowance from 15 percent to 30 percent of the total preventive maintenance man hours on a ship, and the Navy began implementing this change in 2013. Navy manpower officials found that, over the years, changes to regulations, instructions, and basic safety requirements had increased the time it takes for sailors to perform duties associated with this allowance. In addition, the Office of the Chief of Naval Operations directed Navy manpower officials to update the productivity allowance, which accounts for delays arising from fatigue and work interruptions, among other factors. They increased the allowance from a range between 2 and 8 percent of productive work requirements to a range between 2 and 20 percent for selected ship classes. This change to the productivity allowance accounts for a new measure of mental fatigue associated with monitoring technology. The Navy is examining other factors—the corrective maintenance allowance, ship aging factors, and its pay-grade distribution model. Table 1 shows the status of the factors in the Navy manpower requirements model. Although the Navy has updated several of its manpower factors, it has not made any changes to the standard workweek that it adopted during the optimal manning period. In 2002, the Navy changed the portion of the standard workweek allocated for sailors to perform productive work, which is used, in part, to determine manpower requirements and calculate the size of the crew. By increasing the time allotted for productive work in a standard workweek, the Navy reduced the number of personnel on its surface and amphibious ships. In 2010, we found that the Navy had adjusted the workweek without sufficient analysis, and we recommended that it reassess the standard workweek to ensure that the Navy was appropriately sizing ship crews. The Department of Defense (DOD) agreed with our recommendation. In 2014, the Navy conducted a study of the standard workweek and identified significant issues that could negatively affect a crew's capabilities to accomplish tasks and maintain the material readiness of ships, as well as crew safety issues that might result if crews sleep less to accommodate unaccounted-for workload. The Navy study found that sailors were on duty 108 hours a week, exceeding their weekly on-duty allocation of 81 hours. This on-duty time included 90 hours of productive work—20 hours per week more than the 70 hours allotted in the standard workweek.
This, in turn, reduced the time available for rest and resulted in sailors spending less time sleeping than was allotted, a situation that the study noted could encourage a poor safety culture. Figure 8 shows how sailors actually spent their time compared to the time allotted for each component in the Navy standard workweek, as reported in the Navy's 2014 study. An example of work that is not accurately accounted for in the workweek is time spent by experienced personnel providing on-the-job training or time spent by new arrivals receiving this training. Navy manpower calculations do not include on-the-job training, and it is not accounted for in the 7 hours allocated for training in the standard workweek. Navy officials and crew members we interviewed told us that sailors often arrive at their assigned ships without adequate skills and experience. Crew members in 10 of the 12 crew interviews we conducted told us that more experienced sailors routinely provide on-the-job training for less experienced sailors, so the time spent doing this must come out of sleep, personal time, or other allotted work time. In addition, Navy officials said that the time allocated for administrative and other duties should be greater, because it does not account for all of a sailor's collateral duties. Similarly, the 2014 Navy study concluded, among other things, that the Navy lacked support for the time needed for some workweek components, and recommended that they be better supported by documentation. However, as of February 2017, the Navy had not taken action to validate the standard workweek, as we and its own study had recommended. Navy officials said that they had not taken any action in response to the 2014 study's recommendations because the study's narrow scope of three ships limited its applicability across the fleet. OPNAV Instruction 1000.16L specifies the total time available to accomplish the required workload, which is a key element in the calculation of manpower requirements. According to the Navy instruction, the process for determining the manpower necessary to perform the required workload is to be based on a validated and justifiable technique; that is, it should be analytically based. Without an analytically based standard workweek that accounts for all of the work that a sailor is expected to do, the Navy runs the risk of negatively affecting the condition of the ship, overworking sailors, and adversely affecting morale, retention, and safety. The Navy instruction does not require the factors used to develop ship manpower requirements to be reassessed periodically or when conditions change. Because of this absence, inaccurate factors can persist in the development of manpower requirements. Factors and allowances are used to calculate manpower requirements; thus, if these factors are inaccurate, the resulting manpower requirement will be inaccurate. Our prior work found that the changes the Navy made to several of these factors in 2002 were not substantiated with analysis. As a result, the Navy was using these unsubstantiated factors for at least a decade without reassessment, leading it to underestimate its manpower requirement and underman its ships, and the Navy found that reductions in crew sizes over the optimal manning period adversely affected ship condition.
Prior to recent reassessments of the make ready / put away allowance and productivity allowance, some factors had not been reassessed and updated in decades—even though there had been changes to how the Navy trains, operates, and uses technology that affected the validity of these factors. Navy officials told us that part of the reason they had not reassessed the factors until directed to do so is that the relevant Navy instruction does not require that they be reassessed periodically or when conditions change, and they explained that having up-to-date factors would be useful to ensure that sailor workload could be accurately captured. Had there been a requirement to reassess these factors, the unsubstantiated changes made to them in 2002 might have been corrected sooner, and some of the negative effects of the resulting undermanned crews could have been curtailed or avoided. Additionally, a reassessment requirement could prevent inaccurate factors like the standard workweek from continuing to be used across the fleet. The Navy estimated in 2017 that if it were to revert to the analytically based standard workweek in effect before 2002, more than 1,200 additional sailors would be required across the surface fleet. A memorandum from the Under Secretary of Defense for Personnel and Readiness states that, when developing strategic manpower plans, manpower officials shall assess how changes to roles, missions, and management strategies will affect workloads and require a change to manpower, and that manpower officials shall be consulted concerning manpower adjustments, including changes to missions, priorities, and technologies. DOD Directive 1100.4 states that it is DOD policy that new policies shall be evaluated before implementation to determine their effect on manpower and personnel performance. The directive further states that existing policies, procedures, and structures shall be periodically evaluated to ensure efficient and effective use of manpower resources. Unless the OPNAV instruction used by the Navy to develop its manpower requirements requires that the factors be reassessed periodically or when conditions change, the Navy manpower requirements model will not reflect changes in training, technology, or regulations that occur over time and that affect sailor workload. Requiring that these factors be reassessed periodically or when conditions change would help ensure that they are accurate and current, and result in more accurate manpower requirements. Without accurate manpower requirements, the Navy risks having ship crews that are not appropriately sized and composed to carry out missions, maintain ship readiness, and prevent overwork of sailors. OPNAV Instruction 1000.16L calls for measuring only a ship's at-sea workload and not its in-port workload. The Navy has traditionally assumed that at-sea workload is greater. However, we reported in 2010 that in-port workload had increased for a number of reasons, including the addition of new watchstanding requirements for Anti-Terrorism Force Protection. We recommended that the Navy include the relative magnitude of in-port and at-sea workload in its assessment of the underlying assumptions and standards it uses to calculate manpower requirements, and DOD agreed with this recommendation.
During our current review, we found that in-port workload is still not captured in the process and is a persistent problem for crews, who must complete this workload with fewer sailors than when at sea, and whose time is also in demand for addressing other in-port priorities. The Navy has not measured in-port workload and therefore cannot determine the manpower requirements needed to execute this workload. Navy operational capability documents describe the in-port period as the time for the crew to accomplish required maintenance; take maximum advantage of training; and be provided the maximum opportunity for rest, leave, and liberty. Officers and enlisted personnel from all 12 of the crew interviews we conducted told us that sailors were overworked in port. Sailors consistently said that there were fewer crew members in port than during deployment, because sailors were attending training and taking leave, or because the Navy was prioritizing the manning of ships on deployment over ships in port. For example, one ship department had 5 crew members while in port compared with 10 to 12 crew members during deployment, so workload had to be redistributed among the remaining sailors. In addition, sailors from a supply department said that their workloads on the ship were the same when in port and when on deployment, but there were fewer sailors available in port to execute the workloads. As in our 2010 review, crew members cited Anti-Terrorism Force Protection watchstanding requirements as creating additional training and work demands on them, and added that standing these watches in port comes at the expense of their other work. Both officers and enlisted personnel told us that ship crews are stressed and overburdened during in-port periods because they must stand watch and cover the workload of multiple sailors. Crew members told us that when they returned from deployment, this additional workload placed a strain on them and their families, affecting crew member morale. During the course of our review, in December 2016, Navy manpower officials began a study on the nature and amount of in-port workload; they are scheduled to complete this study in July 2017. The Navy directed the in-port workload study to inform development of its new training initiative, known as Ready Relevant Learning, which is to begin implementation in fiscal year 2017. However, Navy officials are still uncertain how this new approach to training will be managed, and officials have expressed concerns about its potential effects on in-port workload and the effects of having sailors who are not fully trained arrive for duty on their assigned ships. Although the Navy is currently in the process of measuring in-port workload, officials said that there are no efforts planned to use the study results to translate in-port workload into manpower requirements, and that a future determination will be made as to the implementation of any results of the study. OPNAV Instruction 1000.16L requires that the Navy determine at-sea manpower requirements, but does not require the Navy to determine—nor does it have a formal process or protocol to model—in-port manpower requirements. Without identifying the manpower needed to execute in-port workload, the Navy risks overworking its sailors during in-port periods and having this workload executed without the appropriate number and mix of sailors, which in turn may affect ship readiness, safety, and sailor morale.
Moving forward, the Navy will likely face manning challenges, especially given its current difficulty in filling authorized positions, as it seeks to increase the size of its fleet by as much as 30 percent over its current size. Moreover, new ship classes now being introduced sometimes require more personnel than originally estimated as the Navy gains experience with the ships. Navy officials stated that even with manpower requirements that accurately capture all workload, the Navy will be challenged to fund these positions and fill them with adequately trained sailors at current personnel levels. Even with the reduced personnel authorized since optimal manning, the Navy has had difficulty filling authorized personnel slots, called "billets" in the Navy. The Navy's commands responsible for manning, equipping, and training the surface fleet have cited the lack of personnel available to be distributed to ships as their primary challenge. Unfilled positions on ship crews and in shore support positions result in workload that must be redistributed among the remaining crew and also represent skills and abilities that are absent from a crew, exacerbating the risks associated with smaller authorized crews. Officials said that it is not uncommon for billets to remain unfilled for 6 months or more and that shore commands are more likely to experience such "gapped billets" for even longer periods. A 2014 Naval Audit Service report examined critical gapped billets, based on a concern that shortfalls among senior enlisted personnel made it impossible to meet shipboard manning requirements. The report found that the Navy has taken actions to reduce gapped billets, but the issue persists: gapped billets remain, sailors may be required to work longer hours to compensate for them, and junior sailors may not be receiving needed supervision. The report concluded, among other things, that unless the Navy increases enlisted personnel, recurring gaps will not be corrected. Given the continued demand for ships to support combatant commanders, the Navy plans to increase its fleet from 274 ships (as of March 7, 2017) to 308 ships by 2021. As of March 2017, the Navy had an end strength of 323,197 active-duty personnel. According to the Navy, this number is expected to remain largely flat through 2021, even though an increasing number of ships are entering the fleet. Navy officials have expressed concern about the growing gap between end strength and ship numbers, and said that the Navy would have to increase its end strength in order to adequately man its ships. Figure 9 shows the Navy's projected end strength and fleet size. The Navy has also identified the need for an even larger fleet, which would add to personnel needs and costs. Specifically, the Navy released an updated Force Structure Assessment in late 2016 that called for a 355-ship fleet to meet global threats—a 15 percent increase from the previous 308-ship goal and a 30 percent increase from the size of its current fleet. In a February 2017 report, the Congressional Research Service estimated the additional shipbuilding costs that would be needed over a 30-year period based on the Force Structure Assessment, but added that these additional shipbuilding funds are only part of what would be needed to achieve and maintain a 355-ship fleet instead of a 308-ship fleet.
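The fleet-size percentage increases cited above follow from simple ratios; a quick check in Python, using the ship counts given in this report:

```python
# Fleet counts from the report: current fleet, previous goal, and the 2016
# Force Structure Assessment goal.
current, previous_goal, fsa_goal = 274, 308, 355

print(f"Increase over previous 308-ship goal: {fsa_goal / previous_goal - 1:.0%}")  # 15%
print(f"Increase over current fleet: {fsa_goal / current - 1:.0%}")                 # 30%
```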
According to DOD, operating and support costs—which include personnel and maintenance costs—have traditionally constituted about 70 percent of a ship's total life-cycle costs. Our analysis has shown that personnel costs were the largest share of total operating and support costs for surface ship classes between fiscal years 2000 and 2015 (see app. VIII for total ship operating and support costs). The underlying cause for this apparent ship–personnel mismatch is that the Navy is seeking to grow its fleet but is not fully assessing the personnel implications of the growth. Navy officials told us that the Navy mans its ships and all other positions within its approved end strength, but has not determined the number or cost of personnel needed to man the increasing number of ships or made concrete plans for adding the needed personnel. The personnel needs will be significant. The Congressional Research Service estimated that about 15,000 additional sailors and aviation personnel might be needed to man the 47 additional ships above the previous 308-ship plan. Plans to grow the fleet further to 355 ships—and our findings that manpower validation processes are based on questionable assumptions that likely understate personnel needs—could further exacerbate the mismatch. However, the Navy has not fully assessed whether the service will need increased end strength and, if so, how much. Navy officials told us that if overall Navy end strength is not increased, the billets would likely have to be taken from other organizations as new ships are delivered, potentially perpetuating the gapped-billets challenge. Our prior work has shown that identifying needed resources and investments is a key characteristic that helps to establish a comprehensive, results-oriented management framework to guide implementation of plans and strategies. This activity includes identifying what a strategy will cost and the sources and types of resources and investments associated with the strategy. According to Navy officials, in order to compensate for the lack of distributable personnel who would be needed to fill all manpower requirements within the current end strength, they currently prioritize which positions to fill and which to keep unfilled in order to maintain a permissible level of risk and readiness in the surface fleet. As the Navy continues to update ship manpower requirements based on recent changes to the factors and allowances used to calculate them, these requirements are likely to increase. Already-strained manpower resources will be even more stressed as the Navy commissions increasing numbers of ships without a commensurate increase in personnel. Unless it updates its manpower factors and requirements, and identifies the personnel cost implications associated with any planned increases in the fleet size, the Navy will not be positioned to accurately articulate its personnel needs, whether internally within DOD or externally to Congress. In addition to using the outdated standard workweek and not accounting for in-port workload, the Navy developed estimates of manpower requirements and crew size targets for its new ships based on assumptions that technologies would enable smaller crews. However, crew sizes on most new ship classes have grown over time as anticipated workload reductions from new technologies have not materialized and the Navy gains more experience operating the new ships.
These technologies include networks that integrate ship systems to allow for remote monitoring, redesigned propulsion systems on some ships, and extensive use of automation to relieve crews of some manual work; however, some of these technologies are still not fully developed, tested, or fielded and remain immature. As a result, crew sizes have grown to allow sailors to do this manual work. For example, crew sizes for the Littoral Combat Ship (LCS), Zumwalt-class destroyer (DDG 1000), and San Antonio–class Amphibious Transport Dock (LPD 17) have increased since these ships entered service, as shown in table 2, and LCS and DDG 1000 have reached the upper limits for crew size as laid out in their acquisition strategies. Navy officials acknowledged that LCS and DDG 1000 crew sizes have grown due to the inadequacy of the original manpower assumptions coupled with additional mission requirements to support ship operations. The new Ford-class aircraft carrier (CVN 78) has not yet entered service, and its crew size so far remains within the Navy's targets—currently 663 sailors below that of legacy Nimitz-class carriers. However, some planned features of the ship that were expected to reduce workload have been canceled, and delays in developing and testing some of the new technologies on the ship create unknowns about their ability to enable a smaller crew. See appendixes III, IV, V, and VI for specific information on each new ship class. The LCS program illustrates how crew size can grow over time as the Navy gains operational experience with the ship class and its new technology. The Navy originally designed and built these ships to accommodate a total crew size of 75, but gradually increased the ships' crews as it gained more experience operating them, and has since had to redesign the ships to accommodate 98 sailors—a 31 percent increase. As of March 2017, three of the Navy's nine LCS ships have been deployed overseas. Automation and the use of condition-based maintenance have not decreased workload as they were intended to do, and the unreliability of shipboard systems has led to major equipment failures and unanticipated corrective maintenance. Officers and enlisted crew members told us that the LCS's minimally sized crews are challenged to complete their workload. In 2014, we found that the LCS program had a number of manning challenges and that without validating the crew size and composition for all LCS crews and without accounting for the full scope and distribution of work performed by sailors across all ship departments, the Navy risked that crew fatigue would exceed Navy standards and could negatively affect crew members' performance as well as morale, retention, safety, and ultimately the operational readiness of the ship class. In response to LCS manning and other challenges, the Navy conducted a program review in 2016 and announced changes to the ships' crewing and other operational concepts that are now being implemented across the program. LCS officials told us that some of the program changes are meant to alleviate the heavy workload of LCS sailors. Specifically, the Navy has formed LCS maintenance execution teams to assist with heavy in-port workload, build organic expertise, decrease dependence on maintenance contractors, and serve as a pool of qualified sailors who can fill in for unplanned losses in LCS crews.
Officials responsible for implementing program changes told us that they are in the process of determining the composition of LCS integrated crews, and are using all available inputs and information to determine the best mix of sailors. However, Navy manpower officials have yet to validate these changes to the LCS crewing concept, and delays in LCS mission module development and testing prevent them from validating the needed crew size and composition while the modules remain immature. Navy officials told us they validate manpower requirements for new ship classes after testing is complete and the first ship of the class has been deployed. Most new ship classes have unvalidated manpower requirements due to lack of operational experience or system immaturity. Table 3 summarizes the status of manpower requirement validation for new ship classes with reduced crews. As noted above, crew sizes on three of the Navy's four new ship classes have grown partly because the technologies in use have not led to the expected reductions in workload. In the case of LCS and LPD 17, the lack of physical space limits the ability of the crews to grow further without significant redesign of ship interiors. The DDG 1000 crew has reached its upper crew size target, but program officials have said that the ship could accommodate additional sailors as the ship gains more operational experience—if it is determined that they are necessary. CVN 78 crews may also grow until technologies meant to reduce workload and crew sizes mature. Until technologies on new ships are mature and demonstrate their ability to decrease workload, crew sizes on new ships may continue to grow, placing further pressure on the Navy's resources. During the optimal manning period of the early 2000s, the Navy made changes to its manpower requirements process that were intended to drive down crew sizes and thus save on personnel costs. However, these changes were not substantiated with analysis. The result was that with fewer sailors operating and maintaining surface ships, the material condition of the ships declined, and this effect ultimately contributed to increased overall operating and support costs. The Navy has reassessed and reversed some of the changes it made during the optimal manning period, but it continues to use a workweek standard that does not reflect the actual time sailors spend working, and the Navy still does not account for in-port workload—both of which may be leading to sailors being overworked and creating a readiness and safety risk. In addition, the Navy's guidance does not require that the factors used to calculate manpower requirements be reassessed periodically or when conditions change to ensure that these factors remain valid and crews are appropriately sized. A requirement to reassess these factors would help ensure that they stay current and analytically based, and would provide the Navy a sound basis for its manpower requirements. Looking to the future, the Navy plans to grow its fleet by as much as 30 percent but has not determined how many personnel will be needed to man the larger fleet or what these personnel will cost. As the number of ships increases—and if crew sizes continue to grow on new ship classes—the Navy will be challenged to distribute its sailors across the fleet without an increase in personnel.
Unless it identifies the personnel needs and costs associated with a larger fleet size, the Navy runs the risk of buying ships that it cannot fully man, potentially repeating the mistakes associated with the optimal manning period and resulting in degraded surface fleet readiness and increased maintenance costs. To ensure that the Navy's manpower requirements are current and analytically based and will meet the needs of the existing and future surface fleet, we recommend that the Under Secretary of Defense for Personnel and Readiness direct the Secretary of the Navy to have the Navy take the following four actions: conduct a comprehensive reassessment of the Navy standard workweek and make any necessary adjustments; update guidance to require examination of in-port workload and identify the manpower necessary to execute in-port workload for all surface ship classes; develop criteria and update guidance for reassessing the factors used to calculate manpower requirements periodically or when conditions change; and identify personnel needs and costs associated with the planned larger Navy fleet size, including consideration of the updated manpower factors and requirements. We provided a draft of this report to DOD for review and comment. In its comments, reproduced in appendix IX, DOD concurred with our recommendations, citing its commitment to ensuring that the Navy's manpower requirements are current and analytically based and will meet the needs of the existing and future surface fleet. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix X. To describe trends in Navy crew sizes and operating and support costs on its legacy ships, we analyzed annual data from fiscal years 2000 through 2015 (the most current data available at the time of our review) from the Navy's Visibility and Management of Operating and Support Costs (VAMOSC) system. We included all classes and flights of surface ships that were (1) in service during the optimal manning period, (2) subject to crew size reductions during that period, and (3) still in service as of fiscal year 2015. The following ship classes and flights were included in our analysis: Nimitz-class (CVN 68) Aircraft Carriers; Arleigh Burke–class (DDG 51) Destroyers (including Flights I, II, and IIA); Ticonderoga-class (CG 47) Cruisers; Wasp-class (LHD 1) Amphibious Assault Ships; and Whidbey Island– (LSD 41) and Harpers Ferry–class (LSD 49) Dock Landing Ships. As noted in our report, the years of the optimal manning period varied among ship classes. To determine the optimal manning period for each class, we analyzed Navy documentation and data on crew levels for each ship class.
Based on this analysis, we defined the optimal manning period as the following for each class, and used these years in our analyses of changes during the optimal manning period:
Nimitz-class (CVN 68) Aircraft Carriers: fiscal years 2005–2012;
Arleigh Burke–class (DDG 51) Destroyers (including Flights I, II, and IIA): fiscal years 2004–2010;
Ticonderoga-class (CG 47) Cruisers: fiscal years 2003–2010;
Wasp-class (LHD 1) Amphibious Assault Ships: fiscal years 2005–; and
Whidbey Island– (LSD 41) and Harpers Ferry–class (LSD 49) Dock Landing Ships: fiscal years 2006–2010.
For our analysis, we used the following elements in the VAMOSC database:
Crew size: "Number of Personnel—Navy."
Total operating and support costs: all cost elements.
Personnel costs: all cost elements within "Unit-Level Manpower."
Maintenance costs: all cost elements within "Maintenance," comprising the following:
Organizational-level maintenance: "Consumable Materials and Repair Parts" and "Depot Level Repairables."
Intermediate-level maintenance: all cost elements within "Intermediate Maintenance."
Depot-level maintenance: all cost elements within "Depot Maintenance."
Maintenance performed by private shipyards and contractors: "Intermediate-Level Contractor Maintenance," all cost elements for private shipyards within "CNO-Scheduled Depot Maintenance," and all cost elements for private shipyards within "Fleet Depot Maintenance."
Other operating and support costs: all cost elements within "Unit Operations," "Sustaining Support," and "Continuing System Improvements."
We reviewed trends in these elements for each ship class in our scope. We also calculated the change in each element during and since the optimal manning period, as well as the total change since the beginning of the optimal manning period, for each ship class in our scope, as described below and illustrated in the sketch that follows this discussion:
change during optimal manning period is calculated as the change in dollars and percent between the pre–optimal manning level and the last year of optimal manning for a ship class;
change since optimal manning period is calculated as the change in dollars and percent between the last year of optimal manning for a ship class and fiscal year 2015; and
total change since start of optimal manning period is calculated as the change in dollars and percent between the pre–optimal manning level for a ship class and fiscal year 2015.
To describe maintenance and ship material condition trends, we analyzed maintenance backlog, casualty report, and inspection result data from 2000 through 2015, as specified below:
For maintenance backlog data, we requested data on the number of maintenance backlog items for each ship in our scope and calculated class and type averages of the number of maintenance backlog items as of September 30 of each fiscal year. To analyze the change in maintenance backlogs in the pre–optimal manning, optimal manning, and post–optimal manning periods, we compared the average annual rate of change in the number of backlog items for each ship type during each period, as defined above for each ship class.
For casualty report data, we reviewed Navy reports and other documentation, as well as our prior work, which reported on trends in casualty reports during the optimal manning and post–optimal manning periods.
For inspection report data, we compared average scores for the Board of Inspection and Survey's Figure of Merit for the optimal manning and post–optimal manning periods for each ship type.
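The three change calculations above reduce to simple difference and percentage computations. The following minimal sketch, in Python, illustrates them for a hypothetical ship class; the fiscal years and dollar values are invented for illustration and are not drawn from VAMOSC.

```python
# Illustrative sketch of the three change calculations defined above,
# applied to a hypothetical cost series. The fiscal years and dollar
# values below are placeholders, not actual VAMOSC data.

def change(start, end):
    """Return the change in dollars and in percent between two values."""
    return end - start, (end - start) / start * 100.0

# Hypothetical total operating and support costs (millions of dollars)
# for a ship class whose optimal manning period ran from fiscal years
# 2004 through 2010.
costs = {
    2003: 100.0,  # pre-optimal manning level
    2010: 115.0,  # last year of optimal manning
    2015: 140.0,  # most recent year in our scope
}

metrics = {
    "Change during optimal manning": change(costs[2003], costs[2010]),
    "Change since optimal manning": change(costs[2010], costs[2015]),
    "Total change since start": change(costs[2003], costs[2015]),
}

for label, (dollars, pct) in metrics.items():
    print(f"{label}: {dollars:+.1f} million ({pct:+.1f} percent)")
```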
According to board officials, changes to the inspection criteria in 2003 resulted in an increase in scores from 2004 onward. As a result, we did not compare Figure of Merit scores from before the optimal manning period to those during the optimal manning period. We assessed the reliability of the Navy's VAMOSC, maintenance backlog, casualty report, and inspection report data and found them to be reliable for the purposes of describing trends and making comparisons over time in ship crews, operating and support costs, shore support personnel, and material conditions. Specifically, we reviewed prior GAO reports making use of these data, interviewed Navy officials with knowledge of the data, and reviewed documentation on the data and related systems. Where possible, we also corroborated the data with other data sources. To analyze trends in shore support personnel, we requested that officials from the Office of the Chief of Naval Operations' Expeditionary Warfare (N95), Surface Warfare (N96), and Air Warfare (N98) directorates identify those shore support units that provided support specific to amphibious ships, surface combatants, and aircraft carriers. We then analyzed data from VAMOSC on trends in the number of full-time-equivalent military personnel assigned to these units from fiscal years 2002 to 2015. As part of this analysis, we also analyzed trends in units responsible for training as well as trends in units that are associated with Navy regional maintenance centers. To assess the extent to which the Navy's manpower requirements process fully accounts for ship workload, we examined the factors and assumptions used in determining crew sizes for surface and amphibious ships, and we analyzed various Navy documents and instructions related to crew size determination, including Office of the Chief of Naval Operations Instruction 1000.16L, Navy Total Force Manpower Policies and Procedures, to identify the steps required in the Navy's process. Furthermore, we reviewed prior GAO work on shipboard and shore-based manpower requirements determination, as well as previous Navy studies on the process, including on the sufficiency of its factors. We also interviewed Navy officials to discuss their process for determining manpower requirements, changes to the process (including its factors and allowances) since the end of optimal manning, current studies under way, and the status of the newest ship classes. We also conducted group discussions with crews from six ships, holding separate discussions with officers and enlisted personnel from each ship for a total of 12 group discussions. We met with crews from two destroyers, two amphibious transport dock ships, and both variants of the littoral combat ship (LCS) to discuss crew size, composition, and workload. We selected these ship classes for their years of operational experience as well as their representation of ships subject to different reduced manning initiatives: (1) the optimal manning initiative (DDG 51), (2) the minimal manning construct (LCS), and (3) reduced crew size targets relative to their predecessor ship classes (LPD 17). Specifically, we visited ship classes homeported in both the Pacific and Atlantic Fleets, which included the USS Higgins (DDG 76), USS Bainbridge (DDG 96), LCS Crew 101, LCS Crew 203, USS Anchorage (LPD 23), and USS Arlington (LPD 24).
For each visit, we requested to speak with a cross section of personnel from each ship department and carried out group discussions with the officers and enlisted personnel available. We interviewed officials or obtained documentation at the following locations:
Office of the Secretary of Defense
Cost Assessment and Program Evaluation
Defense Manpower Data Center
Office of the Chief of Naval Operations
Force Manpower and Assessments Branch
U.S. Fleet Forces Command
Commander, Naval Surface Force, U.S. Atlantic Fleet
Command Manpower Analysis Team
Board of Inspection and Survey
Commander, Naval Surface Force, U.S. Pacific Fleet
Commander, Littoral Combat Ship Squadron One
Naval Sea Systems Command
Cost Engineering and Industrial Analysis Division
Program Executive Office Aircraft Carriers
Program Executive Office Littoral Combat Ships
Program Executive Office Ships
Surface Maintenance Engineering Planning Program
Commander, Navy Regional Maintenance Center
Naval Center for Cost Analysis
Naval Education and Training Command
Bureau of Naval Personnel
Navy Manpower Analysis Center
To determine the challenges, if any, for manning the surface fleet and the implications for the future, we analyzed the Navy's 2017 30-year Shipbuilding Plan, 2016 Force Structure Assessment, and 2017 Department of the Navy budget. We also reviewed and analyzed reports on manpower and manning by the Center for Naval Analyses, Naval Audit Service, Congressional Research Service, and GAO. We also analyzed acquisition, manpower, and operational documents to determine the crew size goals and current crew sizes for new ship classes. We interviewed program and other Navy officials to discuss the status of new technologies, manning challenges, and crew size growth on new ships. We also interviewed Navy officials and ship crews to discuss fleet-wide manpower and manning challenges. We conducted this performance audit from March 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Navy has a decentralized process for determining its shore support manpower requirements. Each of the 20 major shore commands is the primary agent for determining and approving the scope of its activities, whether those are personnel, training, and maintenance functions or activities like research and development. The major shore commands cover multiple warfighting enterprises and providers, such as U.S. Fleet Forces Command, U.S. Pacific Fleet, Naval Sea Systems Command, and others. This process is illustrated in figure 10. The major shore command or activity writes the mission, functions, and tasks (MFT) statement, which is the primary document for identifying the shore activity's workload. Each major shore command provides its own analysts with training to conduct manpower reviews for various activities. These analysts draw from the MFT statement to develop a performance work statement that identifies the work to be done. In determining manpower requirements, analysts also consider maintenance requirements and staffing standards, among other factors.
In contrast to shipboard manpower requirements, which are determined using a model, each shore command establishes its own procedures and methodology to determine and validate manpower requirements due to the variations among the commands' missions and workload. These procedures and methodologies are to be analytically based, drawing on industrial engineering studies, mathematical models, and best business practices, among other approaches. Major shore commands develop manpower requirements for peacetime and wartime scenarios separately, since the workload and thus the manpower requirements could vary between them. Major shore commands must review manpower requirements on a continuous basis to ensure they support the MFT, and should redetermine manpower requirements after major revisions to the MFT, new equipment changes, technology adjustments to workflow, or other changing conditions. After major shore commands determine and validate their manpower requirements, the positions are filled based on budget and resource allocation decisions. The major shore commands create a Program Objective Memorandum, which informs the service's, the department's, and ultimately the President's budget submission, which is subject to congressional approval. Thus, changes to manpower requirements do not result in immediate changes to shore personnel manning; furthermore, there may be a gap between the validated requirements and shore personnel manning due to funding and personnel inventory as established by annual defense authorization and appropriation acts. In January 2008, responding to a request from the Office of the Chief of Naval Operations (OPNAV), the Center for Naval Analyses identified challenges with the shore manpower determination process, including a lack of standardization among similar activities and issues with staff qualifications, among others, which echoed our previous findings in a 1997 report on Navy personnel. According to Navy officials, OPNAV is chairing a project team to improve the process and deliver revised direction for making shore activity plans and establishing training for determining shore manpower requirements, due in fiscal year 2017. The project team also plans to encourage major shore commands to measure workload using standard methods of analysis for similar activities. In September 2016, Fleet Forces Command launched a pilot training program for analysts of other shore commands, intended to improve the consistency with which the various major shore commands conduct their manpower reviews. In addition, Fleet Forces Command is working with OPNAV to develop a model to predict changes in shore manpower needs.

Ford-Class Aircraft Carrier (CVN 78)

CVN 78 ships are designed to replace Nimitz-class (CVN 68) aircraft carriers and operate with nearly 700 fewer crew members. Each ship in the new class is expected to save $3.7 billion in manpower costs and $2 billion in maintenance costs over its 50-year service life in comparison to Nimitz-class carriers. Technology and automation: Technologies and ship design initiatives are expected to reduce watchstanding workload requirements and the touch labor required for some tasks (e.g., the redesigned nuclear reactor plant is expected to result in a 50 percent manning reduction). Some of these technologies are detailed in table 4. Shore support: CVN 78's projected crew reductions do not depend on transferring maintenance or other work ashore.
Program officials said that manpower reductions have been realized through the above technologies and design efficiencies or through changing work processes. Actual: CVN 78 crew size is currently 2,628, a reduction of 663 sailors below the Nimitz class, and within the threshold of 2,791. Four of CVN 78's 13 critical technologies remain immature, testing continues to reveal issues, and ship delivery has been delayed multiple times since 2014, with expected delivery in 2017. Crew size growth: CVN 78 has not experienced crew size growth but has not yet entered active service. The preliminary manpower requirement set in 2011 and reestablished in 2017 called for 2,628 personnel, which is the current ship's force crew size. Technology schedule delays (e.g., for the advanced weapons elevators and dual band radar) have affected the validation of preventive maintenance or watchstanding assumptions, or both, but Navy officials do not believe they will affect crew levels. Program officials noted that the crew size can grow by an additional 163 positions and still remain within the parameters. However, we have found that because of the lack of operational data on key systems, these ships will likely require additional personnel, and that the aircraft carrier can accommodate only a slight increase in personnel without requiring significant ship redesign. In August 2016, DOD commissioned a review to determine where system dependencies pose risk to CVN 78 performance and to identify alternatives for mitigating those risks. The study concluded that it would be too disruptive to make any technological changes at this point, but noted the ongoing risk. Operating and Support Costs: The Department of Defense (DOD) estimates that it will cost an average of about $391 million per year to operate and support a CVN 78 ship, and calculates the average annual cost for Nimitz-class aircraft carriers to be about $490 million. There are not sufficient cost data available to determine CVN 78 operating and support savings because this ship is not yet in active service.

Zumwalt-Class Destroyer (DDG 1000)

Compared to DDG 51 destroyers, which have an average crew size of 307 sailors, the DDG 1000 class is meant to require fewer crew members to carry out similar functions for watchstations, damage control, combat systems maintenance, and turbines. Technology and automation: The ship's computing environment integrates warfighting and peacetime applications into a single network and enables reduced numbers of watchstanders. Goal: DDG 1000 has manpower key performance parameters for a crew of 125 (objective) and 175 (threshold). According to program officials, the ship's efficient interior layout has also made it possible to reduce crew size. Actual: The ship's current crew size is 175 (this includes a ship's force of 147 and an aviation detachment of 28). Shore support: DDG 1000's shore-side support and maintenance are key elements enabling a significant reduction in crew when compared with previous destroyers. Shore-based maintenance will be accomplished through existing infrastructure such as regional maintenance centers and shipyards, while training, logistics, and administrative assistance will be provided by military, civilian, and contractor personnel from the program office and class squadron. Six of DDG 1000's 11 critical technologies remain immature. The program reports that the ship's design is stable, but ongoing development and shipboard testing of technologies pose a risk of design changes. DDG 1000 was delivered to the Navy in May 2016 and was commissioned in October 2016.
The performance of DDG 1000's critical technologies and how they will be implemented in the fleet will not be known until all combat systems have been installed on the ship and it gains more operational experience. Crew size growth: The preliminary manpower requirement set in 2012 called for a crew of 158, and DDG 1000 has reached its crew size threshold of 175 personnel (crew size growth of 11 percent). According to program officials, the crew size increase was the result of lessons learned, and the crewing approach was developed and updated as the ship design matured and ship construction progressed. Program officials suggested that the crew size parameters should be reassessed and that, if additional personnel are needed on the ship, there is space to accommodate them. Based on the Navy's ship delivery approach, DDG 1000 entered service in 2016 without any of its combat systems tested, and sufficient time has not elapsed to evaluate the impact of the design and new technologies on manpower and costs. According to program officials, due to the crew's initial learning curve for new combat systems, ship commanders may require more crew in the short term. Operating and Support Costs: The Department of Defense estimates that it will cost an average of about $74 million per year to operate and support a DDG 1000, and calculates the average annual cost for DDG 51 ships to be about $33 million, so no cost savings are expected. Since DDG 1000 has only recently entered service, there are not sufficient actual cost data available to compare against the estimate.

Littoral Combat Ship (LCS)

The LCS class consists of two different variants that are expected to replace frigates, mine countermeasures ships, and patrol coastal ships. Each ship is to undertake one of three missions: antisubmarine warfare, surface warfare, or mine countermeasures. The use of minimal manning was meant to lower operating and support costs over the ships' life cycle, compared to legacy frigates, which had a crew size of 215. Technology and automation: The extensive use of automation and the overall design of the ship are meant to reduce manning on LCS. The class relies on a condition-based maintenance system wherein sensors and cameras remotely monitor equipment and spaces, reducing watchstanding requirements and thus crew sizes. Goal: LCS has manpower key performance parameters for a core crew of 15 (objective) and 50 (threshold). Total LCS crew size has grown from 75 in 2003 to 98 personnel in 2016, a 31 percent increase that is detailed in table 5. Following a program review in 2016, LCS core and mission crews were integrated into a 70-sailor unit. There are 98 sailor berths within the ship; additional crew growth is not possible without redesigning interior spaces to accommodate more sailors. Actual: Recent changes to the LCS program have created an integrated core and mission crew of 70 sailors. The total ship's crew, including an aviation detachment and ensigns, is 98. Each ship carries a core crew plus one of three mission crews: antisubmarine warfare (ASW), surface warfare (SUW), or mine countermeasures (MCM). Operating and support costs: The Department of Defense estimates that it will cost an average of about $55 million per year to operate and support an LCS. Although the Navy has described the LCS as a low-cost alternative to other surface ship classes, we found in 2014 that the available data indicate that the costs of the LCS may exceed or closely align with the costs of other multimission surface ships with larger crews.
According to Navy officials, the recent restructuring of LCS operational concepts demonstrates that reducing crew size does not generate the cost savings or cost avoidance that the Navy had anticipated.

San Antonio–Class Amphibious Transport Dock (LPD 17)

These ships are designed to transport Marines and their equipment and allow them to land using helicopters, landing craft, and amphibious vehicles. This class was designed to reduce crew sizes from earlier LPDs but does not rely on new technologies, automation, or shore support to the extent that newer ship classes do. Technology and automation: Some system integration and automation was installed on the ship in part to help reduce watchstanding workload and crew sizes, including a shipwide network that integrates combat, navigation, and other systems. Shore support: The ship class does not rely on shore support for maintenance or other activities more than other amphibious ships do, and not to the extent of newer platforms like the Littoral Combat Ship and DDG 1000. Crew size growth: The preliminary manpower requirement set in 2003 called for a crew of 363, and the ship's current manpower requirement is 378 personnel, a 4 percent increase. The average crew size of its antecedent ship class, LPD 4, was 364 sailors. Since the first ship of the class began construction in 2000, LPD 17 manpower requirements have been adjusted through multiple iterations. The increase in the average crew size was driven by additional manning requirements related to system upgrades, and by manpower studies that identified and subsequently corrected other manning deficiencies. Crew members told us the ships do not have berthing spaces to accommodate additional Navy personnel without infringing upon spaces designated for the Marines who embark with the ship. Operating and Support Costs: Average annual LPD 17 operating and support costs are $42.8 million, compared to $36.4 million for its antecedent ship class, LPD 4, an average annual increase of $6.4 million per ship. GAO reported in 2016 that the Navy plans to build a replacement class of amphibious ships based on the LPD 17 design but with no new critical technologies. The Navy considers the LPD 17 design unaffordable and plans to remove some LPD 17 features from its replacement. The program office would not comment further about the planned replacement, given its competition sensitivity. As the Navy reduced crew sizes aboard its ships as part of optimal manning and related initiatives beginning in 2001, it also reduced shore support positions in units responsible for maintenance and training. However, these positions had been mostly restored as of fiscal year 2015. From a peak in fiscal year 2006, the Navy reduced military personnel in units supporting surface ships by about 1,800 full-time equivalents (or about 24 percent) by fiscal year 2011. The Navy's Fleet Review Panel found that these reductions in shore support also contributed to the degraded material condition of the surface fleet, and Navy officials told us they concluded that the reductions in shore support contributed to declines in readiness during optimal manning. As of fiscal year 2015, the Navy has restored shore positions in units supporting surface ships to approximately their previous peak in fiscal year 2006. Within shore support, however, there is variation in the extent to which positions have been restored.
For example, positions in regional maintenance centers—which are responsible for conducting and overseeing intermediate-level maintenance on Navy ships—are about 19 percent above their prior peak in fiscal year 2006. Conversely, positions in training units that support surface ships and their crews remain about 13 percent below their prior peak in fiscal year 2006. Our analysis found that overall operating and support costs increased for surface and amphibious ship classes during the optimal manning period and have continued to increase for most classes since the end of the optimal manning period. This increase occurred in part because growth in maintenance costs more than offset decreases in personnel costs over this period. In technical comments on a draft of this report, Navy officials cited growth in entitlements and allowances as an additional contributing factor to increasing personnel costs over this period. Navy officials also noted that using different deployment models, such as overseas homeporting and rotational crewing, can drive significant differences in operating and support costs even within a ship class, usually with the benefit of increased time on deployment. However, as we found in 2015, these approaches can also contribute to higher maintenance costs over the long term. Specifically, we found that ships homeported overseas incur higher operating and support costs than U.S.-homeported ships, and that some of these ships have had consistently deferred maintenance that resulted in long-term degraded material condition and increased maintenance costs. Our 2015 analysis also showed that homeporting ships overseas provides additional time in a forward area of operations and additional deployed under way time compared to ships homeported in the United States, but that the additional time was provided primarily because training and maintenance periods are shorter than those provided for U.S.-homeported ships. Trends in ship operating and support costs for each ship class over this period are illustrated in figures 11 and 12. In addition to the contact named above, Suzanne Wren, Assistant Director; Steven Banovac; Kerri Eisenbach; Bonnie Ho; Joanne Landesman; Amie Lesser; Shahrzad Nikoo; Daniel Ramsey; Michael Silver; John Van Schaik; and Chris Watson made key contributions to this report.
In 2001, the Navy began reducing crew sizes on surface ships through an initiative called optimal manning, which was intended to achieve workload efficiencies and reduce personnel costs. In 2010, the Navy concluded that this initiative had adversely affected ship readiness and began restoring crew sizes on its ships. The conference report accompanying the National Defense Authorization Act for Fiscal Year 2016 included a provision that GAO review the Navy's reduced manning initiatives in the surface fleet. This report examines (1) any trends in ship operating and support costs and maintenance backlogs, (2) the extent to which the Navy's manpower requirements process accounts for ship workload, and (3) any manning challenges and implications for the future. GAO analyzed and reviewed data from fiscal years 2000 through 2015 (the most current available) on crew sizes, operating and support costs, material readiness, and the Navy's manpower requirements determination process. GAO also interviewed Department of Defense (DOD) officials and ship crews to discuss workload, manning levels, enablers of smaller crew size, and implications for the future. Total ship operating and support costs—which include personnel and maintenance costs—and maintenance backlogs increased during the optimal manning period (2003–2012) and have continued to increase for most ship classes since the initiative ended. Since the implementation of optimal manning, the Navy reduced crew sizes, which decreased the associated personnel costs for most ship classes, even as crews were partially restored. However, increased maintenance costs offset the reductions in personnel costs. Navy officials attributed maintenance cost increases to reduced crews, longer deployments, and other factors. GAO's analysis did not isolate the relative effects of reduced crews from these other factors. Maintenance backlogs also increased during the optimal manning period and have continued to grow. The Navy's process to determine manpower requirements—the number and skill mix of sailors needed for its ships—does not fully account for all ship workload. The Navy continues to use an outdated standard workweek that may overstate the amount of sailor time available for productive work. Although the Navy has updated some of its manpower factors, its instruction does not require reassessing factors to ensure they remain valid and does not require measuring workload while ships are in port. Current and analytically based manpower requirements are essential to ensuring that crews can maintain readiness and prevent overwork that can affect safety, morale, and retention. Until the Navy makes needed changes to the factors and instruction it uses in determining manpower requirements, its ships may not have the right number and skill mix of sailors to maintain readiness and prevent overworking its sailors. Moving forward, the Navy will likely face manning challenges as it seeks to increase the size of its fleet. The fleet is projected to grow from its current 274 ships to as many as 355 ships, but the Navy has not determined how many personnel will need to be added to man those ships. In addition, as the Navy has gained experience operating its new ship classes, their crew sizes have grown and may continue to do so. Without updating its manpower factors and requirements and identifying the personnel cost implications of fleet size increases, the Navy cannot articulate its resource needs to decision makers.
GAO is making four recommendations that the Navy (1) reassess the standard workweek, (2) require examination of in-port workload, (3) require reassessment of the factors used to develop manpower requirements, and (4) identify the personnel costs needed to man a larger fleet. DOD concurred with each recommendation.
Concerns about long-term national economic growth have focused attention on the federal government’s role in promoting investment necessary to sustain the economy’s capacity to maintain and improve future living standards. The federal government contributes to investment in two primary ways. First, the federal government can facilitate private investment by reducing the federal deficit. Federal budget deficits have absorbed large proportions of national savings that would otherwise have been available to finance investments, either public or private. Second, within an established fiscal policy, the federal government can change the proportion of government spending devoted to investment. In the past, federal investments in infrastructure, human capital, and R&D have played a key role in economic growth, either directly or by creating an environment conducive to private sector investment. Both the Congress and the administration are considering budgeting alternatives to decrease the annual federal deficit while increasing long-term federal investment intended to enhance private sector growth. Some discussions have focused on capital budgeting and the possible use of depreciation in the budget as a measure of the cost of federal investments which deliver benefits over a future period of time. These investments include infrastructure such as highways, bridges, and air traffic control systems; R&D, which produces new technology that leads to innovative products and processes; and investments in human capital through education and training designed to increase worker productivity. Depreciation is an integral component in capital budgeting—a proposal contained in several bills in recent years. A capital budget approach using depreciation would report total acquisition costs of the investment in a capital budget and the annual depreciation in an operating budget. The cost of the investment recorded in the operating budget would thus be spread over the estimated life of the investment. The operating budget would reflect the cost of goods and services consumed rather than purchased during the period. Under most capital budgeting proposals, the operating budget must balance while the capital budget may be financed by borrowing. By contrast, the federal budget is a unified cash-based budget which treats outlays for capital and operating activities the same. Federal debt is undertaken for general purposes of the government rather than for specific projects or activities. Three views have been cited in support of proposals to depreciate investments in the federal budget. First, the long-lived nature of the benefits arising from these investments causes some analysts to believe that their costs should also be spread over time by some method of depreciation so that costs are shared by those who will benefit in the future. Second, some analysts believe that because the initial cost of these investments is high, budgeting for the full commitment up-front discourages investment and favors consumption spending. Finally, proponents believe that budgeting for depreciation instead of the full commitment up-front frees up budgetary resources for greater investment or other uses in the current period and reduces the current year’s deficit. Other analysts, taking an opposing view, believe that depreciation would not really free up resources or reduce the deficit. Such a proposal would only redefine the deficit to be controlled as the operating budget deficit rather than the larger unified budget deficit. 
This would mean that any spending categorized as "capital" would not be subject to the same pressures to reduce the deficit as other federal spending. Thus, it might be used to justify larger unified budget deficits and borrowing. In addition, they believe that appropriating annual depreciation instead of the amount of the full commitment undertaken by the government poses a loss of budgetary control that would threaten the integrity of the budget and the budget process. The objectives of this review were to determine (1) whether federal agencies are depreciating transportation infrastructure, R&D, and human capital for accounting and budgeting purposes, and if so, the methods they use, (2) whether any state, local, or foreign governments are depreciating these investments, and (3) whether depreciation of these investments could be useful in budgeting. Based on the items traditionally included in these categories, we define infrastructure as federally funded physical transportation assets, such as highways, bridges, railways, and air traffic control systems. We define R&D as federally funded activities intended to produce new or improved products or processes. For purposes of this study, we define investment in human capital as federally funded education and training programs. To meet these objectives, we discussed the concept of depreciation as a budgeting tool with professional staff at the Office of Management and Budget (OMB), the Congressional Budget Office (CBO), the Department of Commerce's Bureau of Economic Analysis (BEA), and the Organization for Economic Cooperation and Development (OECD). We also discussed depreciation from an accounting and budgeting perspective with officials at the Departments of Education and Transportation, the National Science Foundation, the Federal Highway Administration, the Federal Aviation Administration, and the Federal Railroad Administration. We reviewed articles in budgeting and accounting professional journals on the use of depreciation in federal budgeting and accounting. We reviewed relevant standards issued by the Financial Accounting Standards Board (FASB), the Governmental Accounting Standards Board (GASB), and the International Accounting Standards Committee (IASC). We also reviewed Title 2 of GAO's Policy and Procedures Manual for Guidance of Federal Agencies, and standards drafted by the Federal Accounting Standards Advisory Board (FASAB) dealing with depreciation. To specifically address the second objective, we reviewed the GASB standards to determine whether state and local governments are required to record depreciation of infrastructure, R&D, and human capital for financial statement purposes. We also interviewed officials from federal agencies, OECD, and two consultants regarding the budgeting practices of foreign governments. We discussed the experience of New Zealand with these experts because of its recent adoption of accrual-based budgeting. We performed our work in Washington, D.C., between June and December 1994. Depreciation is an accepted part of accounting in business organizations. Under business accounting practices, depreciation is the allocation of the costs, less salvage value, of fixed assets, including equipment, buildings, and other structures, over their useful lives in a systematic and rational manner.
It is recorded in the business's financial statements to reflect the use of assets during specific operating periods in order to match costs with related revenues in measuring income and to determine the organization's profit or loss, its federal tax liability, and the depreciated book value of the asset. It is also a factor in determining the cost of manufactured items and the amount of user charges appropriate for services rendered. Federal agencies often do not depreciate assets because doing so is difficult and frequently provides little relevant information. In the past, federal accounting standards for non-business-type activities established by GAO, known as Title 2, encouraged, but did not require, depreciation of general tangible assets. However, Title 2 did require depreciation accounting for all federal business-type activities in cases where depreciation of federal assets was used to establish sales prices or user charges necessary to reimburse revolving funds or otherwise recover costs. In these cases, federal agencies do depreciate the relevant assets to determine user charges to recover the cost of the asset. Presently, FASAB is considering standards that would require federal agencies to depreciate infrastructure assets owned by the federal government, but probably not intangible investments such as R&D and human capital. GASB, which sets accounting standards for state and local governments, prohibits recording annual depreciation charges in financial statements for the general fund because these funds do not operate on a strictly accrual basis. Depreciation, which is an expense, applies only to accrual-based accounting systems. GASB standards, however, do require the reporting of depreciation in financial statements for proprietary and certain trust fund assets because these funds are reported on an accrual rather than a cash basis. If depreciation methodologies were to be used in federal budgeting, one conceivable starting point for establishing those methodologies would be the accounting methods used for depreciation of tangible assets for financial statement purposes. Depreciation in accounting can be a complex and technical subject and involves significant subjectivity concerning such key factors as the asset's value, its useful life, and its salvage value. Because of its subjective nature, it is only an approximation of how much of an asset is used up in any period. Ultimately, depreciation of tangible assets is an imperfect way of spreading costs over an asset's useful life. Trying to apply depreciation accounting techniques to intangible assets such as R&D and human capital investment for either accounting or budgeting purposes would be even more difficult because of the additional difficulties in estimating value and useful life and in establishing ownership. Calculating the amount of depreciation to be recorded annually depends on how assets are valued to determine the depreciation base, the depreciation method used, and the asset's useful life. There are three general ways to value assets—historical cost, constant cost, and current cost. Historical cost is the amount of cash (or its equivalent) paid to acquire an asset and is considered to be an objective and verifiable basis for valuation. Constant cost restates historical cost information in terms of dollars of equal purchasing power. Current cost is the amount of cash or other consideration that would be required today to obtain the same asset or its equivalent.
Market prices are often used to determine current cost. Which of these valuation methods is chosen greatly affects the depreciation base. While historical cost is the most widely used and documented, current cost provides a more relevant measure of the resources tied up in a particular asset and of the cost to replace the asset. After an asset is valued (usually at historical cost), one of numerous depreciation methods is then selected to spread the depreciation base over the asset's useful life. Depreciation computations are based on the assumption that every fixed asset (except land) has a limited useful life. The value of the asset (or depreciation base, as described previously) is thought of as a prepaid expense that by some method must be spread over the asset's useful life. Various methods have been developed to do this—among the most well known are the straight-line, declining-balance, and replacement cost methods. The straight-line method is the simplest and most commonly used. Other, more complicated methods have been advocated or approved by accountants for income tax and other purposes. The following describes the three methods mentioned above. The straight-line method spreads the depreciation base equally over the useful life of the asset. The declining-balance or geometric method determines the annual depreciation charge by applying a fixed percentage to the diminishing value of the asset, that is, the asset's value after deducting the preceding years' depreciation charges. The replacement method considers the asset's replacement cost and increases the current depreciation charge by a percentage based on a comparison of the anticipated replacement cost with the recorded cost. Selecting an appropriate depreciation method depends on the purposes for which depreciation is being recorded. In our review, we found that depreciation of transportation infrastructure, R&D, and human capital investments in the public sector was used primarily by economists for analytical purposes such as estimating economic wealth. Many economists identified the replacement method as the appropriate method for economic analysis because it provides the closest estimate of true economic cost. In general, we found that none of the three types of federal investments we examined—transportation infrastructure, R&D, and human capital—is depreciated for either accounting or budgeting purposes by federal agencies. We did find that some consideration had been given to depreciating infrastructure, because physical assets are depreciated in the private sector and their tangible nature provides a reasonable basis for discussion. However, investments in R&D and human capital had received little attention because they are not depreciated in the private sector and their intangible nature makes valuation and ownership difficult to determine. The Department of Transportation (DOT) administrations that we reviewed—the Federal Highway Administration, the Federal Aviation Administration, and the Federal Railroad Administration—do not depreciate transportation infrastructure investments for accounting or budgeting purposes. The reason given for this is that the federal government does not own most of the transportation assets it funds. The federal government funds most transportation infrastructure through grants. For example, the federal government spent more than $24 billion on physical transportation investments in 1993, but more than $21 billion of this spending was in the form of grants.
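The straight-line and declining-balance methods described above reduce to a few lines of arithmetic. The following minimal sketch, in Python, illustrates both computations; the asset cost, salvage value, useful life, and rate are hypothetical values chosen for illustration.

```python
# Illustrative sketch of two of the depreciation methods described above.
# All values are hypothetical.

def straight_line(base, salvage, life):
    """Spread the depreciation base equally over the asset's useful life."""
    annual = (base - salvage) / life
    return [annual] * life

def declining_balance(base, rate, life):
    """Apply a fixed percentage each year to the asset's diminishing value."""
    charges, remaining = [], base
    for _ in range(life):
        charge = remaining * rate
        charges.append(charge)
        remaining -= charge
    return charges

# A $50 million asset (historical cost) with no salvage value and a
# 10-year useful life.
print(straight_line(50.0, 0.0, 10))        # ten equal charges of $5 million
print(declining_balance(50.0, 0.10, 10))   # charges decline geometrically
```

Note that the declining-balance method never fully writes off the asset without a final adjustment, one illustration of why the appropriate method depends on the purpose for which depreciation is recorded.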
Generally accepted accounting principles established by FASB provide that infrastructure assets owned by the reporting entity, such as railroad tracks owned by the entity, are depreciated in the entity's financial statements. At this time, federal accounting standards for infrastructure assets not owned by the federal government do not provide for recording grantee assets for purposes of depreciation. FASAB is considering standards for infrastructure assets owned by the federal government, but not for infrastructure grants or assets owned by grantees. DOT analysts cited two major problems with depreciating assets that DOT does not own. First, it is often difficult, and in some cases impossible, to link federal grant money to the value of a specific infrastructure asset. In part, this is because it is difficult to distinguish how the federal share of funding is used when it is mixed with funding from other sources. It is also difficult to assign value to portions of a project that are only components of larger projects. Also, if federal investment expenditures cannot be linked directly to an asset, there is no basis for determining a useful life over which to spread the cost. A second problem cited by analysts at DOT is the difficulty of monitoring the value of an asset not owned by the entity seeking to depreciate it. The owners of an infrastructure asset can improve or discard that asset at their own discretion, although in the case of highways the federal government may share in any monetary return resulting from disposition. Applying the concept of depreciation to federal grants could result in a situation in which an annual depreciation charge would appear in the federal budget for an asset that is not owned by the federal government or that may no longer even exist. Analysts at DOT said that the effort required to determine the value of depreciable transportation assets funded by grants would be large and would detract from DOT's other missions. Officials at these agencies expressed strong doubts that the benefits from depreciating these infrastructure investments would justify the cost of determining the assets' value. Among the 24 OECD nations, none appropriates depreciation for infrastructure assets in its national budget. Even New Zealand, the only OECD nation that uses depreciation in its budget, does not appropriate depreciation for infrastructure assets that are owned by the government as a whole. New construction of roads and other infrastructure assets owned by the New Zealand government as a whole is appropriated up front on a cash basis. In this instance, New Zealand's system is, in principle, similar to the system that is used to budget for highways in the United States. Infrastructure assets not owned by the New Zealand government are not depreciated by the government for either budgeting or financial reporting purposes. However, for accounting purposes, in cases where the government owns transportation infrastructure assets, the assets are depreciated using the current replacement cost method in the government's financial statements. Officials at the National Science Foundation (NSF) told us that they do not depreciate R&D and advised us that they could imagine no reasonable method or practical reason for doing so. Major impediments to depreciation include establishing the value and useful life of R&D. Also, NSF's R&D funds are usually disbursed through grants, for which there is no established method of depreciation.
Depreciation of R&D investment has been proposed and considered for the private sector, but not practiced. FASB prohibits capitalization and depreciation of any R&D expenses by private sector entities, including the R&D costs of internally developed computer software. Depreciation of R&D was rejected because of the uncertainty and difficulty in measuring the benefits and the inability to determine useful life. From an international perspective, the IASC provides that in limited cases R&D expenditures may be deferred and depreciated if they result in a product or process that is technically and commercially feasible and can be marketed. In our review, we found only one OECD government, New Zealand, that provided for depreciation of R&D to a limited extent in its budget, and then only for R&D owned by the government. In New Zealand, government R&D expenditures generally are expensed as incurred in both the budget and financial statements. However, they can be capitalized and depreciated in both if they result in a product or process which is demonstrated to be technically useful and is intended to be used or marketed. In cases where this is anticipated, depreciation is deferred until a market asset is produced. At that point, the R&D expenditures (based on historical cost) are depreciated over the expected period of future benefits, allowing for a more accurate assessment of costs for the period. Otherwise, R&D expenditures are reported as expenses for that year. We found no government that capitalizes or depreciates human capital in any budget or financial statement. At the core of this issue there is a basic unresolved question as to whether human capital depreciates or appreciates over its relevant life. Officials at the Department of Education told us they had discussed the concept of depreciating human capital, but did not find it cost beneficial or a useful tool. Similar to the DOT with its highway grants, the Department of Education funds education and training mostly through grants for which there is no standard or methodology for depreciation. In the academic literature we reviewed, there is general agreement that the problems preventing the acceptance of depreciation of human capital are insurmountable in part because of the inability to determine the useful life and real value of education and training spending. In the private sector, various methods for recognizing in financial statements the value of a firm’s employees have been developed and proposed over the last 30 years. However, no standard for reporting human capital has ever been accepted, or even seriously considered, because (1) the methods are complicated and difficult to apply and (2) the methods used to determine values for human capital are subjective and open to challenge. The methods that have been developed apply only to specific firms and are not intended to measure the value of human capital outside the firm. Thus, even if they were accepted as valid, they are not applicable to the education and training expenditures that governments would make, which are primarily for the benefit of the general public. Although federal investments in transportation infrastructure, R&D, and human capital are not depreciated for budgeting or accounting purposes, OMB and BEA depreciate infrastructure and R&D investments to make rough estimates of national wealth for analytical purposes. 
Depreciation is considered to be appropriate for generating national economic wealth estimates because it is used only to provide rough estimates of the value of existing assets in the economy. In these economywide analyses, the problems of determining ownership or control of assets are not relevant. However, the analysts who generate these estimates maintain that this type of analysis is inappropriate for budgeting because (1) the estimates are imprecise and dependent on questionable assumptions and (2) measures of stocks have no place in a budget that allocates resource flows. In making national economic wealth estimates, BEA and OMB use a valuation method called the perpetual inventory method. In this method, the gross federal investment for the year is added to the sum of previous years' net investments. This sum is then reduced by depreciation and estimated discarded investment to determine net investment. All OECD nations use the perpetual inventory method in estimating their national wealth. BEA and OMB have both estimated the value of the stock (that is, the inventory) of physical capital investments, including infrastructure. In making estimates of the value of the nation's stocks of economic wealth, BEA depreciates the estimated stock of infrastructure assets, valued on historical cost, constant cost, and current cost bases, using straight-line depreciation over a 50-year estimated useful life. OMB estimates the total net federally financed physical capital stock, including transportation stocks, regardless of ownership. OMB made its estimates using a constant dollar adjustment to historical federal spending for transportation and depreciated it on a straight-line basis. The transportation stocks are depreciated over a 40-year estimated useful life. These estimates are produced for economic policy information. OMB has also estimated the stock of federally financed research and development. In making these estimates, OMB assumed that basic research did not depreciate but that applied research and development depreciated, using the geometric method, at a 10 percent rate. BEA recently published estimates of the national R&D stocks. In making its estimates, it depreciated all R&D, including basic research, using a method equivalent to an 11 percent geometric rate. In the President's 1995 budget, OMB estimated the stock of the nation's education capital based on an estimate of what it would cost to reeducate the population at 1987 prices. OMB did not assume any depreciation of education over an individual's lifetime. BEA has made no attempt to estimate the stock of human capital. We found widespread agreement among accounting experts published in professional journals, budget experts, and economists at BEA and OECD that the use of depreciation is not well suited to a cash- and obligation-based budget like that of the United States. Depreciation as envisioned in most capital budgeting proposals is not currently practiced in the federal budget. Appropriations and outlays are normally recorded on a cash basis in the budget. Thus, in general, the total commitment of the government in making an investment is usually recorded up front, not spread over the useful life of the investment. No state records annual depreciation in its capital or operating budgets because depreciation has no effect on the flow of current financial resources. However, an important task of state capital budgets is to relate the purchase of some of a state's fixed assets to borrowing and other specified types of financing.
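The perpetual inventory method described above can be summarized in a short sketch. The following Python fragment rolls hypothetical annual gross investment into a net stock estimate using geometric depreciation at a 10 percent rate, one simplified reading of the OMB treatment of applied R&D; the handling of first-year investment and of discards varies in practice, and the investment figures are invented for illustration.

```python
# Illustrative sketch of the perpetual inventory method: each year's gross
# investment is added to the prior net stock, and the sum is then reduced
# by depreciation. Discards are folded into the geometric rate here as a
# simplification; actual BEA and OMB estimates treat them separately.

def perpetual_inventory(gross_by_year, rate):
    stock = 0.0
    for gross in gross_by_year:
        stock = (stock + gross) * (1.0 - rate)
    return stock

# Five hypothetical years of gross investment (billions of dollars),
# depreciated at a 10 percent geometric rate.
print(perpetual_inventory([10.0, 10.0, 12.0, 12.0, 14.0], rate=0.10))
```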
Business enterprises do not include depreciation of capital assets in their budgets. Businesses do, however, include a cost of capital (primarily principal and interest payments) in their financial budgets. Textbooks on private business budgeting practices indicate that depreciation is irrelevant for budgeting except where income taxes are affected. Private businesses use depreciation primarily for two purposes: (1) to match revenues with expenses in a given period for the purposes of reporting profit or loss in financial statements and (2) for tax purposes. Neither of these purposes, however, is applicable to federal budgeting, except for federal business-type activities, which consider revenues and expenses in setting user fees.

Of the OECD member nations, only one, New Zealand, uses depreciation in its budget. New Zealand began to apply depreciation to budgeting in 1992 as part of its transition from a cash-based to an accrual-based budgeting system. New Zealand’s accrual-based budgeting system includes depreciation for department- or agency-owned physical assets in the budget statements, where the depreciation is appropriated as part of the cost of departmental operations. However, assets owned by the government as a whole, such as transportation infrastructure and some R&D, are depreciated in the financial statements but are not appropriated in the budget. New Zealand does not depreciate expenditures for human capital in either its financial or budget statements.

In talking to budget experts, we identified four major disadvantages in the use of depreciation for federal investments in infrastructure, R&D, and human capital: (1) loss of budgetary control, (2) increased uncertainty over budget estimates, (3) an obscured connection between budgetary decisions and the deficit, and (4) the difficulty of depreciating assets not owned by the federal government. The greatest disadvantage, according to these experts, was that depreciation would result in a loss of budgetary control under an obligation-based budgeting system. In general, the federal budget records the full cost of its spending decisions up front in terms of both budget authority and outlays so that decisionmakers have the information needed, and an incentive, to take the full cost of any decision into account. The only time that spending on a federal investment can be controlled is before obligations are made. After obligation, recipients of the spending expect it to occur and the government is generally committed to payment of all the costs. Depreciation, on the other hand, would spread that cost over the asset’s expected useful life. The focus of control for the operating budget—the component that would be subject to a balanced budget requirement—would not be on the total up-front government commitment because, by the time the commitment was fully recognized in the operating budget, the expenditures would already have been made. Although decisionmakers would consider the up-front costs of an investment in the capital budget, this budgetary component would not be subject to resource constraints or balanced budget requirements, thereby diminishing the incentives to carefully weigh total costs and benefits.

This loss of budget control would be evident in two ways. First, under Budget Enforcement Act provisions, investment spending would be transformed from a discretionary decision in the current year to a stream of sunk mandatory payments in future years to finance the depreciation charge.
This would diminish budgetary flexibility in the discretionary portion of the budget. Second, without the establishment of some new method of control, depreciation of investments would nearly eliminate budgetary constraints on current investments. Since assets are depreciated only after they have been fully constructed and put into service, outlays for current investments would not be recognized in the operating budget until the annual depreciation charges began. For example, spending on the recently cancelled Superconducting Super Collider would not have been included in any prior year’s budget, nor would it have been subject to any spending cap, because the facility was never put into service. In addition, all previous spending would appear in the budget in the year the program was cancelled, setting up a perverse incentive to continue the program rather than absorb the accumulated past spending in 1 year. Depreciation could be applied to the federal budget process only if it were accompanied by new methods of control that would discipline up-front commitments without destroying budgetary integrity. For example, when New Zealand included depreciation in its budgetary process, it substantially reformed its budget process to include new controls on agencies. These controls included the imposition of asset caps and the establishment of output contracts, which set performance goals for agency heads.

A second major disadvantage cited by budget experts is the effect depreciation would have on the quality of budget estimates. These experts are concerned that depreciation of investments would make budget estimates uncertain, unreliable, or both. Determining any asset’s useful life is a complicated technical exercise that is inherently subjective. For example, OECD recently surveyed the useful lives over which capital equipment was depreciated in 14 OECD countries and found wide discrepancies in the average life for the same categories of assets; the average for capital equipment ranged from 11 years in Japan to 26 years in the United Kingdom. Uncertainties about the useful lives of assets with possibly indefinite lives, such as highways, and of intangible assets, such as R&D and education, would be even greater. Cash flows provide a more certain and more objective basis for making budgetary decisions.

Another major disadvantage cited by budget experts is that depreciation would undermine the usefulness of the budget as a fiscal policy measure. The generally cash-based federal budget deficit is designed to provide an indication of the level of federal borrowing. Budget decisionmakers consider, among other things, the effect of federal borrowing on the economy in general and on the national credit markets in particular. If depreciation, a noncash cost allocation, were recorded in the budget in lieu of actual cash payments, budgetary decisions would no longer be connected to their impact on the government’s borrowing.

We recognize that there are already departures from a cash-based budget process when the cash basis fails to recognize the government’s full commitment up front. Credit reform, for example, is a revised method, specified in the Federal Credit Reform Act of 1990, of controlling and accounting for credit programs in the budget. It requires that the full cost of credit programs over their entire lives be included in the budget up front so that the full cost is considered when making budget decisions.
However, changes in the treatment of the investment spending we reviewed would do the opposite. For such spending, departing from the cash basis of budgeting by recording depreciation would actually spread the government’s commitment over time rather than recognizing it when it is made. Finally, budget experts mentioned the difficulty of depreciating assets that are not owned by the federal government. Many of the investment expenditures of the federal government are made in the form of grants for assets or intangibles that the federal government does not own. There is currently no provision in any accounting standard for depreciating assets that are not owned; grants are normally accounted for as current expenditures.

Despite the disadvantages cited in using depreciation for budget or resource allocation decisions, there is widespread agreement in the literature and among the budget experts and program analysts we interviewed that depreciation can be a useful analytical tool for certain other purposes. For example, information on depreciation costs may be one factor considered in making budgetary decisions by serving as a reminder that aging assets may require replacement or maintenance. Depreciation may also be used to measure the operating cost of an activity.

We have previously reported that depreciation is not a practical alternative for the Congress and the administration to use in making decisions on the appropriate level of spending intended to enhance the nation’s long-term economic growth. While depreciation is used in estimating the level of the nation’s economic wealth, we believe that these estimates are not useful in determining future federal spending. However, we have reported that an investment component in the federal budget, with targets for appropriate levels of investment, could be more useful to the Congress and the President in making decisions on future investments. Setting an investment target would require policymakers to evaluate the current levels of investment and consumption spending and would encourage a conscious decision about an appropriate overall level of investment. In our view, unlike a focus on incremental depreciation charges, this approach has the advantage of focusing budget decisionmakers on the overall level of investment supported in the budget without losing sight of the unified budget deficit’s impact on the economy. It also has the advantage of building on the current congressional budget process as the framework for making decisions, and it does not raise the budget control and other practical measurement problems posed by the use of depreciation.
GAO reviewed whether: (1) federal agencies are depreciating transportation infrastructure, research and development (R&D), and human capital investments for accounting and budgeting purposes; and (2) depreciation of these investments could be useful in federal budgeting. GAO found that: (1) the federal government generally does not depreciate transportation infrastructure, R&D, and human capital investments for accounting or budgeting purposes; (2) Congress and the Administration are considering budgeting alternatives to decrease the annual federal deficit and increase long-term federal investments; (3) budget and accounting experts do not support depreciating these investments for budgeting purposes, since it is difficult to determine the value and useful life of such investments; (4) depreciation in accounting is complex and involves such key factors as the asset's value, its useful life, and its salvage value; (5) federal agencies do not depreciate assets they do not own because it is difficult to link federal grant money to the value of a specific asset; (6) although economists depreciate infrastructure and R&D investments to generate national economic wealth estimates, the problems of determining ownership or control of assets are not relevant in these analyses; and (7) private businesses use depreciation primarily to match revenues with expenses for a given period and for tax purposes.
The Homeland Security Act of 2002 outlines DHS’s responsibilities for initiatives supporting both a homeland security and a non-homeland security mission. DHS’s homeland security mission is to prevent, reduce vulnerability to, and recover from terrorist attacks within the United States. DHS’s non-homeland security mission—also referred to as non-terrorism-related responsibilities—includes programs such as the Coast Guard’s marine safety responsibilities and the Emergency Preparedness and Response Directorate’s natural disaster response functions.

GAO has previously identified strategic planning as one of the critical success factors for new organizations. We noted that, as part of its transformation, DHS should engage in strategic planning through the involvement of stakeholders; assessment of internal and external environments; and an alignment of activities, core processes, and resources to support mission-related outcomes. We have reported that the mission and strategic goals of a transforming organization like DHS must become the focus of the transformation, define its culture, and serve as the vehicle for employees to unite and rally around. The mission and strategic goals must be clear to employees, customers, and stakeholders to ensure they see a direct personal connection to the transformation.

Congress enacted GPRA to focus the federal government on achieving results and providing objective, results-oriented information to improve congressional decision making. Under GPRA, strategic plans are the starting point and basic underpinning for results-oriented management. GPRA requires that an agency’s strategic plan contain six key elements, as shown in table 1. In addition, GPRA requires agencies to consult with Congress and solicit the input of others as they develop these plans.

The National Strategy for Homeland Security, a foundation of DHS’s strategic plan, sets forth overall objectives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that may occur. The strategy lays out a plan to improve homeland security through the cooperation of federal, state, local, and private sector organizations in an array of functions, with DHS having a prominent role in coordinating these functions. In addition, the strategy states that the United States “must carefully weigh the benefit of each homeland security endeavor and only allocate resources where the benefit of reducing risk is worth the amount of additional cost.” We have advocated a risk management approach to guide the allocation of resources and investments for improving homeland security. Specifically, a risk management approach would provide a decision support tool to help DHS establish and prioritize security program requirements, planning, and resource allocations.

DHS’s own strategic planning process began in July 2003 with the creation of the Strategic Plan Development Group. The group consisted of officials from 15 separate DHS components and offices, including general counsel and directors of strategic planning from across DHS. By the fall of 2003, the group had created a draft strategic plan with goals and objectives for each component. However, according to officials involved, the group members were authorized to represent their component agencies but not to negotiate priorities in order to create departmentwide goals, and such a negotiation was needed to develop a departmentwide document.
Consequently, following the work of the Strategic Plan Development Group, DHS’s Deputy Secretary brought DHS senior leaders together in December 2003 to develop DHS’s vision, mission, and strategic goals and achieve senior leadership ownership of the strategic plan. DHS issued its first departmentwide strategic plan in February 2004. The plan includes DHS’s vision and mission, core values, and guiding principles. In addition, the plan describes DHS’s seven strategic goals and corresponding objectives. A summary paragraph that describes the general approaches DHS will take to achieve each objective is also included. According to several senior DHS officials, the strategic plan was the primary guidance followed for DHS’s management integration. In addition to the strategic plan, DHS officials identified four other documents as the key planning documents for the department. These documents are as follows.

Fiscal Year 2005 Performance Budget Overview. This is the overview of DHS’s Congressional Budget Justification for fiscal year 2005 and serves as the overview of DHS’s fiscal year 2005 annual performance plan, in compliance with GPRA. The document describes the performance levels associated with the department’s Fiscal Year 2005 President’s Budget to Congress. For each strategic goal it includes means and strategies, as well as performance goals, measures, and targets. In addition, this document identifies the program and lead organization responsible for each performance goal.

DHS’s Fiscal Year 2005-2009 Future Years Homeland Security Program (FYHSP). Developed pursuant to Section 874 of the Homeland Security Act, the fiscal year 2005-2009 FYHSP, dated May 2004, is a 5-year resource plan that outlines departmental priorities and the ramifications of program and budget decisions. The FYHSP includes a general discussion of the nation’s threats and vulnerabilities, including a description of current and future terrorist techniques and tactics; types of weapons and threats terrorists may use; and potential terrorist targets and timing of an attack. In addition, the FYHSP includes a brief discussion of the inflation factors and economic assumptions based on underlying guidance provided by OMB. The FYHSP lays out projected resource requirements through fiscal year 2009 for each strategic goal and includes a table aligning programs to the strategic goals. Finally, the FYHSP includes a description of performance priorities for each strategic goal. DHS’s 2006-2010 FYHSP was issued to Congress on March 4, 2005. It is designated “For Official Use Only,” and is thus not publicly available. DHS expects to update the FYHSP annually.

DHS’s Milestones Report. The Milestones Report is an internal DHS planning document containing performance goals linked to the long-term strategic goals described in the strategic plan. For each performance goal, the Milestones Report provides annual milestones for fiscal years 2005 through 2009. In addition, the Milestones Report aligns specific programs with the strategic goals and identifies what percentage of program funding is allocated to addressing these strategic goals.

DHS’s themes and owners papers. The themes and owners papers are internal planning documents that address DHS’s top seven priorities during its second year of existence, March 2004 through March 2005, as identified by the former Secretary of Homeland Security.
DHS directorates were identified as the “owner,” or lead group, for addressing a “theme,” or priority, and directorate officials submitted a proposal detailing how they would address the theme in the coming year. The themes addressed are (1) stronger information sharing and infrastructure protection, (2) standards for interoperable equipment, (3) integrated border and port security systems, (4) new technologies and tools, (5) more prepared communities, (6) improved customer service for immigrants, and (7) 21st century department.

DHS has made considerable progress in its planning efforts, but future efforts can be improved. While DHS’s planning documents discuss the need for stakeholder coordination during program implementation, stakeholder involvement was limited during the strategic planning process. While the strategic plan included five of the six GPRA-required elements, it did not describe the relationship of annual goals to long-term goals. However, DHS’s planning process continues to develop and mature as the department’s transformation continues.

The process of developing DHS’s strategic plan and other strategic planning documents involved minimal consultation with key stakeholders, including Congress, other federal agencies, state and local governments, and the private sector. GPRA requires that agency officials solicit the input of stakeholders as they develop their strategic plans. Further, stakeholder involvement during the planning process is important to ensure that DHS’s efforts and resources are aligned with those of other federal and nonfederal partners with shared responsibility for homeland security and that they are targeted at the highest priorities. Such involvement is also important to ensure stakeholders help identify and agree on how their daily operations and activities contribute to DHS’s mission. Additionally, DHS’s planning documents describe areas where DHS needs to coordinate with stakeholders to implement its programs, achieve its goals and objectives, and meet its homeland security and non-homeland security responsibilities. The importance of consultation to DHS was recently underscored in GAO’s High-Risk Series: An Update, in which we designated as high risk the establishment of appropriate and effective information-sharing mechanisms to improve homeland security. While this area has received increased attention, the federal government still faces formidable challenges in sharing information among stakeholders in an appropriate and timely manner to minimize risk.

Though DHS officials briefed congressional stakeholders on strategic planning progress, they did not consult directly with Congress while developing the department’s mission statement or strategic goals. DHS officials said that, when briefed, congressional stakeholders requested that the strategic plan include more detail, including specific performance goals and measures. According to DHS officials, however, these goals and measures were omitted so that the plan could meet OMB’s time frame for issuance. To meet this time frame, DHS decided to keep the plan’s content at a high level and focus on achieving broad consensus among agency components on DHS’s mission and long-term strategic goals and objectives. Nevertheless, DHS officials acknowledged that Congress should be more involved in future planning efforts.
As we have previously reported, Congress needs to be considered a partner in shaping agency goals at the outset, both because it is a key user of performance information and because early consultation helps ensure that congressional priorities are addressed in the planning documents. We have suggested that agencies consult with congressional stakeholders at least once every new Congress in order to clarify performance expectations.

Further, DHS officials said they did not consult with other federal agencies responsible for shared homeland security initiatives in developing the strategic plan. We have reported that a focus on results implies that federal programs contributing to the same or similar results should be closely coordinated to ensure that goals are consistent. Stakeholder consultation in strategic planning efforts can help create a basic understanding of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. The National Strategy for Homeland Security identifies six federal agencies responsible for 43 homeland security initiatives. While DHS was identified as the agency with lead responsibility for a majority of these initiatives, 12 of the initiatives had multiple lead agencies. For example, DHS and the State Department share lead responsibility for the initiative “create ‘smart borders.’” As part of this initiative, the strategy states that DHS would improve information provided to consular offices so that individual applicants can be checked in databases and would require visa-issuance procedures to reflect threat assessments. These shared initiatives require that DHS look beyond its organizational boundaries and coordinate with other agencies so that their efforts are aligned toward consistent goals. To ensure that the shared initiatives have common goals, and that the goals are appropriate, consultation during the planning stage is vital.

Finally, DHS had limited consultation with nonfederal stakeholders, such as state and local governments and the private sector, in its strategic planning process. Nonfederal stakeholder involvement in DHS’s strategic planning process is vital considering that state and local governments have primary responsibility as first responders for homeland security and approximately 85 percent of the nation’s critical infrastructure is privately owned. DHS officials explained that expanded involvement of nonfederal stakeholders was not practical within OMB’s time frame for completing the strategic plan. Instead, DHS provided a draft of the strategic plan to a departmental advisory group, the Homeland Security Advisory Council, for its review and comment. Further, DHS component agency planning officials said that instead of consulting directly with nonfederal stakeholders, officials from DHS components were expected to represent stakeholder views when providing their input to the strategic plan. For example, officials in DHS’s Private Sector Office were expected to represent the opinions of private sector officials based on the office’s work with private sector representatives.

DHS’s strategic plan addressed five of the six GPRA-required elements but did not include a description of the relationship between annual and long-term goals. We have reported that this linkage is critical for determining whether an agency has a clear sense of how it will assess progress toward achieving the intended results for its long-term goals.
DHS and OMB officials said the decision to keep the content of the strategic plan at a high level, and not include a discussion of annual performance goals, was necessary to achieve broad consensus among agency components on DHS’s mission and long-term strategic goals. Although the Performance Budget Overview linked specific annual goals and performance measures to the long-term strategic goals, the strategic plan itself does not describe how the annual goals relate to the long-term goals. This omission makes it difficult for DHS and its stakeholders to identify how their roles and responsibilities contribute to DHS’s mission and potentially limits Congress’s and other key stakeholders’ ability to assess the feasibility of DHS’s long-term goals. OMB continues to work with DHS to develop performance measures and goals that are critical to DHS’s integrated mission and reinforce the crosscutting responsibilities of component agencies.

Several of the GPRA-required elements addressed in DHS’s strategic plan could be further developed through the implementation of additional good strategic planning practices. Specifically, DHS’s plan describes long-term agencywide goals and objectives but does not include a timeline for achieving these goals. For example, the first strategic goal in DHS’s strategic plan is “Awareness: Identify and understand threats, assess vulnerabilities, determine potential impacts, and disseminate timely information to our homeland security partners and the American public.” There are four objectives related to this goal, but there is no description of when to expect results or when a goal assessment would be completed. The Milestones Report, however, includes a timeline for expected results of programs that address the long-term goals, with performance measures and targets for each long-term goal through fiscal year 2009. Adding this information to the strategic plan would therefore require little additional effort and would make the plan itself a more useful document.

In addition, the strategic plan generally describes strategies and approaches to achieve the long-term strategic goals but does not include the specific budgetary, human capital, or other resources needed. For example, the first objective under the second strategic goal, “Prevention,” states that DHS plans to “secure our borders against terrorists, means of terrorism, illegal drugs, and other illegal activity.” The approach to achieve this objective requires “the appropriate balance of personnel, equipment and technology.” However, the description does not include details on the specific personnel, equipment, and technology that would be needed. Although the sensitive nature of some homeland security information may limit the level of detail, including such resource-related information in the strategic plan is critical for understanding the viability of the strategies presented to achieve the long-term goals. Further, the impact of program evaluations on the development of strategic goals could be discussed in greater detail in the strategic plan. Inclusion of these components is necessary to ensure the validity and reasonableness of DHS’s goals and strategies and to identify factors likely to affect performance.
Evaluation can be a critical source of information for Congress and others in assessing (1) the appropriateness and reasonableness of goals; (2) the effectiveness of strategies, by supplementing performance management data with impact evaluation studies; and (3) the implementation of programs, such as identifying the need for corrective action. Rather than identifying specific program evaluations and providing a schedule of evaluations, the strategic plan states only that DHS planned to (1) integrate strategy and execution; (2) assess performance, evaluate results, and report progress; (3) collaborate; and (4) refine. The plan did not include a description of the evaluations used to develop DHS’s strategic goals, nor did DHS address how future evaluations would be used to revise the goals and objectives.

Finally, DHS identified some key factors that may affect its ability to achieve its strategic goals and objectives, an element required by GPRA. However, based on our prior review of agency strategic plans, this element could be further developed with an explanation of the actions DHS intends to take to mitigate these factors. For example, DHS identified the need for “international cooperation” as a key factor that can significantly affect the achievement of its goals. To make its plan more useful, DHS could include in its next update a discussion of how the department might work together with other federal agencies to help obtain international cooperation in achieving shared goals.

DHS planning documents specify that DHS’s homeland security mission—which emphasizes counterterrorism efforts—is the key driver of planning and budgeting decisions. For example, the fiscal year 2005 FYHSP, DHS’s long-term resource allocation plan, states, “the Department’s overriding priority is to defend and protect the homeland from terrorism.” In addition, the DHS strategic plan states that the DHS strategic goals and objectives are directly linked to accomplishing the three objectives of the National Strategy for Homeland Security: (1) prevent terrorist attacks within the United States, (2) reduce America’s vulnerability to terrorism, and (3) minimize the damage and recover from attacks that do occur. However, these planning documents also address DHS’s non-homeland security mission in areas such as immigration services and disaster relief. For example, see the following.

DHS’s strategic plan includes the following strategic goal: “Service: Serve the public effectively by facilitating lawful trade, travel, and immigration.” The focus of this goal is to improve service to those individuals immigrating to and visiting the United States.

The Milestones Report includes the following performance goal: “Eliminate the application backlog by the end of FY 2006. Achieve 6 month cycle time for all applications.” This goal focuses specifically on improving the efficiency of DHS’s processing of citizenship and immigration applications.

The Fiscal Year 2005 Performance Budget Overview includes the following performance measure: “international air passengers in compliance with agricultural quarantine regulations (percent compliant).” The focus of this measure is to safeguard against potentially dangerous nonnative species entering the United States.

In addition, planning officials in DHS’s component agencies that address the non-homeland security mission said these responsibilities were fairly represented in the planning process and documents. They attributed this, in part, to the efforts of senior leadership.
For example, prior to a December 2003 strategic planning meeting for senior officials, senior leadership developed “straw man” mission statements that included both homeland security and non-homeland security missions. According to DHS officials responsible for planning, this was done to ensure that neither role was neglected for the sake of the other and that both were represented in the final mission statement.

Given the magnitude and importance of DHS’s transformation, having a strategic plan that outlines and defines DHS’s mission and goals is vital. While DHS has made progress in its efforts to date, improvements to its strategic planning process would help ensure that DHS’s efforts and resources are aligned with those of other federal and nonfederal partners with shared responsibility for homeland security. Earlier and more comprehensive stakeholder involvement in DHS’s planning process is perhaps the most important area for improvement. Consultation with stakeholders during the planning process creates a shared understanding of what needs to be achieved, resulting in more useful and transparent planning documents and helping ensure the success of stakeholder partnerships. Just as important, stakeholder consultation in strategic planning efforts can help create a basic understanding of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. Congress enacted GPRA to focus the federal government on achieving results and providing objective, results-oriented information to improve congressional decision making. While the body of DHS’s strategic planning documents addresses most of the required elements of GPRA, not having all of the required elements in its strategic plan limits Congress’s and other key stakeholders’ ability to assess the feasibility of DHS’s long-term goals. While DHS followed a number of good planning practices, by adopting others it could improve the strategic plan’s usefulness with little extra effort.

To make DHS a more results-oriented agency and allow for public oversight and accountability, we recommend that the Secretary of Homeland Security take the following three actions. First, ensure that DHS’s next strategic planning process includes direct consultation with external stakeholders, including Congress, federal agencies, state and local governments, and the private sector. Second, ensure that DHS’s next strategic plan—the agency’s primary public planning document—includes a description of the relationship between annual performance goals and long-term goals, as required by GPRA. Finally, ensure that DHS’s next strategic plan further develops the GPRA-required elements addressed by adopting additional good strategic planning practices. Specifically, the Secretary should ensure that the strategic plan includes a timeline for achieving long-term goals; a description of the specific budgetary, human capital, and other resources needed to achieve those goals; a schedule of planned program evaluations; and a discussion of strategies to ameliorate the effect of any key external factors.

On February 25, 2005, we provided a draft of this report to the Secretary of Homeland Security. On March 14, 2005, we received written comments from DHS that are reprinted in appendix II. In addition, we received technical comments, which we incorporated where appropriate.
DHS generally agreed with our recommendations and provided additional comments for our consideration. While DHS officials acknowledged that expanded involvement of nonfederal stakeholders was not practical within OMB’s time frame, they pointed out that they sought to consult with nonfederal stakeholders by providing a draft to the Homeland Security Advisory Council for its review and comment. We revised the draft to acknowledge this consultation. DHS officials stated that they plan to seek more interaction with nonfederal stakeholders during the next plan revision. Further, in response to our recommendation, DHS implied that its FYHSP includes information on annual performance goals and long-term goals, suggesting that this information need not be included in the strategic plan. However, the FYHSP contains information regarding the programs that support its strategic goals rather than a description of how the annual performance goals relate to the long-term goals. Moreover, we continue to believe that this information should be contained in the strategic plan—as required by GPRA—rather than in separate documents, to provide a readily accessible and clear linkage of the department’s annual goals to its overall strategic goals. As we noted earlier, the FYHSP is not a public document; it is available only for official use, making it of limited value for accountability purposes. Additionally, DHS was concerned that our recommendation to adopt a number of good planning practices implied that it had not used good strategic planning practices. We have added language to make clear that we recognize that DHS employed a number of good planning practices and that it should adopt additional ones in the future.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested parties. Copies will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6543 or [email protected] or Kimberly Gianopoulos at [email protected]. Major contributors to this report included Benjamin Crawford, Chelsa Gurkin, and Amy W. Rosewarne.

The objectives of this report were to assess (1) the extent to which the Department of Homeland Security’s (DHS) planning process and documents address required elements of the Government Performance and Results Act of 1993 (GPRA) and reflect good strategic planning practices and (2) whether DHS’s planning process and documents reflect attention to homeland security and non-homeland security mission responsibilities. To meet these objectives, we reviewed numerous DHS planning documents and related material and interviewed numerous DHS officials. Our review of planning materials included the Strategic Plan, Fiscal Year 2005 Performance Budget Overview, Fiscal Year 2005-2009 Future Years Homeland Security Program, Milestones Report, and themes and owners papers. In addition, we reviewed the National Strategy for Homeland Security. To meet our first objective, we relied on the requirements contained in GPRA and accompanying committee report language, as well as planning practices drawn from prior GAO work, Office of Management and Budget (OMB) guidance to agencies for developing strategic plans, and DHS internal planning guidance.
We then reviewed DHS’s planning documents to identify where the GPRA-required elements could be found. To meet our second objective, we reviewed these planning documents to determine if they addressed both DHS’s homeland security and non-homeland security mission responsibilities. In addition, we interviewed officials at OMB, as well as DHS officials responsible for agencywide planning in its Office of the Deputy Secretary and Office of Program, Analysis and Evaluation. We also interviewed officials responsible for planning in DHS’s directorates and component agencies. Specifically, we met with officials in the Border and Transportation Security Directorate, the Science and Technology Directorate, the Federal Emergency Management Agency (part of the Emergency Preparedness and Response Directorate), the Coast Guard, the Secret Service, the Transportation Security Administration, the U.S. Citizenship and Immigration Services, the Private Sector Office, and the Office of State and Local Government Coordination. To meet our first objective, we interviewed officials about the process used to create the planning documents. To meet our second objective, we interviewed officials about the process for ensuring accountability for DHS’s homeland security and non-homeland security mission responsibilities. Written comments from DHS are included in appendix II. We conducted our work from April 2004 through February 2005 in accordance with generally accepted government auditing standards.
The creation of the Department of Homeland Security (DHS) was the largest government reorganization in over 50 years, involving 170,000 employees and a $40 billion budget. Given the magnitude of this effort, strategic planning is critical for DHS to ensure that it meets the nation's homeland security challenges. GAO was asked to assess the extent to which DHS's planning process and documents (1) address required elements of the Government Performance and Results Act of 1993 (GPRA) and other good strategic planning practices and (2) reflect its homeland and non-homeland security mission responsibilities. DHS has made considerable progress in its planning efforts, releasing its first strategic plan in 2004 that details its mission and strategic goals. Nevertheless, opportunities for improvement exist. The creation of DHS brought together 22 agencies to coordinate the nation's homeland security efforts and to work with Congress and numerous other organizations, including federal agencies, state and local governments, and the private sector, to further this mission. Although DHS planning documents describe programs requiring stakeholder coordination to implement, stakeholder involvement in the planning process itself was limited. Involving stakeholders in strategic planning efforts can help create an understanding of the competing demands and limited resources, and how those demands and resources require careful and continuous balancing. As DHS updates its strategic plan, earlier and more comprehensive stakeholder consultation will help ensure that DHS's efforts and resources are targeted at the highest priorities and that the planning documents are as useful as possible to DHS and its stakeholders. While DHS's strategic plan addresses five of the six GPRA-required elements, it does not describe the relationship between annual and long-term goals. This linkage is crucial for determining whether an agency has a clear sense of how it will assess progress toward achieving the intended results for its long-term goals. While DHS's strategic planning documents address most of the required elements of GPRA, not including them in the strategic plan makes it difficult for DHS and its stakeholders to identify how their roles and responsibilities contribute to DHS's mission and potentially hinders Congress's and other key stakeholders' ability to assess the feasibility of DHS's long-term goals. Additionally, several of the GPRA-required elements addressed in the strategic plan could be further developed through the adoption of additional good strategic planning practices. For example, identifying the specific budgetary, human capital, and other resources needed to achieve its goals could demonstrate the viability of the strategies and approaches presented for achieving its long-term goals. Finally, although DHS's priority is its homeland security mission--which emphasizes deterring terrorism in the United States--DHS's planning documents clearly address its responsibility for non-homeland security mission programs as well, such as its response to natural disasters. In addition, DHS planning officials said that non-homeland security responsibilities were represented in the planning process and documents due, in part, to the commitment of top leadership.
Under the Safe Drinking Water Act, EPA is responsible for regulating contaminants that may pose a public health risk and that are likely to be present in public water supplies. EPA may establish an enforceable standard—called a maximum contaminant level—that limits the amount of a contaminant that may be present in drinking water. However, if it is not economically or technically feasible to ascertain the level of a contaminant, EPA may instead establish a treatment technique to prevent known or anticipated health effects. In the case of lead, EPA established a treatment technique—including corrosion control treatment—because the agency believed that the variability of lead levels measured at the tap, even after treatment, makes it technologically infeasible to establish an enforceable standard. EPA noted that lead in drinking water occurs primarily as a byproduct of the corrosion of materials in the water distribution system or household plumbing, some of which is outside the control of the water systems. Figure 1 illustrates the distribution system for drinking water and potential sources of lead contamination.

EPA’s lead rule also established a 15-parts-per-billion lead action level, which is based on the 90th percentile level of water samples taken at the tap. Water systems must sample tap water at locations that are at high risk of lead contamination, generally because they are served by lead service lines or are likely to contain lead solder in the household plumbing. The number of samples that must be collected varies depending on the size of the water system and the results of earlier testing. Small or medium-sized systems whose test results are consistently below the action level may be allowed to reduce the frequency of monitoring and the number of samples collected. To determine their test results at the 90th percentile level, water systems rank the results of the individual samples they collected in ascending order, multiply the number of samples taken during the monitoring period by 0.9, and identify the result at that position (see the sketch below). For example, a water system required to take 50 samples would rank the results from 1 (for the lowest result) to 50 (for the highest result); the 90th percentile level is the 45th result, five positions below the highest test result for that monitoring period.

When the 90th percentile results for a water system are above 15 parts per billion, the system has exceeded the lead action level and must meet requirements for public education and source water treatment. Under the public education requirements, water systems must inform the public about the health effects and sources of lead contamination, along with ways to reduce exposure. Source water responsibilities include, at a minimum, water monitoring to determine if the lead contamination is from the water source rather than—or in addition to—service lines or plumbing fixtures. Water systems that exceed the action level may also be required to install corrosion control treatment, except for large systems that may qualify as having optimized corrosion control based on other criteria. When corrosion control or source water treatment is not effective in controlling lead levels, the lead rule calls for water systems with lead service lines to begin replacing them at a rate of 7 percent annually (unless the state requires a higher rate). The states play an important role in ensuring that the lead rule is implemented and enforced at the local level.
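To make the 90th percentile calculation above concrete, the following sketch mechanizes the rank-and-multiply procedure the rule describes. The sample values are purely hypothetical, and the sketch assumes a sample count, like the report’s 50-sample example, for which multiplying by 0.9 yields a whole number; the rule’s handling of other sample counts is not addressed here.

```python
# A minimal sketch of the lead rule's 90th percentile calculation,
# using hypothetical tap-sample results in parts per billion (ppb).

ACTION_LEVEL_PPB = 15  # EPA's lead action level

def ninetieth_percentile(sample_results_ppb):
    """Rank the sample results in ascending order, multiply the number
    of samples by 0.9, and return the result at that rank."""
    ranked = sorted(sample_results_ppb)
    rank = int(len(ranked) * 0.9)  # e.g., 50 samples -> the 45th result
    return ranked[rank - 1]        # rank is 1-based; the list is 0-based

# Hypothetical monitoring period with 50 samples: five samples at each
# of ten illustrative lead levels.
samples = [1, 2, 3, 4, 5, 6, 7, 8, 9, 21] * 5
level = ninetieth_percentile(samples)
print(f"90th percentile result: {level} ppb")
print("Action level exceeded" if level > ACTION_LEVEL_PPB else "At or below action level")
```

With these hypothetical values, exactly 5 of the 50 samples (10 percent) exceed 15 ppb, so the 45th-ranked result is 9 ppb and the system stays at or below the action level; one additional sample above 15 ppb would push the 45th-ranked result over the action level, which matches the rule’s framing that a system exceeds the action level when more than 10 percent of its samples do.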
Among other things, the states are responsible for (1) ensuring that water systems conduct required monitoring and (2) reporting the results to EPA. If the systems must take corrective action to address elevated lead levels, the states are responsible for approving or determining the nature of the treatment or other activities that will be required, ensuring that they are implemented, and periodically reporting relevant information to EPA. The Safe Drinking Water Act authorizes the states to assume primary responsibility for enforcing the drinking water program—including the lead rule—if they meet certain requirements, such as adopting drinking water regulations at least as stringent as EPA’s and having adequate procedures to carry out and enforce the program’s requirements. All states except Wyoming have assumed primacy for managing their drinking water programs.

In addition to requiring the regulation of lead in public water supplies, the Safe Drinking Water Act contains provisions to limit the extent to which materials in the water distribution system and household plumbing contribute to lead levels at the tap. Specifically, the act banned the use, in the installation or repair of public water systems or plumbing, of solder and other materials that are not lead-free. In this regard, the act established a material standard by defining “lead-free” to mean solders and flux containing no more than 0.2 percent lead, and pipes and pipe fittings containing no more than 8.0 percent lead. In addition, the act called for the development of voluntary performance standards and testing protocols for the leaching of lead from new plumbing fittings and fixtures by a qualified third-party certifier or, if necessary, promulgated by EPA. A third-party certifier set such a standard in 1997, limiting the amount of lead that fittings and fixtures may contribute to water to 11 parts per billion.

To address the potential risks of lead contamination in water supplies serving schools and child care facilities, Congress passed the Lead Contamination Control Act of 1988. Among other things, the act banned the manufacture and sale of drinking water coolers containing lead-lined tanks and other water coolers that are not lead-free and required (1) EPA to publish a list of such coolers and distribute it to the states, (2) the Consumer Product Safety Commission to issue an order requiring manufacturers and importers to repair or replace lead-lined coolers or recall and provide a refund for them, and (3) the states to establish programs to assist local agencies in addressing potential lead contamination. In 1990, EPA identified six models of water coolers from one manufacturer that contained lead-lined tanks, but the agency was unable to obtain information on the number of units produced. Regarding water coolers that were not lead-free, EPA identified three manufacturers that produced coolers containing lead solder that could contaminate drinking water. The manufacturers reported producing at least 1 million of the coolers.

Following the discovery of elevated lead levels in the District of Columbia’s drinking water, EPA undertook a year-long evaluation to gain insight into how states and local communities are implementing the lead rule and to determine whether the problems identified in the District of Columbia are occurring elsewhere.
EPA’s activities included a series of expert workshops on key aspects of the rule (monitoring protocols, simultaneous compliance, lead service line replacement, public education, and lead in plumbing fittings and fixtures), a review of state policies and practices for implementing the lead rule, data verification audits that covered the collection and reporting of compliance data for the lead rule in 10 states, and an expert workshop and a review of state efforts to monitor for lead in drinking water at schools and child care facilities. Participants in EPA’s expert workshops included representatives of federal and state regulatory agencies, drinking water systems, researchers, public interest groups, and others.

Although EPA’s data on the results of testing indicate that the lead rule has largely been successful in reducing lead levels, the reporting of these data has not been timely or complete. In addition, key data on the status of water systems’ efforts to implement the lead rule, including required corrective actions, are incomplete. EPA’s data on lead rule violations are also questionable because of potential underreporting by the states. The lack of data on key elements of lead rule implementation makes it difficult for EPA and others to gauge the effectiveness of efforts to meet and enforce the rule’s requirements.

When the lead rule was first implemented, initial monitoring disclosed that several thousand water systems had elevated lead levels—that is, more than 10 percent of the samples taken at these systems exceeded the 15-parts-per-billion action level. EPA’s most recent data indicate that the number of water systems that exceed the lead action level has declined by nearly 75 percent since the early 1990s. The systems that currently have a problem with elevated lead levels represent about 2 percent of all water systems and serve approximately 4.6 million people. Figure 3 shows the results (by system size) of the initial lead monitoring, conducted from 1992 to 1994, and more recent testing from 2002 through the quarter ending in June 2005. EPA, state, and water industry officials generally see the decline in the number of systems with elevated lead levels as evidence that the lead rule has been effective and point to corrosion control treatment as the primary reason. Another indicator of success is the number of water systems approved for reduced monitoring. Under the lead rule, water systems can obtain state approval to reduce both the frequency of monitoring and the number of samples included in the testing when test results show lead levels consistently below the action level. According to EPA’s data, nearly 90 percent of all water systems have qualified for reduced monitoring.

After several years of experience with the lead rule, in January 2000, EPA made significant changes to the information states were required to report for inclusion in the agency’s database. Among other things, EPA added a requirement for states to report, for large and medium-sized systems, all 90th percentile test results, not just the results for systems that exceed the action level. EPA said that it planned to use these test results to show how levels of lead at the tap have changed over time for large and medium systems and, by extrapolation, for small systems. Although the new reporting requirements took effect in January 2002, EPA’s database contained 90th percentile test results for only 23 percent of the large and medium systems by January 2004.
EPA officials explained that states were still having difficulty updating their information systems to accommodate the new reporting requirements and that, for EPA, obtaining the data was not a priority at that time. Following the detection of elevated lead levels in the District of Columbia, however, EPA made a concerted effort to obtain more complete information from the states, and, as of June 2004, EPA reported that it had data for nearly 89 percent of the large and medium systems (based on an analysis of test results submitted from January 2000 through May 2004). However, we also analyzed data on the results of lead testing and found that EPA’s database does not contain current information for a much larger percentage of large and medium water systems. Specifically, we found that for the period from January 2002 through June 2005, EPA’s database lacks any test results for nearly 31 percent of the large and medium water systems. We could not determine whether the data are missing because states have not reported the results or because testing has not occurred. When asked whether states have been updating test results in a timely manner since 2004, an EPA official said that the timeliness of recent test data is unknown; the agency has not been tracking whether states are adequately maintaining data on the results of lead testing. Regarding the information required for small water systems—which is limited to test results exceeding the action level—officials from both the Office of Ground Water and Drinking Water and the Office of Enforcement and Compliance Assurance indicated that some data are probably missing but could not provide specific estimates. An official from the Office of Ground Water and Drinking Water commented that EPA’s database likely includes most of the required small system data because action level exceedances trigger follow-up activities and states are more likely to pay attention to those cases.

As part of EPA’s efforts to improve its indicators of lead rule implementation, the agency restructured its reporting requirements and reduced the number of “milestones” that states are required to report from 11 to 3. EPA established three corrective action milestones: (1) a DEEM milestone, meaning that the system is deemed to have optimized corrosion control; (2) an LSLR milestone, meaning that the system is required to begin replacing its lead service lines; and (3) a DONE milestone, meaning that the system has completed all applicable requirements for corrosion control, source water treatment, and lead service line replacement. EPA officials told us that the vast majority of water systems should have at least one milestone in the database. They indicated that in most instances systems should have a DEEM designation because they have installed corrosion control or otherwise qualify for meeting the milestone. However, we found that, overall, EPA has information on corrective action milestones for only 28 percent of the community water systems nationwide—and lacks any milestone data on the remaining 72 percent. Table 1 summarizes the results of our analysis. The extent to which milestone data were reported to EPA varied from state to state. We found that 22 states had not reported milestones for any of their water systems and that another 8 states had reported data on only about 10 percent of their systems. (See app. II for a state-by-state breakdown of reported milestone data.)
EPA officials believe that most water systems have actually taken the steps necessary to meet the criteria for the DEEM milestone, at a minimum, and attribute the lack of milestone data to non-reporting by the states rather than noncompliance by the water systems. They also suggested that some of the 22 states we identified as having reported no milestone data, based on our analysis of EPA’s current data, may have reported corrective actions prior to 2000, when EPA modified the number and type of milestones. However, we reviewed archived data in EPA’s database and found that 8 of the 22 states had also not reported any milestones prior to 2000, and another 11 states had reported data on no more than 10 percent of their systems. Overall, the 50 states had reported milestone data for only 5.7 percent of their community water systems prior to 2000.

Moreover, some information in EPA’s database is inconsistent with other reported data. Specifically, we found differences between the information on lead service line replacement in EPA’s database—systems having an LSLR milestone—and the information states reported in the agency’s 50-state review of lead rule implementation policies and practices. As table 2 shows, seven states reported requiring lead service line replacement in response to EPA’s June 2004 query but did not have any LSLR milestones in EPA’s database in the same time frame. In addition, after following up with state officials, we found that EPA’s database did not contain accurate data on the number of water systems required to replace lead service lines because the states were not providing timely updates or correcting erroneous information.

Periodic audits by EPA—and our own analyses—raise questions about the completeness of EPA’s data on lead rule violations. To assess the reliability of its drinking water data, EPA regularly conducts data verification audits that evaluate state compliance decisions and the adequacy of states’ reporting to the national database. In addition, EPA prepares a national summary evaluation of the reliability of drinking water data every 3 years. While past data verification audits have not assessed compliance decisions under the lead rule, to the extent that states’ reporting practices are relatively consistent across regulations, the audits may shed some light on the types of problems likely to be found in the reporting of lead rule data.

According to the most recent national summary of data reliability, which covered audits conducted from 1999 to 2001, the estimated error rate for health-based violations—involving maximum contaminant level or treatment technique requirements—was 35 percent, down from 60 percent in the prior national report, which covered audits conducted from 1996 to 1998. For monitoring and reporting violations, the estimated error rate was 77 percent, down from 91 percent in the prior report. The March 2004 report said that most violation errors resulted from incorrect compliance determinations by the states, meaning that the state should have cited a violation but did not. Other problems included “data flow” errors (when the state correctly identified a violation but did not report it to EPA) and errors in EPA’s database (such as violations that were incorrectly reported or not removed when rescinded). Another analysis from EPA’s March 2004 report did include the lead rule, and the results also raise questions about the completeness of EPA’s data on lead rule violations.
The report states that by means of a tool that tracks the number of violations reported in each state over a period of several years, EPA determined that 14 states had not reported any treatment technique violations under the lead rule during a 6-year period from 1997 to 2002. The report noted that this potential non-reporting should be evaluated further and recommended that EPA and the states conduct annual evaluations of all instances of potential non-reporting.

EPA’s Office of Ground Water and Drinking Water asked the regional offices to follow up with the states regarding the potential underreporting, as recommended in the March 2004 report on data reliability. For the most part, however, the regions’ responses did not address the lack of treatment technique violations under the lead rule in the applicable states; two of the regional offices did not provide written responses. Officials from EPA’s Office of Enforcement and Compliance Assurance were not aware of the violations analysis. The officials told us that because of limited resources, they focus their efforts on helping to ensure that states address the worst compliance problems—water systems identified as significant noncompliers as a result of the frequency or severity of their violations.

A lack of violations—or a relatively low number of water systems with violations—does not necessarily mean that states are not meeting reporting requirements, or that their compliance monitoring and enforcement efforts are inadequate. However, analyzing the violations data and following up on the results could provide some useful insights into the reasons for differences among the states; it could also help identify problem areas and best practices. We updated EPA’s analysis of violations and, as table 3 shows, the percentage of water systems that have had one or more violations over the past 10 years varies from state to state, particularly in the case of monitoring violations. Appendix III contains a state-by-state analysis of lead rule violations reported from 1995 to June 2005.

More recently, EPA conducted data verification audits during the fall of 2004, which focused exclusively on states’ compliance determinations under the lead rule in five states and included the lead rule as part of the audit in another five states. However, the results are not yet available. EPA officials have been analyzing the data and obtaining comments on the preliminary findings from the states; they expect to issue a final report by the end of calendar year 2005.

In changing its reporting requirements in January 2000, EPA recognized that it needed better indicators of the lead rule’s implementation. Regarding the 90th percentile results of lead monitoring, EPA noted that in terms of routine reporting, these data are the only measure it has for showing the lead rule’s effectiveness and said that, without such data, the agency would have no way to measure progress. Similarly, EPA maintained that having information on water systems’ corrective action milestones, along with quarterly violation and follow-up information, would provide data on the status of lead rule implementation and allow the targeting of compliance and enforcement activities. Given the reduced number of milestones, EPA indicated that it would be critical for states to report the information completely and in a timely manner, and that the agency would be following up with the states to ensure that such reporting was occurring.
Despite the importance of the 90th percentile results and corrective action milestones to evaluating the lead rule’s implementation, our analyses confirmed or identified significant and longstanding gaps in the amount of information available. Although EPA attempted to ensure that it had complete data on the results of lead testing following the publicity surrounding the incidence of lead contamination in the District of Columbia, the problems with incomplete test result data have continued, and the agency has not followed up on the missing milestone data.

EPA has also been slow to take action on the potential underreporting of violations. As noted earlier, following its March 2004 report on data reliability, EPA did not determine the reasons for the lack of violations reported by some states. EPA’s previous summary evaluation, which was issued in October 2000, identified similar indications of underreporting and called for targeted attention to the applicable states and regions to address the issues and develop action plans.

EPA needs complete, accurate, and timely data to monitor water systems’ progress in implementing the lead rule, identify potential problem areas and best practices, and take appropriate action. In particular, not having complete or reliable data on corrective action milestones or violations makes it difficult to assess the adequacy of EPA and state enforcement efforts. However, officials from EPA’s Office of Enforcement and Compliance Assurance told us that the amount of enforcement resources devoted to the drinking water program—including enforcement of the lead rule—has declined in recent years. They also told us that while they hold monthly meetings with their counterparts in EPA’s regional offices and state officials to discuss the more significant violators, the officials have not systematically evaluated state enforcement efforts with regard to the lead rule. See appendix IV for information on EPA and state enforcement actions, by type, from 1995 to June 2005.

EPA and state officials attribute the problems with lead rule data to the complicated nature of the rule, the incompatibility of EPA and state information management systems, and resource constraints. For example, EPA officials noted that it is difficult to ensure that the database contains complete information—and includes data on every system that is required to test for lead in a particular period—because the frequency of required testing can vary depending on whether a system has qualified for reduced monitoring (and maintains that status in future periods). The same circumstances also make it difficult to develop trend data.

EPA and state officials indicated that the January 2000 minor revisions to the lead rule, which made significant changes in states’ reporting requirements, exacerbated existing problems with the transfer of accurate and timely data from the states to EPA. For that and other reasons, modifying the states’ data systems to incorporate the new reporting milestones has been delayed. In addition to problems with the structure of the information systems—and technical problems in actually transferring data from the states to EPA—EPA and state officials acknowledge that reporting water systems’ milestone data has been a low priority. The officials explained that since January 2004, states have been focusing their limited resources on reporting the 90th percentile test results for large and medium water systems.
EPA and the Association of State Drinking Water Administrators have been working on a Safe Drinking Water Information System modernization effort that should address at least some of the current data problems, according to EPA officials. Among other things, the modernization will make it easier to transfer data between states and EPA, so EPA’s data will be more timely. To improve the accuracy of the data, EPA’s system will have a component designed to validate state data before it is entered into the federal database. As of October 2005, EPA had completed the transition to its modernized system for the entry of new data.

Based on their experiences in implementing the lead rule, EPA, state, and water system officials have identified six aspects of the rule for which oversight could be improved or the requirements modified to increase public health protection. Specifically, their experiences indicate that (1) the sampling sites used for lead testing may no longer reflect areas of highest risk, (2) reduced monitoring may not be appropriate in some instances, (3) the homeowners who participate in tap monitoring may not be informed of the test results, (4) controls over when and how treatment changes are implemented may not be adequate, (5) data on the effectiveness of lead service line replacement programs are limited, and (6) states vary in how they apply the lead rule when water systems sell drinking water to other systems. In addition, some of the officials responsible for implementing the lead rule and other drinking water experts believe that existing standards for plumbing fixtures may be outdated. EPA is considering modifications to the lead rule that will address some of the problems we identified.

Under the lead rule, water systems must select sampling sites that are considered to be at high risk for contamination. The rule defines Tier 1 sites as single-family structures served by lead service lines and/or containing lead pipes (or copper pipes with lead solder installed after 1982). According to participants in EPA’s workshop on monitoring protocols and state officials we interviewed, one problem is that EPA has never updated its site selection criteria and at least one of the criteria is outdated. Specifically, enough time has elapsed so that lead solder in plumbing installed from 1983 to 1986 is no longer “fresh” (lead solder was banned in 1986). Experts believe that, by now, solder from that period has been coated by a naturally occurring film that prevents lead leaching.

Moving the sampling sites to other Tier 1 locations—for example, homes served by lead service lines—could be problematic. In the preamble to the lead rule, issued in 1991, EPA cited a survey by the American Water Works Association, which estimated that only about 20 percent of the nation’s community water systems have lead service lines. Moreover, although the lead rule required water systems to do a “materials evaluation” to identify an adequate pool of high risk sampling sites, according to EPA the evaluation did not assess pipe materials system-wide, and many systems do not have a complete inventory of their service lines. A related problem is that sampling locations have likely changed over time as sites are no longer available or appropriate, and states may not have procedures in place to ensure that these locations continue to represent the highest risk sites.
In this regard, EPA requested information from the states on how they “ensure that site locations were correctly followed during system sampling rounds.” As table 4 shows, a significant number of states may not be tracking changes in water systems’ sampling locations.

Another uncertainty is whether systems that are on reduced monitoring—and have been allowed to reduce the number of samples they collect—are taking samples from locations that represent the highest risk sites based on previous testing. According to the lead rule, these water systems must take their samples from sites included in the pool of high risk sampling sites identified initially. Although the systems have some indication of which sites within the pool have historically tested at higher or lower lead levels, the rule is silent on how sites within the pool are to be selected for reduced monitoring, except that they must be “representative” of the sites required for standard monitoring. In addition, the rule provides that states may specify the sampling locations. EPA requested information from the states on what role they play in selecting the sites used for reduced monitoring. We analyzed the states’ responses and found that, in most instances, the states’ role is limited; table 5 summarizes the results of our analysis.

According to EPA’s lead rule, small and medium-sized water systems whose test results are consistently at or below the action level may reduce the frequency of monitoring from once every 6 months to annually and, if acceptable results continue, to once every 3 years. In addition, systems of any size that operate within water quality control parameters reflecting optimal corrosion control treatment, as specified by the state, may reduce the frequency of monitoring under the same schedule. The rule also lays out conditions under which water systems must return to standard monitoring—for example, small and medium-sized systems that have exceeded the action level. In addition, states have the flexibility to require systems to resume standard monitoring if the state deems it to be appropriate.

We analyzed EPA’s compliance data and found some instances that raise questions about the states’ decisions to allow reduced monitoring. Specifically, we found that 49 large and medium water systems were exceeding the 15-parts-per-billion action level and appeared to be on reduced monitoring schedules. In addition, our analysis indicates that 104 large and medium systems with lead levels of 13-15 parts per billion also appear to be on reduced monitoring schedules. Although this is allowable under EPA’s regulations, according to some state officials, systems with lead levels just below the action level should be subject to closer scrutiny and, thus, may not be good candidates for reduced monitoring.

To determine how states exercised their discretion with regard to monitoring frequency, we reviewed their responses to EPA’s information request, which asked the states to describe how they determine if reduced monitoring is appropriate. According to their responses, the states by and large adhere to the requirements of the lead rule and allow reduced monitoring whenever a water system’s test results are at or below the action level in consecutive monitoring periods. Specifically, 40 states reported that they follow the federal regulation, 6 states indicated that they may be using some additional criteria for their reduced monitoring determinations, and 4 states did not answer or provided information that was nonresponsive.
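The schedule rules described above also lend themselves to the kind of consistency check our analysis performed: flagging systems whose latest 90th percentile result exceeds the action level while they remain on a reduced schedule. A simplified sketch follows (it omits the consecutive-period bookkeeping, the water-quality-parameter pathway for large systems, and state discretion, all of which bear on real determinations):

    ACTION_LEVEL_PPB = 15.0
    SCHEDULES = ("semiannual", "annual", "triennial")  # standard -> reduced

    def next_schedule(current, latest_90th_ppb):
        # An exceedance sends the system back to standard (semiannual)
        # monitoring; otherwise the schedule may step toward triennial.
        # Simplified: the rule requires consecutive acceptable periods
        # before each step.
        if latest_90th_ppb > ACTION_LEVEL_PPB:
            return "semiannual"
        step = SCHEDULES.index(current)
        return SCHEDULES[min(step + 1, len(SCHEDULES) - 1)]

    def flag_inconsistent(inventory):
        # Systems whose latest result exceeds the action level while they
        # remain on a reduced schedule -- the pattern our analysis found.
        return [s["id"] for s in inventory
                if s["latest_90th_ppb"] > ACTION_LEVEL_PPB
                and s["schedule"] != "semiannual"]

    inventory = [
        {"id": "sys-A", "schedule": "triennial", "latest_90th_ppb": 18.0},
        {"id": "sys-B", "schedule": "annual", "latest_90th_ppb": 9.0},
    ]
    print(flag_inconsistent(inventory))  # ['sys-A']
    print(next_schedule("annual", 9.0))  # 'triennial'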
EPA did not ask for the states’ views on whether reduced monitoring is appropriate when a water system’s test results are at or just below the action level or on circumstances in which states might determine that previously approved reduced monitoring is no longer appropriate—and the states did not volunteer such information. None of the states reported using other criteria, such as test results that are at or just below the action level, to delay or rescind approval for reduced monitoring.

A key issue is whether water systems should be required to resume standard monitoring following a major treatment change so that the potential effects of the change can be evaluated. Given the circumstances in which lead contamination became a problem in the District of Columbia, when a change in the system’s disinfection treatment impaired the effectiveness of corrosion control, such decisions can be critical. In its information request on state implementation policies and practices, EPA asked the states whether they had ever required a system to conduct more frequent monitoring to evaluate the potential effects of a treatment change. It would have been useful to know more about the states’ policies and practices in this regard, including how often the states required additional monitoring and the criteria they used in making such determinations. However, EPA’s question was limited in scope and, as table 6 shows, the states often did not elaborate.

In our discussions with 10 states, we found a variety of policies and practices regarding reduced monitoring. For example, officials from California and New York told us that they do not approve reduced monitoring—or are reluctant to do so—when water systems’ test results are close to the lead action level. On the other hand, Connecticut and Massachusetts officials indicated that they have systems that are on reduced monitoring despite test results close to the action level. Several other states indicated that, in the case of large water systems, approval for reduced monitoring is linked to whether the systems are meeting their water quality parameters—not the results of lead monitoring.

On the issue of monitoring following a major treatment change, some participants at EPA’s monitoring workshop stated that standard compliance monitoring does not adequately evaluate the impact of treatment changes and that monitoring immediately after major changes should be required. Several of the states we contacted also favor increased monitoring under these circumstances; Florida and New York, for example, require systems to return to semi-annual monitoring following a treatment change. Pennsylvania officials agree that the state and water system should revisit the treatment approach when monitoring results indicate that a treatment change is affecting water chemistry. However, the officials acknowledged that they may not find out about the impact of treatment changes in a timely manner when water systems are on a triennial monitoring schedule.

According to EPA’s information request on state implementation policies and practices, only two states require their water systems to notify homeowners of the results of lead testing—Texas (only when results exceed the action level) and Wisconsin. At least 17 other states indicated that notification may be occurring voluntarily to varying degrees. Table 7 summarizes the results of our analysis.

In some instances, changes to other treatment processes can make corrosion control less effective.
According to EPA, state, and industry officials, one of the biggest challenges in implementing the lead rule is achieving “simultaneous compliance” with other rules, including, in particular, rules related to total coliform bacteria, surface water treatment, and disinfection by-products. Changing the type of disinfectant a system uses to control bacteria, for example, can impair the effectiveness of a system’s corrosion control treatment to prevent lead contamination.

Among other things, states assuming primary enforcement responsibility must have a process for ensuring that the design and construction of new or substantially modified water system facilities will be capable of meeting drinking water regulations, including the lead rule. In addition, in its minor revisions to the lead rule, EPA added a requirement that certain water systems must notify the state no later than 60 days after making a change in water treatment.

However, the responses to EPA’s information request raise questions about the nature and extent of states’ reviews of treatment changes. On the one hand, 31 states indicated that they had some type of proactive process to review or evaluate treatment changes, before or after the treatment was installed, including 15 states that reported requiring some or all of the affected water systems to provide information on the potential effects of treatment changes on corrosion control. On the other hand, it appears that in at least 15 states, the plan review process may be limited, or the states may not be receiving notifications from all their water systems. For example, some states indicated that their review process only covers changes to a system’s physical infrastructure—or specifically excludes changes in the chemicals used in a process. Other states reported that they are not learning of some treatment changes until they conduct comprehensive inspections of the water systems, or that small systems in particular are not notifying the state when they change their treatment processes.

Some of the participants in EPA’s May 2004 workshop on simultaneous compliance cited a need for additional regulations or guidance to help ensure that the effectiveness of corrosion control is maintained when water systems make changes to other treatment processes. For example, some participants suggested that the lead rule should better define or even specify the types of treatment changes that (1) should be reported to the state and (2) trigger additional monitoring or analysis. Along those lines, Washington state officials told us that certain changes, such as switching the disinfectant from chlorine to chloramines or making adjustments that affect the water’s pH or alkalinity, may warrant closer review because of the potential impact on corrosion control. The officials also noted that additional guidance from EPA on these matters would be helpful. Others believe that small water systems, in particular, need more guidance on the potential effects of various treatment changes, and that operator certification and training programs should be updated to address these topics.

Under the lead rule, drinking water systems may be required to replace lead service lines if test results exceed the action level after installing corrosion control and/or source water treatment.
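That trigger sits at the end of an escalating sequence of corrective actions. The following schematic encodes only the sequence as just stated; the function and argument names are ours, and the rule’s actual tests and timetables are considerably more detailed:

    def required_next_action(exceeds_action_level,
                             corrosion_control_installed,
                             source_water_treatment_addressed):
        # Escalation sequence as described in the text: corrosion control
        # first, then source water treatment where applicable, and lead
        # service line replacement if the action level is still exceeded.
        if not exceeds_action_level:
            return "no corrective action triggered"
        if not corrosion_control_installed:
            return "install corrosion control treatment"
        if not source_water_treatment_addressed:
            return "evaluate and, if needed, install source water treatment"
        return "begin lead service line replacement"

    print(required_next_action(True, True, True))
    # begin lead service line replacement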
Some of the participants in an EPA workshop on lead service line replacement and state officials we contacted raised questions about the effectiveness of replacement programs, in part because such programs often result in partial replacement only. Water systems are responsible for replacing only the portion of the service lines they own. While residential customers may, at their option, pay the cost of replacing the rest of the service line—typically, the portion running from the curb stop or property line to the household plumbing system—some evidence suggests that customer participation in such programs is generally low.

According to workshop participants, little conclusive information is available on the extent to which removing lead service lines lowers lead levels at the tap. In a survey of water systems conducted for the American Water Works Association, 18 of 27 respondents indicated that lead service lines were not responsible for the highest levels of lead in drinking water, and 20 of 29 respondents reported no observed linkage between lead service lines and lead levels in drinking water. However, the survey did not include information on test results before and after replacement of lead service lines. The American Water Works Association Research Foundation is sponsoring a study of the relative contributions of service lines and plumbing fixtures to lead levels at the tap; the projected completion is fall 2008.

The limited data on the extent and results of lead service line replacement programs make it difficult to draw conclusions about the programs’ effectiveness or the need for additional regulations or guidance. As noted earlier, EPA’s data on corrective action milestones—including the LSLR milestone—are incomplete. Moreover, few states reported requiring systems to replace lead service lines in response to EPA’s information request on state implementation policies and practices. Specifically, when asked if they have any systems that have been required to do lead service line replacement, five states answered “yes” without elaborating and seven states reported a total of 27 water systems that are (or were) replacing lead lines.

In addition, although the lead rule requires testing following partial service line replacement, it appears that neither the states nor EPA is collecting and analyzing these test results. EPA asked states to describe the process they use to ensure that water systems are following the requirements for lead service line replacement. Among other things, the lead rule requires systems to collect samples within 72 hours following partial replacement and to notify homeowners and occupants of the results. States may waive the requirement that these test results also be provided to the states. Of the 12 states that reported requiring one or more water systems to replace lead service lines, only one indicated that its water systems might be required to report the results of service line testing to the state.

Some of the officials we contacted raised concerns about whether the benefits of replacement are enough to justify what can be a significant investment. For example, Iowa drinking water officials commented that partial replacement is not a good use of resources because it disturbs the line, releasing lead particulate matter into the water, and still leaves half the lead line in place.
In addition, officials from the Syracuse Water Department told us that they are planning to replace lead service lines at a cost of $5.3 million, although they are skeptical that the effort will significantly reduce lead levels, citing the age of the housing stock and lead contributions from internal residential plumbing. The officials attribute the city’s problem with elevated lead levels to a simultaneous compliance issue. Specifically, adding a phosphate-based corrosion inhibitor to further reduce the corrosiveness of the drinking water solves one problem but creates another: excessive phosphates in the system’s discharges to a local lake.

Participants at EPA’s workshop on lead service line replacement and some of the state and water industry officials we contacted suggested measures to help ensure that water systems maximize the potential benefits of replacement efforts. For example, some workshop participants called for EPA guidance on strategies to encourage full service line replacement and motivate customers to have their portion of the line removed. Such strategies might include subsidizing a portion of the replacement cost, offering low-interest loans or property tax relief, requiring disclosure of lead service lines in property sales, or providing more information on the health effects of exposure to lead in drinking water. Others suggested that prioritizing the replacement of lead service lines would help ensure that replacement activities focus on the populations most at risk from exposure to elevated lead levels. Some utilities are already prioritizing service line replacement using criteria such as locations with vulnerable populations, including schools and child care facilities, locations where test results have exceeded the action level, and lines serving 20 or more people in an 8-hour day.

We found some differences among the states in how interconnected water systems—generally comprising a system that sells drinking water along with one or more systems that buy the water—are required to monitor for lead and report the results. According to EPA’s proposed definitions, these interconnected water systems are known as “combined water distribution systems.” The variations in state implementation practices create differences in the level of public information and, potentially, public health protection. Combined distribution systems account for a large and growing share of the nation’s community water systems, so differences in how they implement the lead rule could have broad implications for public health protection. Overall, EPA estimates that there are currently about 2,800 combined distribution systems that encompass about 13,900 individual systems, likely accounting for a significant share of all community water systems.

Under EPA regulations that establish general requirements for drinking water monitoring, states may modify the monitoring requirements imposed on combined distribution systems—typically by reducing the number of samples required within the combined system—“to the extent that the interconnection of the systems justifies treating them as a single system for monitoring purposes.” However, in the case of the lead rule, EPA strongly discouraged such modifications, commenting that they would not be appropriate because the primary source of elevated lead levels at the tap is materials within the distribution system.
At least four of the states we contacted—Massachusetts, Michigan, Oregon, and Washington—approved modified sampling arrangements at combined distribution systems. For example, the Massachusetts Water Resources Authority, which supplies all of the drinking water for 30 communities, currently takes lead samples at 440 locations under its modified sampling arrangement—significantly fewer than the 1,720 samples that would be required if each of the consecutive systems tested for lead individually. On the other hand, if the combined distribution system represented a single water system, only 100 samples would be required. EPA does not have comprehensive information on the extent to which states are approving modified sampling arrangements at combined distribution systems—or the reporting practices used by such systems. As table 8 shows, we found differences in how combined distribution systems calculated and reported their 90th percentile test results.

Not only do the reporting practices approved by the states affect the amount of information available to the public—they can also have implications for the corrective actions that are taken to reduce lead levels. For example, reporting one overall result for lead testing can be misleading if the 90th percentile levels at individual consecutive systems would have exceeded the action level. In the case of the Massachusetts Water Resources Authority, although EPA’s database contains the overall result for the combined system, authority officials calculated the 90th percentile results for each of the consecutive systems and determined that lead concentrations at some of them exceeded the action level. State officials in Massachusetts told us that until recently, none of the consecutive systems whose individual test results exceeded the action level were required to meet public notification or public education requirements or to replace lead service lines—as long as the result for the combined system met the action level. Although EPA regional officials concurred with such arrangements when they were first established, EPA is now considering how to ensure that the lead rule requirements will be applied to each community within a combined distribution system. Based on discussions with EPA regional officials, Massachusetts has already changed its policy and will be revisiting agreements with combined distribution systems.

The standards applicable to plumbing products are important to utility managers who are responsible for ensuring the quality of water at the tap but have little control over household plumbing. However, existing standards may not be protective enough, according to some experts, because testing has determined that some of the products defined as “lead-free” under the Safe Drinking Water Act can still contribute high levels of lead to drinking water. For example, although the act prohibits the use of solder or other plumbing materials in the installation or repair of any public water system if it is not lead-free, lead-free is defined to include materials that contain small amounts of lead. That is, solders and flux may contain up to 0.2 percent lead, and pipes and pipe fittings may contain up to 8 percent lead. In addition, plumbing fittings and fixtures may leach lead up to 11 parts per billion into drinking water and still be deemed lead-free, according to voluntary standards established by an independent organization and accepted by EPA.
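Taken together, these content and leaching thresholds define “lead-free” in this context. As a purely illustrative sketch, the limits just cited can be encoded as follows (the category labels and function name are ours; note that the leaching limit is a voluntary standard for fittings and fixtures, not part of the statutory definition):

    # The "lead-free" thresholds cited above, encoded for illustration:
    # solder and flux up to 0.2 percent lead; pipes and pipe fittings up
    # to 8 percent lead; and, under the voluntary standard, fittings and
    # fixtures leaching no more than 11 ppb.
    CONTENT_LIMITS_PCT = {"solder_or_flux": 0.2, "pipe_or_fitting": 8.0}
    LEACH_LIMIT_PPB = 11.0

    def meets_lead_free_definition(category, lead_content_pct, leach_ppb=None):
        """True if the product meets the content limit and, when a leach
        test result is supplied, the voluntary 11 ppb leaching limit."""
        if lead_content_pct > CONTENT_LIMITS_PCT[category]:
            return False
        if leach_ppb is not None and leach_ppb > LEACH_LIMIT_PPB:
            return False
        return True

    print(meets_lead_free_definition("pipe_or_fitting", 5.5))        # True
    print(meets_lead_free_definition("pipe_or_fitting", 5.5, 39.0))  # False: leaches over 11 ppb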
NSF International (NSF)—a not-for-profit, non-governmental organization involved in standards development and product certification—established the standard in 1997. NSF used a voluntary consensus process that included representatives from regulatory agencies, industry, water suppliers, consultants, and other users of the products governed by the standard.

One problem with the current regulatory framework is that certain devices used in or near residential plumbing systems are not covered by all standards for lead-free plumbing. Table 9 shows how the standards governing lead content and lead leaching apply to specific categories of products. Some of the products that are not covered by the voluntary leaching standard have been found to contribute high levels of lead to drinking water during testing. For example, tests conducted by NSF indicate that certain meters and valves can leach substantial amounts of lead. At our request, NSF compiled test results for a nonprobability sample of water meters and valves that had been submitted for evaluation. While all of the products in the sample were well below the 8 percent limit on lead content, the test results showed that the amount of lead leached from the selected water meters ranged from 0.4 parts per billion up to 39 parts per billion and, in the case of valves, ranged from a low of 4.1 parts per billion to as much as 530 parts per billion. An NSF official commented that although these products are representative of what is submitted to NSF for testing, they are probably not representative of what is available in the marketplace because some manufacturers have two product lines—a low-lead line for buyers who specify products that meet NSF Standard 61 and a higher-lead line for other buyers.

Another issue is that NSF’s testing protocol for lead leaching may not accurately reflect actual conditions and may need to be modified. One recent study identified several aspects of NSF’s testing protocol that should be reevaluated, including, for example, the chemistry of the water in which tests are conducted. After demonstrating that potentially unsafe devices could pass NSF’s test, the study concluded that the protocol “lacks the rigor necessary to prevent installation of devices that pose an obvious public health hazard.” NSF officials told us that they are aware of the concerns and have already made some clarifications and changes to the protocol. NSF has also established a task force, the Drinking Water Additives Joint Committee, which will be reviewing the protectiveness of NSF Standard 61 and related testing.

Representatives of NSF, water utilities, and researchers also took issue with the standard for lead content, noting that it has not been updated to reflect current manufacturing capabilities and practices. According to the American Water Works Association, manufacturing technology in the plumbing industry has improved since the lead-free definition was established nearly 20 years ago, and today’s plumbing products contain less lead as a result. Data on the lead content of plumbing products voluntarily submitted to NSF for evaluation, shown in table 10, suggest that manufacturers can produce products with lead levels well below the 8 percent standard. According to NSF, the extent to which lead leaches from products containing lead is not directly proportional to the level of lead used in any one alloy contained in the product.
NSF identified several factors that contribute to the level of leaching, including the corrosiveness of the water, lead content, the extent of the leaded surface area, and the process used to manufacture the product. However, the state regulators, water industry representatives, and other experts we interviewed generally agreed that lowering the existing standard for lead content is feasible and would provide an extra margin of safety. Both the Copper Development Association and the Plumbing Manufacturers Institute acknowledged that most plumbing products are below the 8 percent limit on lead content but prefer that plumbing standards focus on performance—the leaching of lead—rather than content.

We did not attempt to determine the extent to which the standards for lead-free plumbing products are enforced. According to NSF, the use of plumbing products within a building is generally regulated at the state, county, and city levels through plumbing codes. NSF representatives also said that all model plumbing codes reference NSF Standard 61 for pipes, fittings, and faucets. NSF reports that most faucets sold at the retail and wholesale level are certified to meet Standard 61, but fewer valves and other in-line devices are certified to the standard because it is not required in model plumbing codes.

State efforts to implement more stringent standards for plumbing products appear limited, based on our discussions with federal and state regulators and representatives of the water industry and plumbing manufacturers. We identified two states in which such activities have occurred:

In California, the Attorney General sued 16 manufacturers and distributors of kitchen and bathroom faucets in the early 1990s, alleging that lead leaching from brass components of their faucets violated California law. The suit resulted in settlement agreements with the companies and a related court decision in which they agreed to reduce leaching levels. According to an official with the California Attorney General’s Office, the limit on lead leaching is 5 parts per billion for residential kitchen faucets and 11 parts per billion for all other faucets.

According to officials with the Massachusetts Board of State Examiners of Plumbers and Gas Fitters, in 1995 the board established a 3 percent limit on the lead content of endpoint and in-line devices installed inside the home. Board officials acknowledge that enforcement of the standard is difficult because products containing more than 3 percent lead may be sold in Massachusetts stores as long as the products are not installed in Massachusetts homes. Moreover, the packaging does not indicate lead content or certification to the state standard.

At the local level, some water systems are installing no-lead meters—which contain less than 0.25 percent lead—because of concerns about the potential impact of leaded brass meters on lead levels at the tap. In some instances, the water systems are targeting their meter replacement to buildings housing schools and child care facilities.

Based on its year-long evaluation of the lead rule and how it is being implemented, EPA concluded that the conditions that led to elevated lead levels in the District of Columbia were not indicative of the conditions nationwide.
However, in November 2004, while its evaluation was still ongoing, EPA issued a guidance memorandum to reiterate and clarify specific regulatory requirements after the agency’s review of state programs and some press reports identified inconsistencies in how drinking water systems and the states were carrying out the regulation. The memorandum focused on requirements related to collecting samples and calculating compliance. In addition, in March 2005, EPA announced a Drinking Water Lead Reduction Plan to improve and clarify specific areas of the rule and the agency’s guidance materials. The plan identifies nine targeted revisions of the regulations and updates to two guidance documents. Specifically, EPA’s lead reduction plan calls for regulatory revisions to the following:

Monitoring requirements. These revisions would (1) clarify the number of samples required, (2) clarify the number of locations from which samples should be collected, (3) modify definitions of “monitoring period” and “compliance period,” (4) clarify the requirement to take all samples within the same calendar year, and (5) reconsider allowing large water systems that exceed the lead action level to qualify for reduced monitoring as long as their test results for water quality parameters are within acceptable limits.

Treatment requirements. These revisions would require water systems to notify the state of treatment changes 60 days prior to the change rather than within 60 days following the change.

Customer awareness requirements. These revisions would (1) require water systems to disclose test results to homeowners and occupants who participate in tap monitoring programs and (2) permit states to allow water systems to modify flushing instructions—the amount of time that homeowners are advised to run water before using it—to address local circumstances.

Lead service line replacement requirements. These revisions would require water systems to reevaluate lead service lines that previously “tested out” of the replacement program as a result of low lead levels if a subsequent treatment change causes the systems to exceed the action level.

In addition, EPA is considering updating its 1994 guidance on lead in drinking water in schools and non-residential buildings, along with its 1999 guidance on simultaneous compliance. So far, EPA has not released additional details on the nature of the changes being considered in some areas (e.g., number of samples and sampling locations) or what prompted its determination that revisions to the lead rule and related guidance might be warranted. An EPA workgroup, which was established when the lead reduction plan was issued, is developing the proposed rule for the regulatory changes, with a goal of releasing a proposal in late 2005 or early 2006. Revisions to the guidance documents are scheduled to be completed about the same time.

While the exact nature of some changes has yet to be defined, we asked the 10 states we contacted for their views on whether the proposed revisions would improve implementation of the lead rule. For the most part, state officials were in favor of the proposed changes involving the monitoring protocols. Although they wanted more details on how the requirements would be revised, they believed the changes to be relatively minor. In particular, most state officials agreed that large water systems that exceed the action level should not be allowed to reduce the frequency of lead monitoring based solely on their ability to meet water quality parameters.
Regarding earlier notification of treatment changes, officials from all 10 states we contacted supported such a revision, particularly for major treatment changes. The officials indicated that the notification requirement would not have a significant impact on their own practices because each of the states already had some type of process in place to permit or review treatment changes. Five of the states questioned whether 60 days’ advance notice would be sufficient to allow an adequate review. Several states suggested that EPA should require expedited monitoring of lead levels following major treatment changes—or issue guidance on when it would be appropriate for states to require such monitoring—and that EPA should issue guidance on what constitutes a major treatment change. In addition, officials from two states commented that EPA should require state approval of the treatment changes in addition to advance notification.

On the proposed revisions involving customer awareness, all 10 states agreed that homeowners who participate in the tap sampling program should be informed of the test results—particularly if the results for individual homeowners exceed the lead action level—whether or not the 90th percentile result for the entire system exceeds the action level. One state was concerned about the additional resources that would be required to track the water systems’ actions. Nearly all of the states also endorsed the proposal to give states and water systems more flexibility in determining what flushing instructions are appropriate in particular situations. Some states suggested that EPA guidance on making such determinations would be useful.

Regarding the proposed reevaluation of lead service lines that tested out of a replacement program, the states’ views were mixed. Although five states generally endorsed the idea, the other five states raised several concerns, including the potential cost to local drinking water systems, the administrative burden that such a requirement would impose on states, and the need for more specific information on the types of treatment changes that would trigger a reevaluation of lead service lines.

Over the long term, EPA plans to examine other issues related to lead rule implementation that may need to be addressed through regulation or guidance. EPA officials have indicated that, in some instances, they need more information to determine whether changes are warranted, and they are in the process of collecting and analyzing data, or have relevant research projects underway. According to EPA officials, some of the issues they plan to review include the sampling protocol, monitoring and reporting requirements for consecutive systems, the impact of disinfection treatment on corrosion control, and the requirements for lead service line replacement.

Little information exists on the results of activities initiated after enactment of the Lead Contamination Control Act (LCCA) of 1988, including the recall of lead-lined water coolers from schools and child care facilities. More recent efforts to detect and remediate lead in the drinking water at such facilities also appear limited. As a result, the extent to which drinking water may contain unacceptable levels of lead at schools and child care facilities nationwide is uncertain. In addition, no clear focal point exists at the federal or state level to collect and analyze the results of testing and remediation efforts.
Moreover, state and local officials say that addressing other environmental hazards at schools and child care facilities takes priority over testing for lead in drinking water.

The LCCA, enacted in 1988, laid out a number of requirements for EPA, the Consumer Product Safety Commission, and the states to address the potential risks of lead contamination in water supplies serving schools and child care facilities. Among other things, the act banned the manufacture and sale of drinking water coolers containing lead-lined tanks and other water coolers that are not lead-free, required EPA to publish a list of such coolers and distribute it to the states along with guidance on testing for and remedying lead contamination in drinking water, and required the Consumer Product Safety Commission to issue an order requiring manufacturers and importers to (1) repair or replace the coolers or (2) recall and provide a refund for them, because coolers containing lead-lined tanks were deemed to be imminently hazardous consumer products. In addition, the LCCA required states to establish programs to assist local agencies in addressing potential lead contamination. While the nature and extent of state activities varied widely, the program was never funded, according to EPA officials. In 1996, the requirement was determined to be unconstitutional.

To support the required recall, EPA identified six models of water coolers containing lead-lined tanks, all produced by one company and manufactured prior to April 1979. EPA could not obtain information on the number of units produced. The Consumer Product Safety Commission broadened the recall order to include all tank-type models of drinking water coolers manufactured by the company, whether or not the models were included on EPA’s list. Under the terms of the order, the manufacturer established a process under which qualified owners of the affected coolers could request a refund or replacement. The manufacturer was also required to notify appropriate officials and organizations, including state and school officials and day care centers, about the recall and the availability of refunds and replacements.

Little information is available to determine the effectiveness of the recall effort in removing lead-lined water coolers from service. Not only is the number of coolers affected by the recall unknown, but the Consumer Product Safety Commission did not have summary data on the results of the recall. An agency official confirmed information in a 1991 Natural Resources Defense Council report that, as of 1990, the Commission had received approximately 1,200 inquiries about the recall, 1,373 coolers had been determined to be eligible for replacement, 514 had been replaced, and 105 refunds had been mailed to customers. However, the official also said that many more coolers were replaced after that date and that by 1993, the manufacturer had received approximately 11,000 inquiries about the recall. The official believed that the actual number of replacements was potentially 10 times greater than those reported in 1991 and the refunds four to five times greater. In addition, the recall order did not specify an end date for filing a refund or replacement request, so an unknown number of coolers could have been taken out of service after 1993 without the knowledge of the manufacturer or the Commission.
According to several state and school officials we interviewed, virtually all of the water coolers affected by the recall have been replaced or removed, either as a result of the publicity surrounding the recall or because they had already been taken out of service. Some of the six models covered by the recall were manufactured in the 1950s and 1960s and are likely to have been retired because of their age or maintenance problems.

Beyond the recall effort, little or no data are available to assess the effectiveness of other actions taken in response to the LCCA. For example, little information is available on the extent to which schools and child care facilities were inspected to check potential lead contamination from water coolers that were not lead-free. While the act did not require EPA or the states to track or report on the results of testing, EPA was responsible for publishing guidance and a testing protocol to assist schools in determining the source and degree of lead contamination in school drinking water supplies and remedying such contamination. EPA published guidance for schools in 1989 and for child care facilities in 1994. We found no information indicating how pervasive lead-contaminated drinking water might be in such facilities, either nationwide or within particular states, but several studies conducted in the early 1990s contained some limited information on testing efforts:

In 1993, we reported on the results of a survey of 57 school districts in 10 states. We found that 47 districts were able to provide data on the results of testing, which showed that about 15 percent of the 2,272 schools tested had drinking water containing levels of lead considered unacceptable by EPA. We also contacted child care licensing agencies in 16 states to obtain information on their activities for addressing lead hazards and found that none of the agencies routinely inspected child care facilities for such hazards.

A 1990 report by EPA’s Inspector General found that, of the 13 school districts surveyed, 10 conducted some testing for lead in drinking water and 8 detected contamination, with some results exceeding acceptable levels by a wide margin.

According to the Natural Resources Defense Council’s 1991 study, 47 states reported some testing of school drinking water supplies, including 16 states that tested in “a few” to 25 percent of their schools, 27 states that tested from 25 percent to 82 percent of the schools, and 4 states that tested 95 percent or more of their schools. The study also found that 17 states reported testing at child care facilities.

In addition to these earlier studies, in 2004 EPA asked the states to provide information on current state and local efforts to monitor and protect children from lead exposure in drinking water at schools and child care facilities. As part of that effort, seven states also reported on the results of local testing following passage of the LCCA, stating that elevated lead levels were found in at least some of the locations tested. However, the states differed significantly in the extent of their testing and how they summarized the results. In five of the states, the results generally ranged from about 1 percent to 27 percent of samples, facilities, or districts with lead levels considered unacceptable by EPA—but the other two states finding elevated lead levels used a different assessment measure.

The extent of current testing and remediation activities for lead in school and child care facility drinking water appears limited.
The LCCA does not require states to track or report such activities and, based on the information that EPA collected from the states in 2004 and our own contacts in 10 states, few states have comprehensive programs to detect and remediate lead in drinking water at schools and child care facilities. Figure 4 shows the nature and extent of these activities; about half the states reported no current efforts.

Of the five states that reported having testing requirements, four—Connecticut, New Hampshire, South Carolina, and Vermont—require child care facilities to test their drinking water for lead contamination when obtaining or renewing their licenses. In the fifth state (Massachusetts), the testing requirement focuses on schools. Water systems must include two schools among their sampling sites in each round of lead testing, although the school data are not included in the 90th percentile calculation to determine whether lead levels exceed the action level. Massachusetts officials told us that, although the testing requirement has been in place since 1992, it has not received much attention until recently. The officials acknowledged that most water systems repeatedly used the same schools as sampling sites for the sake of convenience and said that the state has never summarized the results of the school testing. Given the renewed concerns about lead contamination following the detection of lead in the District of Columbia’s drinking water, Massachusetts now requires water systems to rotate testing among schools and child care facilities and plans to issue a summary report at the end of 2005.

In addition to these requirements, Florida’s Department of Environmental Protection reported to EPA that it had established a voluntary program. Specifically, the state designated child care facilities as Tier 1, high risk sites and gave water systems the option of using the facilities as lead sampling sites and including them in the calculation of the 90th percentile lead level. (According to a Florida official, to be included as a sampling site, the child care facility must meet other Tier 1 criteria, such as being served by a lead service line.) However, when we followed up with state officials, they said that they had no way of tracking the extent to which water systems were actually including child care facilities as sampling sites.

The scope of the targeted testing reported by 12 states varied widely, from a single school district in Pennsylvania to over 1,300 homes and child care facilities in Indiana. Several states indicated that they were focusing on potential high risk locations. EPA regional offices helped to initiate some limited testing in a few states, including Massachusetts, New Jersey, New York, and Pennsylvania; the testing generally focused on a few of the states’ largest school districts. The state-sponsored surveys to determine the status of testing by local agencies also varied, with some covering all schools within the state and others focusing on a smaller subset of schools. In Washington, the state recently set aside $750,000, including $400,000 from its drinking water state revolving fund, to partially reimburse school districts for the cost of monitoring for lead in elementary schools’ drinking water.

EPA officials attributed the relatively low level of state activity in recent years to the aftereffects of a 1996 lawsuit brought by the Association of Community Organizations for Reform Now against the state of Louisiana for not doing enough to implement the LCCA.
The case resulted in a federal circuit court decision declaring that part of the LCCA was unconstitutional. Specifically, the court ruled that the federal government did not have the authority to require states to establish a remedial action program as outlined in the LCCA. While Louisiana reported to EPA that the case "had the unintended effect of ending the lead program in schools for the state of Louisiana," none of the 10 states we contacted cited the ruling as a factor in limiting their efforts.

To obtain more information about testing and remedial actions in individual cities, we contacted five school districts—Boston, Detroit, Philadelphia, Seattle, and Syracuse. Table 11 shows the extent and results of testing within each district, and provides information on the various approaches school administrators have used to address the lead contamination.

Boston
Scope: Testing focused on kitchen facilities used to prepare food and was conducted between 2003 and 2004 at the district's central kitchen facility and 38 schools with on-site kitchen facilities.
Results: Lead levels in water from 17 kitchen facilities, including the central kitchen, exceeded 20 ppb.
Actions: Manual flushing for at least 1 minute each day in all kitchens and an automatic flushing program at the central kitchen and 22 school buildings with kitchen facilities.
Cost: Not available.

Detroit
Scope: The district tested 21 water fountains and other outlets in one middle school as of November 2002. (Testing was also conducted at one other middle school, but the number of outlets included was not available.)
Results: Lead levels in water from 16 drinking water outlets in one middle school exceeded 15 ppb.
Actions: For the short term, shutting off outlets with elevated lead levels, doing manual flushing, and providing bottled water. For the long term, installing a water treatment system, replacing lead piping and fixtures, and re-routing a service line serving the school.
Cost: An estimated $9,000 for bottled water and $5,865 for the water treatment system, plus $800 in annual maintenance costs.

Philadelphia
Scope: As a result of consent orders in 1999 and 2000, the school district was required to test all drinking water outlets at 299 schools and other buildings, or about 30,000 outlets in total.
Results: As of March 2004, the district had detected lead levels over 20 ppb in approximately 4,600, or roughly 15 percent, of the outlets tested.
Actions: For the short term, shutting off outlets with elevated lead levels and providing bottled water. For the long term, replacing or removing fixtures.
Cost: An estimated $6 million through February 2005.

Seattle
Scope: In 2004, the district tested all interior drinking water outlets considered suitable for use, about 2,400 outlets in total.
Results: Lead levels at 600 of the outlets, or 25 percent, exceeded 20 ppb.
Actions: For the short term, shutting off outlets with elevated lead levels and providing bottled water. For the long term, fixing or replacing fixtures, installing filters, and replacing piping for any outlet where lead levels exceeded 10 ppb.
Cost: An estimated $15 million upon completion in 2007.

Syracuse
Scope: The district tested specific interior drinking water outlets in 50 schools and other buildings, beginning in August 2003.
Results: 23 of the facilities had at least one drinking water outlet with lead levels over 20 ppb.
Actions: For the short term, shutting off outlets with elevated lead levels. For the long term, installing in-line carbon filters at each outlet with elevated lead levels. (Other measures such as pipe replacement and removal of fixtures are still under discussion.)
Cost: An estimated $100,000 through March 2005.

Boston officials told us that they focused on kitchen facilities in their most recent testing because the district had already installed bottled water at many drinking water outlets after earlier testing had disclosed elevated lead levels. Both Philadelphia and Seattle had also conducted some testing prior to the more recent efforts summarized in this table.

The cities we contacted differed in the testing protocols they used to test for lead in school drinking water. While three of the cities (Boston, Philadelphia, and Syracuse) followed EPA's guidance, using a 250 milliliter sample and a limit of 20 parts per billion for triggering follow-up action, Seattle took a more conservative approach. Using the same sample volume, the school board established 10 parts per billion as its standard for follow-up action. Detroit, on the other hand, used the same protocol that is required for public water systems—a 1 liter sample and 15 parts per billion as the limit.

Some of the remediation measures adopted by the cities we contacted were effective, including installing in-line filters, replacing pipes, and removing fixtures at outlets with test results indicating high lead levels. Other measures required more ongoing attention, and some inadvertently created new problems for officials to deal with. For example, a Seattle school official noted that the district decided against instituting a flushing program in its schools because it was too difficult to ensure that staff in individual schools would follow through with the flushing every day. In Boston, a school official told us that using bottled water posed a problem because staff had to make sure that replacement bottles were always available and because it created other issues with pests, vandalism, and spillage.

While a number of cities have detected elevated lead levels in school drinking water, and a few states are beginning to collect information on the status of local testing efforts, little information exists on the extent to which drinking water at schools and child care facilities nationwide may contain unacceptable levels of lead. No focal point exists at the federal or state level to collect and analyze test results or information on cost-effective remediation strategies. As a result, it is difficult to get a sense of the pervasiveness of lead contamination in the drinking water at schools and child care facilities, and to know whether a more concerted effort to address the issue—such as mandatory testing—is warranted. In addition, remediation measures such as providing bottled water, regularly flushing water lines, installing filters, and replacing fixtures and internal piping vary widely in cost and complexity, among other factors. State and local officials have expressed concern about not having sufficient information on the measures, their pros and cons, and circumstances in which particular measures might be more appropriate than others.

At the federal level, EPA's Office of Ground Water and Drinking Water sets drinking water standards and other requirements for public drinking water systems, but generally does not have any direct oversight responsibility for the quality of drinking water in schools or child care facilities. The U.S.
Department of Education (Education) is responsible for, among other things, providing guidance and financial assistance to state and local education agencies for elementary and secondary schools. Education’s Office of Safe and Drug Free Schools recently signed a memorandum of understanding with EPA, the Centers for Disease Control and Prevention, and various water industry associations with the goal of reducing children’s exposure to lead in drinking water at schools and child care facilities. However, according to an Education official, the department does not have legal authority to compel schools to test for lead in the drinking water. Officials in Washington state saw a need for closer coordination between EPA and Education. The officials believe that local education officials are more likely to respond to guidance on lead and other environmental health issues if Education were to be involved in developing it. At the state level, responsibility for the environmental health of schools and child care facilities is usually fragmented among multiple agencies. According to EPA, in most states, the same agency that administers the drinking water program—generally the state’s department of environmental protection or department of health—is also responsible for implementing the LCCA. However, we also learned from EPA that the state agencies responsible for administering education programs and licensing child care facilities are usually the ones with the regulatory or oversight authority over environmental conditions in schools and child care facilities. (As noted earlier, some states also have lead poisoning prevention programs to monitor blood lead levels in children and investigate the source of lead exposure when the levels are elevated.) According to some of the states we contacted, the level of coordination among state agencies needs to be improved and the lack of a centralized authority at the state level has complicated efforts to plan and implement a testing program for lead in water in some school districts. For example, in Pennsylvania, state drinking water officials said that several other agencies, including the Departments of Health, Education, and Public Welfare, have a role in overseeing schools and child care facilities—but it was unclear which agency would be best suited to manage a testing program if one were to be required. In contrast, Connecticut officials said that having both the drinking water program and the child care licensing program housed within the same department has been an advantage because it is easier for the programs to share information and coordinate their activities. We also contacted several school and child care associations to find out if they were involved in or aware of efforts to promote testing for lead in drinking water, collect and analyze the results of testing, or set standards for the environmental health of the facilities. According to a representative of the National Child Care Association, until recently the association had not been aware of any issues regarding lead in drinking water at child care facilities or involved in any effort to promote testing. The representative commented that one challenge to distributing information on lead in drinking water to child care facilities is the fragmented nature of the child care industry. While the National Head Start Association has been involved with lead poisoning prevention in general, the organization has not done anything specifically related to lead in drinking water. 
The Healthy Schools Network, Inc. promotes the development of state and national policies, regulations, and funding for environmentally safe and healthy schools. Although the network has published some fact sheets that address the potential health risks from lead exposure, lead in drinking water has not been a priority compared with other environmental issues. While none of these organizations were parties to EPA’s recent memorandum of understanding, they have been actively engaged in assisting EPA as the agency revises its guidance for schools and child care facilities, according to EPA officials. According to state and local officials, children may be exposed to a variety of environmental hazards at schools and child care facilities, including asbestos, lead in paint or dust, mold, and other substances that affect indoor air quality. The officials told us that dealing with such problems often takes priority over checking for lead in drinking water because, in the case of the other problems, more information is available on the nature and extent of the potential health risks involved. For example, many of the officials we interviewed said that the most significant source of lead exposure—and thus, their primary concern—was lead in paint. Officials from two states also mentioned that lead in jewelry, toys, or pottery is a more significant source of exposure than lead in drinking water. Washington state officials told us that child care facilities also have many competing priorities and cited food handling as one of their major concerns. At the local level, officials talked about dealing with multiple health and safety issues and the difficulty of prioritizing limited resources. For example, in Detroit, one official told us that dealing with asbestos takes priority over all other environmental concerns, including lead in drinking water. Another Detroit official commented that indoor air quality is another priority because “issues related to breathing are very important to educators.” In Philadelphia, a school official noted that a major source of lead in the school district is dust, a problem that requires continuing attention from the maintenance staff, which must set aside time to scrub the areas where dust collects. A Seattle official also mentioned the difficulty posed by competing needs for limited funds. He indicated that the competition is not only among environmental issues, such as mold and asbestos, but, on a broader level, between maintenance and basic classroom expenditures. Without additional resources—or more compelling evidence that lead in drinking water should be a higher priority—state and local officials, as well as representatives of industry groups, were reluctant to support calls for mandatory testing for lead in drinking water in schools and child care facilities. Many of the officials we interviewed said that more research is needed on several aspects of the lead issue. In addition to wanting more information on the extent to which lead contamination in schools and child care facilities is a problem, some officials also wanted more information on the circumstances in which particular remediation approaches are most effective. Other officials believe that more research is needed on the relationship between children’s exposure to lead in drinking water and their blood lead levels. 
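The district testing protocols described earlier differ in two parameters: the volume of the first-draw sample and the lead concentration that triggers follow-up action. The short Python sketch below compares hypothetical sample results against each district's threshold and includes a simplified version of the 90th percentile calculation that water systems use against the 15 ppb action level, mentioned earlier in connection with Massachusetts and Florida. The sample data and function names are illustrative assumptions, not data from the districts.

```python
# Illustrative comparison of the school lead-testing protocols described
# above. Protocol parameters come from the report; sample data are
# hypothetical.

# (sample volume in milliliters, follow-up threshold in parts per billion)
PROTOCOLS = {
    "Boston": (250, 20),        # EPA school guidance
    "Philadelphia": (250, 20),  # EPA school guidance
    "Syracuse": (250, 20),      # EPA school guidance
    "Seattle": (250, 10),       # school board's more conservative standard
    "Detroit": (1000, 15),      # protocol required for public water systems
}

def outlets_needing_action(samples_ppb, district):
    """Return the sample results that exceed the district's threshold."""
    _, threshold = PROTOCOLS[district]
    return [s for s in samples_ppb if s > threshold]

def ninetieth_percentile(samples_ppb):
    """Simplified 90th percentile of tap sample results, the statistic
    water systems compare against the 15 ppb action level. (The lead
    rule's exact ranking rules vary with the number of samples.)"""
    ordered = sorted(samples_ppb)
    return ordered[max(int(0.9 * len(ordered)) - 1, 0)]

# Hypothetical first-draw results from one school, in ppb.
results = [2, 4, 7, 11, 14, 16, 19, 22, 30, 45]
for district in ("Boston", "Detroit", "Seattle"):
    flagged = outlets_needing_action(results, district)
    print(f"{district}: {len(flagged)} of {len(results)} outlets flagged")
print(f"90th percentile: {ninetieth_percentile(results)} ppb")
```

The same ten hypothetical results trigger follow-up at three outlets under the 20 ppb guidance but at seven under Seattle's 10 ppb standard, which is one reason raw exceedance counts cannot be compared across districts that use different protocols.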
Ensuring that the lead rule adequately protects public health and is fully implemented and enforced should be a high priority for EPA and the states because the potential consequences of lead exposure, particularly for infants and young children, can be significant. However, EPA's hands are tied unless states report complete, accurate, and timely data on the results of required monitoring, the status of corrective actions, and the extent of violations. Without such information, EPA cannot provide effective oversight or target limited resources where they are most needed. Similarly, inconsistencies among the states' policies and practices for implementing the lead rule may lead to uneven levels of public health protection for consumers and thus need to be examined and corrected, as appropriate. Given the potential health effects associated with lead contamination, it is important to minimize any unnecessary exposure as a result of leaded materials in the water distribution system or household plumbing. Reevaluating existing standards for the devices used in or near residential plumbing systems would also enhance the effectiveness of the treatment provided by local water systems. In the case of schools and child care facilities, both the vulnerability of the population served by such facilities and the competition for limited resources make it essential to have better information on the nature and extent of lead-contaminated drinking water—and its significance relative to other environmental hazards.

We recommend that the Administrator, EPA, take a number of steps to further protect the American public from elevated lead levels in drinking water. Specifically, to improve EPA's ability to oversee implementation of the lead rule and assess compliance and enforcement activities, EPA should ensure that data on water systems' test results, corrective action milestones, and violations are current, accurate, and complete and analyze data on corrective actions and violations to assess the adequacy of EPA and state enforcement efforts.

To expand ongoing efforts to improve implementation and oversight of the lead rule, EPA should reassess existing regulations and guidance to ensure the following:

the sites water systems use for tap monitoring reflect areas of highest risk for lead corrosion;

the circumstances in which states approve water systems for reduced monitoring are appropriate and that systems resume standard monitoring following a major treatment change;

homeowners who participate in tap monitoring are informed of the test results; and

states review and approve major treatment changes, as defined by EPA, to assess their impact on corrosion control before the changes are implemented.

In addition, EPA should: collect and analyze data on the impact of lead service line replacement on lead levels and conduct other research, as appropriate, to assess the effectiveness of lead line replacement programs and whether additional regulations or guidance are warranted; collect information on (1) the nature and extent of modified sampling arrangements within combined distribution systems and (2) differences in the reporting practices and corrective actions authorized by the states, using this information to reassess applicable regulations and guidance; and evaluate existing standards for in-line and endpoint plumbing devices used in or near residential plumbing systems to determine if the standards are sufficiently protective to minimize potential lead contamination.
In order to update its guidance and testing protocols, EPA should collect and analyze the results of any testing that has been done to determine whether more needs to be done to protect users from elevated lead levels in drinking water at schools and child care facilities. In addition, to assist local agencies in making the most efficient use of their resources, EPA should assess the pros and cons of various remediation activities and make the information publicly available. We provided a draft of this report to EPA and the Consumer Product Safety Commission for review and comment. EPA generally agreed with our findings and recommendations. Regarding the completeness of information that EPA has to evaluate implementation of the lead rule, the agency said that it will work with the states to ensure that relevant information is incorporated into the national database and will use the information, in part, to assess the adequacy of enforcement efforts. In addition, EPA agreed that aspects of the regulation need improvement. EPA said that it will address some of these areas as part of its package of revisions to the lead rule that it plans to propose early in 2006, including homeowner notification of test results and criteria for reduced monitoring. EPA also said that it needs additional information before it can address other areas, such as lead service line replacement and plumbing standards, that may warrant regulatory changes. EPA did not comment on our recommendation to reevaluate existing regulations and guidance to ensure that tap monitoring sites reflect areas of highest risk for lead corrosion. Finally, EPA did not address our recommendations regarding lead contamination and remedial actions at schools and child care facilities. We believe that, given the particular vulnerability of children to the effects of lead, it is important for EPA to take full advantage of the results of any tests that have been done, as well as to identify those remedial activities that have proven to be most effective. EPA’s comments appear in appendix V. The Consumer Product Safety Commission generally agreed with our findings as they pertain to the Commission. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees; the Administrator, EPA; the Chairman, Consumer Product Safety Commission; and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. For information on how the lead rule is being implemented, we obtained information from the Environmental Protection Agency’s (EPA) Office of Ground Water and Drinking Water and Office of Enforcement and Compliance Assurance, eight EPA regional offices, and 10 states. 
We selected eight of the states—California, Illinois, Iowa, Massachusetts, Michigan, New York, Pennsylvania, and Washington—because they either had a relatively high number of water systems with test results that exceeded or fell just below the lead action level, or they added to the geographical diversity of our selections. We also included Connecticut and Florida in our review because they were identified by EPA as particularly active in addressing potential lead contamination in water supplies serving child care facilities. At the local level, we obtained information from eight water systems: the Chicago Water Department in Illinois, the Boston Water and Sewer Commission and Massachusetts Water Resources Authority in Massachusetts, the Detroit Water and Sewerage Department in Michigan, the Syracuse Water Department in New York, the Portland Bureau of Water Works in Oregon, the Philadelphia Water Department in Pennsylvania, and Seattle Public Utilities in Washington. Our criteria for selecting these systems included test results showing elevated lead levels, lead service line replacement activity, and/or the use of modified sampling arrangements for consecutive systems. We reviewed the Safe Drinking Water Act, the lead rule, EPA’s minor revisions to the lead rule, other pertinent regulations, and applicable guidance to states and water systems. To gain a national perspective on the data EPA uses for oversight of lead rule implementation, including the results of required testing, the status of corrective actions, and the extent of violations, we analyzed data from EPA’s Safe Drinking Water Information System through June 2005 for active community water systems. We assessed the reliability of the data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, (3) interviewing agency officials knowledgeable about the data, and (4) reviewing EPA’s own data verification audits and summaries of data reliability. We determined that the data on results and frequency of lead testing were sufficiently reliable to show compliance trends. However, we found that other data on corrective actions and violations were not sufficiently reliable to assess the status of efforts to implement and enforce the lead rule. For information on experiences in implementing the lead rule and the need for changes to the regulatory framework, we interviewed EPA, state, and local officials; analyzed states’ responses to an EPA information request regarding their policies and practices in implementing the rule; and reviewed other relevant studies and documents. We reviewed the results of EPA’s expert workshops on monitoring protocols, simultaneous compliance, lead service line replacement, and public education, and obtained information from several researchers and other drinking water experts. Among other things, we identified potential gaps in the regulatory framework, including oversight, regulations, and guidance, and obtained views on the modifications to the lead rule now being considered by EPA. To learn about the development and effectiveness of existing plumbing standards, we obtained and analyzed information from NSF International (NSF), the Copper Development Association, the Plumbing Manufacturers Institute, and relevant articles and studies. 
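The electronic testing of required data elements described earlier in this appendix can be pictured as a set of automated completeness and validity checks run against each monitoring record. The sketch below illustrates the idea; the record layout, field names, and values are hypothetical and do not reflect the actual Safe Drinking Water Information System schema.

```python
# Minimal sketch of electronic testing of required data elements.
# The record layout and field names are hypothetical, not the actual
# SDWIS schema.
from datetime import date

REQUIRED_FIELDS = ("system_id", "sample_period_end", "lead_90th_ppb")

def check_record(record):
    """Return a list of problems found in one monitoring record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    level = record.get("lead_90th_ppb")
    if isinstance(level, (int, float)) and not 0 <= level <= 10_000:
        problems.append("lead level outside plausible range")
    period_end = record.get("sample_period_end")
    if isinstance(period_end, date) and period_end > date(2005, 6, 30):
        problems.append("sample period ends after the data cutoff")
    return problems

# Two hypothetical records: one complete, one missing its test result.
records = [
    {"system_id": "A-0001", "sample_period_end": date(2005, 6, 30),
     "lead_90th_ppb": 9.2},
    {"system_id": "B-0002", "sample_period_end": date(2005, 6, 30),
     "lead_90th_ppb": None},
]
for rec in records:
    print(rec["system_id"], check_record(rec) or "passes all checks")
```

Checks of this kind flag gaps within records that were submitted, but they cannot detect records that were never reported at all, which is one reason completeness also has to be judged against the full inventory of community water systems.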
To assess the reliability of NSF's data on lead content and lead leaching of plumbing fittings and fixtures, we talked with foundation officials about data quality control procedures. We determined the data were sufficiently reliable for illustrative purposes. For information on safeguards against lead-contaminated drinking water at schools and child care facilities, we interviewed officials from the Consumer Product Safety Commission, EPA's Office of Ground Water and Drinking Water, the National Head Start Association, the National Child Care Association, and the Healthy Schools Network. We also obtained information from drinking water program offices and public health or education departments in the 10 states we contacted for the first objective as well as school districts in Boston, Chicago, Detroit, Philadelphia, Seattle, and Syracuse. We reviewed the Lead Contamination Control Act (LCCA) of 1988 and obtained information on the recall of lead-lined water coolers. For information on other actions taken in response to the LCCA, we interviewed EPA, state, and local officials; reviewed relevant studies; and analyzed information collected by EPA. We used the same information sources to determine (1) the extent of current testing and remediation activities for lead in school and child care facility drinking water, (2) the extent to which various entities have responsibility for overseeing or collecting data on such activities, and (3) the relative priorities among environmental hazards common to schools and child care facilities. We also analyzed states' responses to an EPA information request on state and local efforts to monitor and protect children from lead exposure and attended an EPA-sponsored expert workshop on lead in drinking water at schools and child care facilities. For more detailed information on experiences at the local level, we collected information from five school districts on the extent of testing for lead in school drinking water, the results, and the approaches used to address contamination. We performed our work between June 2004 and November 2005 in accordance with generally accepted government auditing standards.

Notes to appendix table (percent of total systems with violations): TT = treatment technique violations, including failure to install optimal corrosion control treatment, failure to meet water quality control parameters, failure to replace lead service lines, and failure to meet public education requirements, among other things. MR = monitoring and reporting violations, including the failure to conduct required testing and failure to report the results. We included the most commonly used enforcement actions in this table and excluded miscellaneous actions and activities unrelated to enforcement or the lead rule. EPA files a "complaint for penalty" when the terms of an administrative order are violated.

In addition to the individual named above, Ellen Crocker, Nancy Crothers, Sandra Edwards, Maureen Driscoll, Benjamin Howe, Julian Klazkin, Jean McSween, Chris Murray, and George Quinn, Jr. made key contributions to this report.
Elevated lead levels in the District of Columbia's tap water in 2003 prompted questions about how well consumers are protected nationwide. The Environmental Protection Agency (EPA), states, and local water systems share responsibility for providing safe drinking water. Lead typically enters tap water as a result of the corrosion of lead in the water lines or household plumbing. EPA's lead rule establishes testing and treatment requirements. This report discusses (1) EPA's data on the rule's implementation; (2) what implementation of the rule suggests about the need for changes to the regulatory framework; and (3) the extent to which drinking water at schools and child care facilities is tested for lead.

EPA's data suggest that the number of drinking water systems with elevated lead levels has dropped significantly since testing began in the early 1990s. However, EPA's database does not contain recent test results for over 30 percent of large and medium-sized community water systems and lacks data on the status of water systems' efforts to implement the lead rule for over 70 percent of all community systems, apparently because states have not met reporting requirements. In addition, EPA's data on water systems' violations of testing and treatment requirements are questionable because some states have reported few or no violations. As a result, EPA does not have sufficient data to gauge the rule's effectiveness.

Implementation experiences to date have revealed weaknesses in the regulatory framework for the lead rule. For example, most states do not require their water systems to notify homeowners who volunteer for periodic lead monitoring of their test results. In addition, corrosion control can be impaired by changes to other treatment processes, and controls that would help avoid such impacts may not be adequate. Finally, because testing indicates that some "lead-free" products leach high levels of lead into drinking water, existing standards for plumbing materials may not be sufficiently protective. According to EPA officials, the agency is considering some changes to the lead rule.

On the basis of the limited data available, it appears that few schools and child care facilities have tested their water for lead, either in response to the Lead Contamination Control Act of 1988 or as part of their current operating practices. In addition, no focal point exists at either the national or state level to collect and analyze test results. Thus, the pervasiveness of lead contamination in the drinking water at schools and child care facilities--and the need for more concerted action--is unclear.
NOAA has a process for updating its flyout charts, but this process is not established in policy. The process involves obtaining updated information on the health of operational satellites from internal specialists and program-based studies, such as satellite availability assessments; obtaining updated information on the launch schedules for new satellites; having relevant individuals and entities review the updated charts; and obtaining approval from the Assistant Administrator of the National Environmental Satellite, Data, and Information Service (NESDIS) to publish the new chart on its public-facing website. This process is triggered at least once a year in preparation for the budget process, or more often when important changes occur, such as the loss of use of a satellite or a budget decision that affects a launch date. Officials estimated that it can take several weeks to prepare, review, and obtain consensus to publish the new charts.

This process is partially documented in a draft policy from June 2011. NESDIS officials stated that the draft policy serves as documentation of the agency's process; however, the policy was never finalized and is currently out of date because several details have changed since 2011. For example, the outdated draft does not include the name of the office responsible for updating the flyout charts, or define the use of fuel-limited life to estimate the lifespan of operational satellites. According to the NESDIS Assistant Administrator, the agency recently appointed a new director with responsibility for updating and finalizing the policy. However, NESDIS officials have not yet established a schedule for releasing the updated policy. Without a revised and finalized policy in place to govern the flyout chart process, NOAA runs an increased risk that its practices will be inconsistent and unclear.

Using the process outlined in NOAA's draft policy, program officials have updated the geostationary and polar-orbiting satellite flyout charts three times since March 2014. Key changes included adding newly planned satellites; removing a satellite that reached the end of its life; and adjusting planned dates for when satellites would launch, begin operations, and reach the end of their lives. For example, in one set of changes between April 2015 and January 2016, NOAA (1) delayed the GOES-R launch date from March to October 2016, resulting in a corresponding move in its estimated end-of-life, and (2) added 6-month on-orbit checkout periods to the next three satellites in the series, called the GOES-S, T, and U satellites. Figure 1 shows the changes for NOAA's geostationary satellites between April 2015 and January 2016.

While NOAA regularly updates its flyout charts and most of the data on specific satellites were aligned with supporting program documents, the agency has not consistently ensured that its charts were supported by stringent analysis, accurate, clearly communicated, and fully documented. As specified by relevant guidance to agencies for facilitating congressional decision-making and enforcing government internal controls, agencies should ensure that the information presented to Congress is accurate, clear, and supported by documentation.
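Before turning to our detailed findings, the kind of cross-checking at issue can be made concrete with a short sketch. The Python code below models a flyout chart entry, flags disagreement with an underlying program schedule, and includes a toy survival model of the sort an availability assessment might produce. The satellite name, dates, field names, and the exponential model are all illustrative assumptions, not NOAA data or methods.

```python
# Illustrative cross-check of flyout chart dates against program data.
# All names, dates, and the survival model are hypothetical.
from dataclasses import dataclass
from datetime import date
import math

@dataclass
class FlyoutEntry:
    satellite: str
    launch: date
    end_of_life: date       # expected end of operations shown on the chart
    fuel_limited_end: date  # maximum life if all instruments keep working

def check_launch(entry: FlyoutEntry, program_launch: date):
    """Flag a chart whose launch date disagrees with the program schedule."""
    if entry.launch == program_launch:
        return None
    months = abs((entry.launch.year - program_launch.year) * 12
                 + entry.launch.month - program_launch.month)
    return f"{entry.satellite}: chart launch differs by about {months} month(s)"

def availability(years_on_orbit: float, mean_life_years: float) -> float:
    """Toy exponential survival curve: probability the satellite is still
    fully functional after the given time. Real availability assessments
    are far more detailed (for example, per-instrument reliability)."""
    return math.exp(-years_on_orbit / mean_life_years)

entry = FlyoutEntry("SAT-A", date(2017, 3, 1), date(2024, 3, 1),
                    date(2025, 9, 1))
print(check_launch(entry, program_launch=date(2017, 7, 1)))
print(f"P(fully functional after 7 years): {availability(7, 12):.0%}")
```

A mechanical check like check_launch would catch date discrepancies of the kind discussed below, and a survival curve makes visible the tension between an end-of-life date drawn on a chart and a declining probability that the satellite remains fully functional.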
Our review of changes to the flyout charts since March 2014 found that most of the updates to the flyout charts accurately reflected relevant information from satellite program offices at the time they were published, and were subject to internal review before they were finalized, as evidenced by summary packages, which can include e-mails, documents tracking the changes, and the official approval of the charts by the Assistant Administrator. However, in its efforts to provide updated flyout charts to Congress, the agency has not consistently ensured that the data were (1) supported by stringent analyses of the satellites' health and availability; (2) accurate and consistent with supporting program data; (3) clear in how a satellite's extended life is portrayed; and (4) fully documented. More specifically:

Stringent analysis: NOAA does not require regular satellite availability assessments for any of its environmental satellite programs. Satellite managing agencies often perform technical assessments of the health and future availability of operational satellites to aid in planning and budgeting. For example, the Air Force requires satellite programs to complete an independent assessment of satellite and constellation health each year as part of its budget preparations. While NOAA conducts an annual assessment for its JPSS program, it does not conduct such assessments for its GOES-R program. Without requiring regular availability assessments for all satellites, NOAA risks not having timely information on the probability of continued success of its operational satellites for program budgeting purposes.

Accuracy: NOAA's flyout chart updates are not always accurate and consistent with agency documentation, including program schedules for future satellites and polar availability assessments. Out of 26 instances where we compared flyout chart data to underlying program data for a particular satellite, we identified 2 instances where the flyout charts did not accurately depict the underlying program schedules. For example, the flyout charts showed launch dates for two satellites as 4 months earlier and 3 months later than program data. Both of these issues were later corrected when the next chart was updated 6 to 12 months later. However, they were inaccurate at the time they were provided to Congress. In addition, NOAA's updates were at times inconsistent with the polar satellite availability assessment data. For example, NOAA's January 2016 flyout chart depicts JPSS-1 lasting through March 2024, while a 2015 availability assessment shows only a 55 percent probability that the satellite will be fully functional in 2024. JPSS program officials explained that polar-orbiting availability assessments are used only to show degrading health over time, while the flyout charts portray expected satellite lifespans. However, we believe it is not accurate to show a satellite as functioning on the flyout chart when underlying analyses show that the satellite is unlikely to be fully functioning. Part of the reason for this lack of consistency is that NOAA does not have a policy in place that requires taking steps to ensure the accuracy of its charts. Until NOAA ensures its flyout charts correctly represent the best available knowledge on the health and availability of its satellites, the agency runs an increased risk that its charts will not be useful or trusted to inform the budget and appropriations processes and provide program updates.
Clarity: NOAA does not clearly and consistently depict how long a satellite might last once it is beyond its design life. For example, NOAA received a contractor study in 2005 showing that its geostationary satellites were likely to last a total of 10 years after launch, which was beyond the initial 7-year design life. Although the study was conducted in 2005, the agency did not update the satellites' expected lives on the flyout charts until 2015. Similarly, in its 2015 and 2016 charts, NOAA showed its expectation that the NOAA-18 and 19 satellites would last 1 more year by extending the expected operation of the satellites by 1 year, even though the polar availability assessments show that they will likely last longer. In addition, in 2015 and 2016, NOAA adjusted its flyout charts to show extended life on three GOES satellites and the Suomi National Polar-orbiting Partnership (S-NPP) satellite, using an extension labeled "fuel-limited life." The agency later explained that this term is intended to show the maximum possible life assuming all instruments and the spacecraft continue to function, and not the satellite's expected life. However, the agency did not clearly define this term on its charts, thereby allowing readers to assume that the agency expects the satellites to last through the end of the fuel-limited life period. Part of the reason that NOAA does not consistently describe how long a satellite is expected to last is that the agency does not have a policy in place requiring a standard approach or nomenclature. Until the agency establishes a consistent approach to describing a satellite's extended life, it is at risk that its charts will be misconstrued, including by those making budget and appropriations decisions.

Documentation: While standard internal controls and NOAA's draft policy call for documenting the reasons for changes to the flyout charts and the executive approval for those changes, NOAA does not consistently document the justification for its updates. For example, of the six geostationary and polar summary packages we received from NOAA, three included justification for at least one key change and three did not include key program documentation for the changes to the flyout charts. Furthermore, of the 27 key changes we noted on the flyout charts between March 2014 and January 2016, 9 were justified in NOAA documentation and 18 were not. Program officials explained that documentation supporting each change exists and is widely circulated and vetted; however, we were unable to find this documentation in the packages provided by NOAA. Part of the reason for the inconsistencies is that NOAA does not have a policy in place requiring the creation and approval of standard justification packages. Until the agency documents and maintains a standard justification and approval package for each update, it risks not having all of the information it needs to justify a change to its flyout charts.

While NOAA has a process in place for updating its flyout charts and it regularly updates them, the agency's process has multiple shortcomings and is not established in policy. Between March 2014 and January 2016, agency officials revised the flyout charts three times to add newly planned satellites; remove a satellite that ceased operations; and change the expected dates for launch, beginning operations, and end-of-life. In its efforts to update its flyout charts, NOAA provides regular updates that are mostly consistent with supporting documentation.
However, the agency does not require its satellite programs to conduct regular assessments of satellite availability, which could aid in determining how long its satellites will likely last. Moreover, the information in the flyout charts is not always consistent with supporting agency documentation; is not always consistent in how it presents a satellite's extended life; and is not always supported by a complete justification package. Part of the reason for these issues is that NOAA has not established a policy that includes these steps. Until NOAA addresses the shortfalls in its practices and updates its policy to help ensure the flyout charts are accurate, consistent, and well-documented, it runs an increased risk that its flyout charts will be misleading to Congress and may lead to less-than-optimal decisions.

Given the importance of providing accurate and clear information to facilitate congressional decision making and inform the public, we are making the following five recommendations to the Secretary of Commerce. Specifically, we recommend that the Secretary direct NOAA's Assistant Administrator for Satellite and Information Services to take the following actions for its geostationary and polar-orbiting satellite programs:

Require satellite programs to perform regular availability assessments and use these analyses to inform the flyout charts and support its budget requests.

Ensure that flyout chart updates are consistent with supporting data from the program and from satellite availability assessments.

Establish and implement a consistent approach to depicting satellites that are expected to last beyond their design lives.

For each flyout chart update, maintain a complete package of documentation on the reasons for any changes and executive approval of the changes.

Revise and finalize the draft policy governing how flyout charts are to be updated to address the shortfalls with analysis, accuracy, consistency, and documentation noted in the above recommendations.

We provided a draft of this report to the Department of Commerce for review and comment. We received NOAA's written comments from the Department of Commerce, which are reproduced in appendix II. NOAA concurred with all five of our recommendations and noted that it plans to implement a more consistent approach in updating its flyout charts. The agency added that our review provided valuable feedback concerning how we and Congress use the charts, and underscored the importance of ensuring that viewers understand that complex operational and acquisition decisions cannot be depicted in a single graphic. NOAA also provided technical comments, which we have incorporated into our report and the briefing slides in appendix I, as appropriate.

In response to our first recommendation to require satellite programs to perform regular availability assessments and use them to inform the flyout charts, NOAA concurred and noted that operations personnel perform regular health and status monitoring for satellites under their command and control, make predictions of fuel-limited life, and post status updates to the operational satellite web pages. NOAA also noted that upcoming JPSS and GOES-R satellites will transmit more health data, which will enable more complete availability assessments in the future. We acknowledge that performing health and status monitoring of satellites in orbit is important.
Further, conducting availability assessments for all satellites should help the agency understand the potential for instrument failure before the end of fuel-limited life and enable timely and accurate information for program and budget planning. In response to our second recommendation to ensure that flyout chart updates are consistent with supporting data, NOAA concurred while acknowledging the risk of a reader reaching inaccurate conclusions from its flyout charts. NOAA explained that its flyout charts are not meant to be a replacement for more detailed charts and documentation, which are made available to Congress. We believe that this risk would be reduced if the charts were checked to ensure they accurately reflect the underlying data. NOAA also concurred with our third recommendation to establish and implement a consistent approach to depicting satellites expected to last beyond their design lives. Prior to providing this report, we obtained comments from NOAA on the recommendations in our briefing provided to subcommittee staff on May 31, 2016. While NOAA initially partially concurred with this recommendation in the briefing, NOAA subsequently concurred and acknowledged the need to establish a consistent approach across satellites. The agency concurred with our fourth recommendation to maintain a complete package of documentation on the reasons for any changes to the flyout charts and the approval of those changes. NOAA stated that it updated its draft policy governing its flyout chart process to include a requirement to maintain documentation for flyout chart changes. The agency stated that while it has not maintained more detailed information in the past, it will now do so. NOAA also concurred with the fifth recommendation to finalize its draft policy governing how flyout charts are to be updated. The agency noted that the new policy is in internal coordination and should be formally approved very soon. We believe that addressing our recommendations to improve processes and policies will help ensure that the flyout charts NOAA uses to inform Congress and other stakeholders are supported by strong analyses, accurately reported, and clearly communicated.

We are sending copies of this report to interested congressional committees, the Secretary of Commerce, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In addition to the contact named above, Colleen Phillips (Assistant Director), Torrey Hardee (Analyst in Charge), Chris Businsky, Shaun Byrnes, Rebecca Eyler, Franklin Jackson, and Umesh Thakkar made key contributions to this report.
NOAA manages two weather satellite programs that provide critical environmental data used in weather forecasts and warnings: a geostationary and a polar-orbiting satellite program. The agency is acquiring the next generation of satellites to replace existing satellites that are approaching the end of their expected lives. NOAA regularly publishes timelines, called flyout charts, depicting its expectations for how long its operational satellites will last and when it plans to launch new satellites. These charts are used to support budget requests, provide status reports, facilitate appropriations discussions with congressional committees, and inform the public. GAO was asked to review NOAA's recent flyout charts. GAO's objectives were to (1) describe NOAA's process for updating its satellite flyout charts; (2) identify changes NOAA has made to its flyout charts in recent years and the justification for those changes; and (3) assess NOAA's recent efforts to update its flyout charts. To do so, GAO reviewed agency policies and procedures for updating its charts; analyzed changes made to the charts since March 2014; and compared NOAA's approach to Air Force practices, internal control standards, and program documentation.

The National Oceanic and Atmospheric Administration's (NOAA) process for updating its flyout charts involves obtaining updated information on the health of operational satellites and schedules for new satellites, having relevant individuals review the updated charts, and obtaining approval from a senior NOAA official to publish the charts. This process is partially documented in a 2011 draft policy. NOAA updated the geostationary and polar-orbiting flyout charts three times between March 2014 and January 2016. Key changes included adding newly planned satellites; removing a satellite that reached the end of its life; and adjusting planned dates for when satellites would launch, begin operations, and reach the end of their lives. For example, in one set of changes between April 2015 and January 2016, NOAA extended the life of older polar-orbiting satellites by 1 year, added a new fuel-limited life period to its most recently launched satellite (called S-NPP), and changed the launch date and the end-of-life date for another satellite (called JPSS-2).

While NOAA has regularly updated its flyout charts and most of the data on specific satellites were aligned with supporting program documents, it has not consistently ensured that the data were supported by stringent analysis, consistent with underlying program data, clearly communicated, and fully documented. For example, unlike the Air Force, NOAA does not require regular availability assessments for its satellite programs. Also, NOAA's flyout chart updates are not always accurate and consistent with program schedules and polar availability assessments. Further, NOAA does not fully document its changes to the charts. For example, GAO's assessment of 27 key changes between March 2014 and January 2016 showed that 9 were justified in NOAA documentation and 18 were not. Part of the reason for these issues is that NOAA has not established a clear policy to standardize its approach. Until NOAA addresses the shortfalls in its practices and revises and finalizes its draft policy to help ensure the charts are accurate, consistent, and well documented, it runs an increased risk that its flyout charts will be misleading to Congress and may lead to less-than-optimal decisions.
GAO recommends that NOAA take steps to improve the accuracy and consistency of its flyout charts, and to revise and finalize the draft policy for updating its flyout charts to address the shortfalls GAO noted. NOAA agreed with GAO's recommendations and identified plans to implement them.
The National Defense Authorization Act for Fiscal Year 2002 extended the authority of the Defense Base Closure and Realignment Act of 1990, with some modifications, to authorize an additional BRAC round in 2005. The legislation also required that DOD provide Congress in 2004, as part of its budget justification documents, a 20-year force structure plan, a worldwide infrastructure inventory, a description of the infrastructure necessary to support the force structure plan, a discussion of categories of excess infrastructure and infrastructure capacity, an economic analysis of the effect of BRAC on reducing excess infrastructure, and a certification that there is a need for BRAC in 2005 and that annual net savings will be realized by each military department not later than fiscal year 2011. The legislation also stipulated that if the certification is provided in DOD’s submission to Congress, GAO is to prepare an evaluation of the force structure plan, the infrastructure inventory, the final selection criteria, and the need for an additional BRAC round, and to report to Congress not later than 60 days after the force structure plan and the infrastructure inventory are submitted to Congress. The 2002 legislation also required the Secretary of Defense to publish in the Federal Register the selection criteria proposed for use in the BRAC 2005 round and to provide an opportunity for public comment. The proposed selection criteria were published on December 23, 2003, with a public comment period ending January 30, 2004. The final criteria were published on February 12, 2004. Closing unneeded defense facilities has historically been difficult because of public concern about the economic effects of closures on communities and the perceived lack of impartiality in the decision-making process. Legislative restrictions effectively precluded bases from being closed between 1977 and 1988. However, legislation enacted in 1988 supported the tasking of a special commission chartered by the Secretary of Defense to identify bases for realignment and closure and provided relief from certain statutory provisions that had hindered DOD’s previous efforts. With this legislation, a base realignment and closure round was initiated in 1988. Congress later passed the Defense Base Closure and Realignment Act of 1990, which created an independent commission and authorized three BRAC rounds in 1991, 1993, and 1995. The four commissions generated 499 recommendations—97 major closures, and hundreds of smaller base realignments, closures, and other actions. However, DOD recognized at the time it was completing its recommendations for the 1995 BRAC round that excess infrastructure would remain after that round and that additional closures and realignments would be needed in the future. Subsequent Defense Science Board and Quadrennial Defense Review studies, and others, echoed the need for one or more future additional BRAC rounds, but congressional action to authorize a future BRAC round did not occur for several years, in part because of concerns over how some decisions were made in the 1995 BRAC round. Ultimately, the National Defense Authorization Act for Fiscal Year 2002 extended the authority of the Defense Base Closure and Realignment Act of 1990, authorizing another round of base realignments and closures in 2005. 
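The certification requirement noted above, that annual net savings be realized by each military department not later than fiscal year 2011, implies a simple payback-style test: up-front closure costs must be overtaken by recurring savings within a fixed window. The sketch below illustrates the arithmetic with hypothetical numbers; it is a deliberate simplification, not DOD's COBRA cost model, and the statutory test is framed per military department rather than per closure.

```python
# Toy payback calculation of the sort implied by the fiscal year 2011
# net savings certification. Hypothetical numbers; not DOD's COBRA model.

def first_net_savings_year(start_year, one_time_costs_by_year,
                           annual_recurring_savings, horizon=20):
    """Return the first fiscal year in which cumulative recurring savings
    exceed cumulative one-time closure costs, or None within the horizon."""
    cumulative = 0.0
    for offset in range(horizon):
        year = start_year + offset
        cumulative += annual_recurring_savings
        cumulative -= one_time_costs_by_year.get(year, 0.0)
        if cumulative > 0:
            return year
    return None

# Hypothetical closure: $180 million in up-front costs spread over three
# years, $45 million in recurring annual savings once actions begin.
costs = {2006: 90.0, 2007: 60.0, 2008: 30.0}
print(first_net_savings_year(2006, costs, annual_recurring_savings=45.0))
# Prints 2010, i.e., before the fiscal year 2011 certification deadline.
```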
Some key requirements mandated by the 1990 act, or procedures adopted by DOD in implementing it, to ensure the fairness and objectivity of the base closing process include the following:

All installations must be compared equally against selection criteria and a current force structure plan developed by the Secretary of Defense.

Decisions to close military installations with authorization for at least 300 civilian personnel must be made under the BRAC process. Decisions to realign military installations authorized for at least 300 civilian personnel that involve a reduction of more than 1,000, or 50 percent or more of the civilian personnel authorized, also must undergo the BRAC process.

Selection criteria for identifying candidates for closure and realignment must be made available for public comment before being finalized.

All components must use specific models for assessing (1) the cost and savings associated with BRAC actions and (2) the potential economic impact on communities affected by those actions.

Information used in the BRAC decision-making process must be certified—that is, certified as accurate and complete to the best of the originator's knowledge and belief. This requirement was designed to overcome concerns about the consistency and reliability of data used in the process.

An independent commission is required to review DOD's proposed closures and realignments and to finalize a list of proposed closures and realignments to be presented to the President and, subject to the President's approval, to Congress. The BRAC Commission is required to hold public hearings.

The BRAC process imposes specific time frames for completing specific portions of the process (see app. I for time frames related to the 2005 BRAC round).

The President and Congress are required to accept or reject the Commission's recommendations in their entirety.

In addition to GAO's role in monitoring the BRAC process, service audit agencies and DOD Inspector General (IG) personnel are extensively involved in auditing the process to better ensure the accuracy of data used in decision-making and enhance the overall integrity of the process. Our work in examining lessons learned from prior BRAC rounds found general agreement that the prior legislation and the framework it outlined served the process well and should provide a useful framework for a future round. That is not to say that the previous process was perfect or entirely devoid of concerns over the role of politics in the process. As we have previously noted, we recognize that no public policy process, especially none as open as BRAC, can be completely removed from the U.S. political system. However, the elements of the process noted above provide several checks and balances to keep political influences to a minimum. That said, the success of these provisions requires that all participants in the process adhere to the rules and procedures.

GAO has played a long-standing role in the BRAC process. As requested by congressional committees (1988 BRAC round) or mandated by law since 1990, we have served as an independent and objective observer of the BRAC process and have assessed and reported on DOD's decision-making processes leading up to proposed realignment and closure recommendations in each of the four prior rounds.
To make informed and timely assessments, we have consistently operated in a real-time setting since the 1991 BRAC round and have had access to significant portions of the process as it has evolved, thus affording the department an opportunity to address any concerns we raised on a timely basis. By mandate, our role in the BRAC 2005 round remains the same, and we have been observing the process since DOD began work on the 2005 round. Finally, I also want to recognize the important role played by the DOD Inspector General and the military services’ audit agencies in the BRAC process. GAO has been called upon to examine various issues associated with prior BRAC rounds, including the one held in 1988. The 1990 BRAC legislation, which governed the 1991, 1993, and 1995 rounds, specifically required that we provide the BRAC Commission and Congress a detailed analysis of the Secretary of Defense’s recommendations and selection process. Legislation authorizing the 2005 BRAC round retained the requirement for a GAO review of the Secretary’s recommendations and selection process, with a report to the congressional defense committees required no later than July 1, 2005, 45 days after the latest date by which the Secretary must transmit to the congressional defense committees and the BRAC Commission his recommendations for closures or realignments. The tight time frame under which we have to report our findings on the department’s BRAC selection process and recommendations necessitates that we have access to the BRAC decision-making processes as they are unfolding within DOD. During the past rounds, DOD and its components have granted us varying degrees of access to their processes a year or more in advance of the Secretary’s public release of his recommendations for closures and realignments. This has greatly facilitated our ability to monitor the process as it was unfolding and has provided us with opportunities to address issues and potential problem areas during the process. Furthermore, it has aided our ability to complete some detailed analysis of individual recommendations in the time available after the Secretary’s proposed closures and realignments were finalized and made public. We have been observing the 2005 BRAC process since DOD’s initial work began on the 2005 round. From our vantage point, we are looking to see to what extent DOD follows a clear, transparent, consistently applied process, one where we can see a logical flow between DOD’s analysis and its decision-making. Although we do not attend or participate in deliberative meetings involving BRAC, we are permitted access to the minutes of these meetings and to officials involved in the process. I also want to acknowledge the key roles played by the DOD Inspector General and service audit agencies to help ensure the accuracy of data used in BRAC decision-making. These agencies play a front-line role in checking the accuracy of data obtained in BRAC data calls, as well as verifying data entries and output pertaining to the cost and analytical models used as part of the BRAC process. They also identify and refer any errors to defense components on a real-time basis, to facilitate corrective actions. We coordinate regularly with these other audit agencies, and in selected instances we observe the work of these audit agencies in checking the data used as part of BRAC decision-making. Another part of our role involves assessing and reporting on the status of prior BRAC recommendations. 
These reports provide insights into the long and tedious process of transferring unneeded base property to other federal recipients and communities for future reuse. While the actual closures and realignments of military bases in the prior rounds were completed by 2001, the processes of environmental cleanup and property transfer continue today and will most likely continue for several more years. As of September 30, 2003, DOD data show that the department has transferred over 280,000 acres of unneeded property to other users but has about 220,000 acres that have yet to be transferred. While the progress of property transfer varies among the affected bases and is dependent upon a number of factors, our work has shown that environmental cleanup has long been a key factor in slowing the transfer process. We are currently in the process of updating our prior work on the implementation actions associated with the prior BRAC rounds. Our key BRAC reports, which can be accessed at www.gao.gov, are listed in appendix II of this statement.

The legislation authorizing a BRAC round in 2005 also requires that DOD provide information on a number of BRAC-related issues in 2004, and that GAO report to Congress not later than 60 days after the department submits this information to Congress. Because DOD has published its selection criteria for the 2005 round, I can provide you with some observations in that area. However, because the department has not yet submitted its 2004 report, I can make only preliminary and general observations about the other issues, such as excess capacity, certification of annual net savings by 2011, and economic impact; we expect to complete our full assessment of those issues within 60 days of receiving DOD's report.

The department's final selection criteria essentially follow a framework similar to that employed in prior BRAC rounds, with specificity added in selected areas in response to requirements contained in the 2002 legislation. The 2002 legislation required that DOD give priority to military value and consider (1) the impact on joint warfighting, training, and readiness; (2) the availability and condition of training areas suitable for maneuver by ground, naval, or air forces throughout diverse climates and terrains, and staging areas for use by the armed forces in homeland defense missions; and (3) the ability to accommodate contingency, mobilization, and future force requirements. The legislation also required DOD to give consideration to other factors, many of which replicated criteria used in prior BRAC rounds. Further, the legislation required DOD to consider cost impacts to other federal entities as well as to DOD in its BRAC decision-making. Additionally, the National Defense Authorization Act for Fiscal Year 2004 required DOD to consider surge requirements in the 2005 BRAC process. Table 1 compares the 1995 BRAC criteria with those adopted for 2005, with changes highlighted in bold. Our analysis of lessons learned from prior BRAC rounds affirmed the soundness of these basic criteria and generally endorsed their retention for the future, while recognizing the potential for improving the process by which the criteria are used in decision-making. Adoption of these criteria adds an element of consistency and continuity with the approach of the past three BRAC rounds. The full analytical sufficiency of the criteria will best be assessed through their application, as DOD completes its data collection and analysis.
Notwithstanding our endorsement of the criteria framework, on January 27, 2004, we sent a letter to DOD that identified two areas where we believed the draft selection criteria needed greater clarification in order to fully address the special considerations called for in the 2002 legislation. Specifically, we noted that the criterion related to costs and savings did not indicate the department's intention to consider potential costs to other DOD activities or federal agencies that may be affected by a proposed closure or realignment recommendation. Also, we noted that it was not clear to what extent the impact of costs related to potential environmental restoration, waste management, and environmental compliance activities would be included in cost and savings analyses of individual BRAC recommendations. We suggested that DOD could address our concerns by incorporating these considerations either directly, in its final criteria, or through later explanatory guidance. DOD decided to address our concerns through clarifying guidance.

DOD faced a difficult task in responding to a congressional mandate that it report on excess capacity without compromising the integrity of the 2005 BRAC process. In this regard, DOD opted to use a methodology that would give some indication of excess capacity but would not be directly linked to the capacity analysis being performed as part of the 2005 BRAC process. DOD officials indicated that they would build on the approach they used in their 1998 report to estimate excess base capacity and address other BRAC issues. In November 1998, we reported that DOD's analysis gave only a rough indication of excess base capacity because it had a number of limitations. In addition, the methodology did not consider any additional excess capacity that might be identified by looking at facilities or functions on a cross-service basis, a priority for the 2005 round. To estimate excess capacity in 1998, the military services and the Defense Logistics Agency (DLA) compared capacity for a sample of bases in 1989 with projected capacity for a sample of bases in 2003, after all scheduled BRAC actions were completed. The services and DLA categorized the bases according to their primary missions, and they defined indicators of capacity, or metrics, for each category. Varied metrics were used to depict capacity; examples included maneuver acres per brigade for Army training bases, square feet of parking apron space for active and reserve Air Force bases, and capacity direct labor hours as compared with budgeted or programmed direct labor hours for Navy aviation depots. DOD officials are building on this methodology to compare 1989 data with more recent data in order to estimate current excess capacity as a means of meeting their 2004 reporting requirement. That methodology, while providing an indication of excess capacity, has a number of limitations that make it difficult to be precise when trying to project a total amount of excess capacity across DOD. In addition to the factors already noted, GAO and the Congressional Budget Office previously reported that, by using 1989 as a baseline, DOD did not take into account the excess capacity that existed in that year, which was prior to the base closures of the four prior rounds. As a result, the percentage of excess capacity reported may be understated or overstated for the functional areas considered.
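To make the baseline approach concrete, the sketch below (in Python, with entirely hypothetical figures; the function names and numbers are ours, not DOD's) applies a 1989 capacity-to-force ratio to a current force to derive an implied requirement and counts capacity above that requirement as excess. It also shows why excess already present in 1989 is invisible to the method.

```python
# A minimal sketch (hypothetical figures) of the 1989-baseline approach
# described above: the 1989 capacity-to-force ratio is treated as if it
# represented zero excess, and any current capacity above what that ratio
# implies for the current force is counted as excess.

def implied_requirement(baseline_capacity, baseline_force, current_force):
    """Capacity implied for the current force at the 1989 capacity/force ratio."""
    return baseline_capacity / baseline_force * current_force

def excess_capacity_pct(current_capacity, requirement):
    """Excess capacity as a percentage of current capacity."""
    return 100.0 * (current_capacity - requirement) / current_capacity

# Hypothetical Army training-base example: maneuver acres per brigade.
acres_1989, brigades_1989 = 1_000_000, 20   # 50,000 acres per brigade in 1989
acres_now, brigades_now = 900_000, 14       # smaller force, less divestiture

required = implied_requirement(acres_1989, brigades_1989, brigades_now)  # 700,000
print(f"excess: {excess_capacity_pct(acres_now, required):.1f}%")        # ~22.2%

# Because any excess already present in 1989 is built into the baseline
# ratio, the true excess for the category may be understated.
```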
Furthermore, the Congressional Budget Office reported that the approach could understate the capacity required if some types of base support were truly a fixed cost, regardless of the size of the force. Another limitation of DOD's methodology is that each installation could be counted in only one category even though it might have multiple functions. For example, an Air Force base that has a depot and a fighter wing could be categorized in only one functional area. While the prior BRAC rounds focused solely on reducing excess capacity, DOD officials have stated that this is not the sole focus of the 2005 BRAC round. DOD officials have noted that the 2005 round aims to further transform the military by rationalizing base infrastructure to the force structure, enhancing joint capabilities, and seeking crosscutting solutions and alternatives for common business-oriented support functions, as well as eliminating excess capacity. A complete assessment of capacity and opportunities to reduce it must await the completion of DOD's ongoing official analyses under BRAC 2005. Nevertheless, we believe sufficient indicators of excess capacity exist, as well as opportunities to otherwise achieve greater efficiencies in operations, to justify proceeding with the upcoming round.

DOD financial data indicate that the department has generated net savings of about $17 billion through fiscal year 2001—the final year of the prior BRAC rounds—and is accruing additional, annually recurring savings of about $7 billion thereafter. We have consistently affirmed our belief that the prior BRAC rounds have generated substantial net savings—primarily in the form of future cost avoidances—for the department. While these amounts are substantial, we have, at the same time, viewed these savings estimates as imprecise for a variety of reasons, such as weaknesses in DOD's financial management systems that limit its ability to fully account for the costs of its operations; the fact that DOD's accounting systems, like other accounting systems, are oriented to tracking expenses and disbursements, not savings; the exclusion of BRAC-related costs incurred by other agencies; and inadequate periodic updating of the savings estimates that are developed. DOD, in its 1998 report to Congress, indicated that it had plans to improve its savings estimates for the implementation of future BRAC rounds. We have also recommended that DOD improve its savings estimates for future BRAC rounds, such as the 2005 round. DOD has not yet acted on our recommendation, but DOD officials told us that they intend to implement a system to better track savings for the upcoming round.

The fiscal year 2002 legislation requires DOD to certify for the upcoming 2005 BRAC round that it will achieve "annual net savings" for each military department by 2011. Using precise terminology is critical in statements regarding BRAC savings, because terminology can make a big difference in specifying when savings will actually occur and the nature of those savings. According to DOD officials, "annual net savings" essentially refers to the estimated savings generated from BRAC in a given year that exceed the costs incurred to implement BRAC decisions in that same year. Another way of looking at net savings is to consider the point at which cumulative savings exceed the cumulative costs of implementing BRAC decisions over a period of years.
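The distinction between the two measures can be illustrated with a toy calculation. The sketch below uses made-up cost and savings streams, not DOD data, to show that annual net savings can appear years before cumulative net savings do.

```python
# A worked toy example (hypothetical dollar streams) of the two savings
# measures defined above: "annual net savings" occur in the first year
# savings exceed that year's implementation costs; "cumulative net
# savings" occur only once total savings overtake total costs.

costs   = [900, 700, 500, 400, 300, 200]   # implementation costs, FY1..FY6
savings = [100, 300, 500, 600, 800, 900]   # estimated savings, FY1..FY6

cum_cost = cum_savings = 0
for year, (c, s) in enumerate(zip(costs, savings), start=1):
    cum_cost, cum_savings = cum_cost + c, cum_savings + s
    annual_net = s - c
    cumulative_net = cum_savings - cum_cost
    print(f"FY{year}: annual net {annual_net:+5d}, cumulative net {cumulative_net:+5d}")

# In this stream, annual net savings first appear in FY4 (+200), while
# cumulative net savings do not appear until FY6 (+200) -- the same lag
# pattern the 1995 round showed.
```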
Experience has shown that the department incurs significant upfront investment costs in the early years of a BRAC round, and it takes several years to fully offset those cumulative costs and begin to realize cumulative net savings. The difference in terminology is important to understand because it has a direct bearing on the magnitude and assessment of the savings at any given time. For example, as shown in table 2, annual net savings reported by the department as a whole in the 1995 BRAC round did not begin until fiscal year 2000, the fifth year of implementation; in each of the prior years, the costs had exceeded the estimated savings. On the other hand, there were still no cumulative net savings as of fiscal year 2001, the sixth and final year of BRAC implementation; based on DOD's data, cumulative net savings did not occur until fiscal year 2002. DOD financial data suggest that, assuming conditions similar to those of the 1993 and 1995 rounds, annual net savings for each of the military departments for the 2005 round could be achieved by 2011—that is, by 2011 savings could exceed closure-related costs for that year. While we believe that the potential exists for significant savings to result from the 2005 BRAC round, we are not in a position to say conclusively at this point to what extent DOD will realize annual net savings by 2011. In addition to the imprecision of DOD's data, there simply are too many unknowns at this time, such as the specific timing of individual closure or realignment actions that affect savings estimates and the implementation costs that may be required. The savings to be achieved depend on the circumstances of the various recommended closures and realignments as put forth by the 2005 BRAC Commission and on the implementation of those recommendations. Further, DOD has gone on record stating that the upcoming round is more than just an exercise in trimming its excess infrastructure; DOD is also seeking to maximize joint utilization and further its transformation efforts. To what extent these goals may affect savings is also unknown at this point. Finally, to what extent forces that are currently based overseas may be redeployed to the United States, and what effect that redeployment may have on BRAC and subsequent savings, remain unknown as well.

Notwithstanding the issues we raise that could affect savings, and the point at which savings would exceed the costs associated with implementing recommendations from a 2005 BRAC round, we continue to believe that it is vitally important for DOD to improve its mechanisms for tracking and updating its savings estimates. DOD, in its 1998 report to Congress on BRAC issues, cited proposed efforts that, if adopted, could provide for greater accuracy in the estimates. Specifically, the department proposed to develop a questionnaire that each base affected by future BRAC rounds would complete annually during the 6-year implementation period. Those bases that are closing, realigning, or receiving forces because of BRAC would complete the questionnaire. DOD would request information on costs, personnel reductions, and changes in operating and military construction costs in order to provide greater insight into the savings created by each BRAC action.
DOD suggested that development of such a questionnaire would be a cooperative effort involving the Office of the Secretary of Defense, the military departments, the defense agencies, the Office of the DOD Inspector General, and the service audit agencies. This proposal recognizes that better documentation and updating of savings will require special efforts parallel to the normal budget process. We strongly endorse such action. If the department does not take steps to improve its estimation of savings in the future, previously existing questions about the reliability, accuracy, and completeness of DOD's savings estimates will likely continue. We intend to examine DOD's progress in instituting its proposed improvements during our review of the 2005 BRAC process.

While the short-term impact of a base closure or realignment on a community can be very traumatic, several factors, such as the strength of the national and regional economies, play a role in determining the long-term economic impact on that community. Our work has shown that recovery remains a challenge for some communities, while other communities surrounding a base closure are faring better. Most are continuing to recover from the initial economic impact of a closure, even allowing for some negative effects from the economic downturn of recent years. Our analysis of selected economic indicators has shown over time that the economies of BRAC-affected communities compare favorably with the overall U.S. economy. We used unemployment rates and real per capita income growth rates as broad indicators of the economic health of those communities where base closures occurred during the prior BRAC rounds. We identified 62 communities surrounding base realignments and closures from all four BRAC rounds for which government and contractor civilian job losses were estimated to be 300 or more. We previously reported that, as of September 2001, 44 (71 percent) of the 62 communities surrounding these major base closures had average unemployment rates lower than the then-current 9-month average national rate of 4.58 percent. We are currently updating our prior assessments of economic recovery, attempting to assess the impact of the recent economic downturn on the affected BRAC communities we had previously surveyed. What we are seeing is that, in keeping with the economic downturn of recent years, the average unemployment rate increased between 2001 and 2003 for 60 of the 62 communities. However, 2003 unemployment data indicate that the rates for many of these BRAC communities continue to compare favorably with the U.S. rate of 6.1 percent: 43 (69 percent) of the communities had unemployment rates at or below the U.S. rate. As with unemployment rates, we had also previously reported that annual real per capita income growth rates for BRAC-affected communities compared favorably with national averages. From 1996 through 1999, 33 of the 62 areas (53 percent) had an estimated average real per capita income growth rate at or above the national average of 3.03 percent at that time. Data included in our 2002 report were the latest available at that time, recognizing time lags in data availability. Our recent analysis has also noted that changes in the average per capita income growth rate of affected communities over time compared favorably with, and were similar to, corresponding changes at the national level.
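The unemployment comparison reduces to a simple tally, sketched below with hypothetical community rates; applied to the full set of 62 communities, the same tally yields the 43-of-62 (69 percent) figure cited above.

```python
# A small sketch of the comparison described above: count how many
# BRAC-affected communities have average unemployment rates at or below
# the national rate. Community figures here are made up for illustration.

national_rate = 6.1  # 2003 U.S. unemployment rate, percent

community_rates = [4.2, 5.8, 6.1, 7.4, 5.0, 6.9]  # hypothetical sample
at_or_below = sum(1 for r in community_rates if r <= national_rate)

share = 100 * at_or_below / len(community_rates)
print(f"{at_or_below} of {len(community_rates)} communities "
      f"({share:.0f}%) at or below the U.S. rate")
```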
Our more recent analysis indicates that 30 of the 62 areas examined (48.4 percent) had average income growth rates higher than the average U.S. rate of 2.2 percent between 1999 and 2001, which represents a drop from the rate during the previous time period. We have previously reported on our discussions with various community leaders who felt the effects of base closures. These discussions identified a number of factors affecting economic recovery from base closures, including robustness of the national economy, diversity of the local economy, regional economic trends, natural and labor resources, leadership and teamwork, public confidence, government assistance, and reuse of base property. If history is any indicator, these factors are likely to be equally applicable in dealing with the effects of closures and realignments under BRAC 2005.

This concludes my statement. I would be pleased to answer any questions you or other Members of the Subcommittee may have at this time. For further information regarding this statement, please contact Barry W. Holman at (202) 512-8412. Individuals making key contributions to this statement include Paul Gvoth, Michael Kennedy, Warren Lowman, Tom Mahalek, David Mayfield, James Reifsnyder, Cheryl Weissman, and Dale Wineholt.

U.S. General Accounting Office, Military Base Closures: Better Planning Needed for Future Reserve Enclaves (GAO-03-723, June 27, 2003).
U.S. General Accounting Office, Military Base Closures: Progress in Completing Actions from Prior Realignments and Closures (GAO-02-433, Apr. 5, 2002).
U.S. General Accounting Office, Military Base Closures: Overview of Economic Recovery, Property Transfer, and Environmental Cleanup (GAO-01-1054T, Aug. 28, 2001).
U.S. General Accounting Office, Military Base Closures: DOD's Updated Net Savings Estimate Remains Substantial (GAO-01-971, July 31, 2001).
U.S. General Accounting Office, Military Bases: Status of Prior Base Realignment and Closure Rounds (GAO/NSIAD-99-36, Dec. 11, 1998).
U.S. General Accounting Office, Military Bases: Review of DOD's 1998 Report on Base Realignment and Closure (GAO/NSIAD-99-17, Nov. 13, 1998).
U.S. General Accounting Office, Military Bases: Lessons Learned From Prior Base Closure Rounds (GAO/NSIAD-97-151, July 25, 1997).
U.S. General Accounting Office, Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996).
U.S. General Accounting Office, Military Bases: Analysis of DOD's 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).
U.S. General Accounting Office, Military Bases: Analysis of DOD's Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
U.S. General Accounting Office, Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments (GAO/NSIAD-91-224, May 15, 1991).
U.S. General Accounting Office, Military Bases: An Analysis of the Commission's Realignment and Closure Recommendations (GAO/NSIAD-90-42, Nov. 29, 1989).

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Defense Authorization Act for Fiscal Year 2002 authorized an additional Base Realignment and Closure (BRAC) round in 2005. The legislation requires the Department of Defense (DOD) to provide Congress in early 2004 with a report that addresses excess infrastructure and certifies that an additional BRAC round is needed and that annual net savings will be realized by each military department not later than fiscal year 2011. GAO is required to assess this information as well as the selection criteria for the 2005 round and report to Congress within 60 days of DOD's submission. The legislation also retains the requirement for GAO to assess the BRAC 2005 decisionmaking process and resulting recommendations. This testimony addresses (1) the BRAC process from a historical perspective, (2) GAO's role in the process, and (3) GAO's initial observations on key issues DOD is required to address in preparation for the 2005 round. Because DOD had not submitted its required 2004 report at the time GAO completed this statement, this testimony relies on GAO's prior work that addressed issues associated with excess capacity and BRAC savings. GAO's work in examining lessons learned from prior BRAC rounds found that the prior legislation and the framework it outlined served the process well, and that it should provide a useful framework for a future round. Furthermore, the legislation and its implementation provided for checks and balances to ensure the integrity of the process. GAO has played a long-standing role as an independent and objective observer of the BRAC process. GAO has operated in a real-time setting and has had access to significant portions of the process as it has evolved, thus affording DOD an early opportunity to address any concerns GAO might identify. GAO's role in the 2005 round remains the same, and GAO has been observing the process since DOD began work on the 2005 round. Timely access to DOD data is key to fulfilling GAO's role. GAO's initial observations on key issues DOD is required to address in its 2004 report are as follows. The selection criteria for the 2005 round are basically sound and provide a good framework for assessing alternatives. Nevertheless, GAO provided DOD with comments on the draft criteria that focused on the need for clarification of how DOD intends to consider total costs to DOD and other federal agencies and environmental costs in its analyses. The department has indicated that it would be issuing clarifying guidance. DOD plans to estimate its excess capacity using a methodology that it used in 1998 for similar purposes. While this methodology provides a rough indication of excess capacity for selected functional areas, it has a number of limitations that create imprecision when trying to project a total amount of excess capacity across DOD. A more complete assessment of capacity and the potential to reduce it must await the results of the current BRAC analyses being conducted by DOD during the 2005 round. DOD financial data suggest that, assuming conditions similar to those in the 1993 and 1995 BRAC rounds, each military department could achieve annual net savings by 2011. While GAO believes that the potential exists for significant savings to result from the 2005 round, there are simply too many unknowns at this time to say conclusively to what extent annual net savings will be realized by 2011.
For example, in 2005 DOD is placing increased emphasis on jointness and transformation and is likely to use BRAC to incorporate any force redeployments from overseas locations that may result from ongoing overseas basing reassessments. This suggests a need for caution in projecting the timing and amount of savings from a new BRAC round.
DOD's DTM 09-007 established business rules, or standard procedures, for estimating and comparing the full cost of military and DOD civilian personnel and contractor support. These rules were incorporated, with amendments, into DOD Instruction 7041.04, which supersedes the DTM. According to the DTM and the instruction, when developing national security policies and making program commitments, DOD officials must be aware of the full costs of personnel and have a thorough understanding of the implications of those costs to DOD and, on a broader scale, to the federal government. To facilitate this awareness, the DTM provided, and the instruction provides, business rules for DOD officials to estimate the full costs of the defense workforce and contracted support for tasks supporting planning, defense acquisition, and force structure. According to the instruction, the Office of the Secretary of Defense (OSD) and all DOD components are required to use these business rules when performing an economic analysis to support workforce decisions, such as determining the workforce mix of new or expanding mission requirements that are not inherently governmental or exempt from private-sector performance, and in-sourcing. Table 1 shows how the DOD components can generally use the business rules established in DTM 09-007 and DOD Instruction 7041.04 to estimate and compare personnel costs and to support workforce mix decisions. DOD Instruction 7041.04 does not require DOD's components to make workforce decisions based on cost alone, but it does require the components to consider cost in the decision-making process when the function in question is not required by law, regulation, or policy to be performed by a certain workforce (e.g., inherently governmental or military essential functions) and other workforce factors are equal. In other cases, the cost of using personnel to perform work may not be part of the decision-making process. For example, a June 2013 memorandum from the Assistant Secretary of Defense states that during civilian furloughs the use of either military personnel or contractors to compensate for workload resulting from the furloughs is prohibited. DOD Instruction 7041.04 states that the full costs of personnel include labor costs and current and deferred compensation costs paid in cash and in-kind, as well as direct and indirect non-labor costs. For contractor support, the full cost is the sum of the service contract cost; the cost of goods, services, and benefits provided in-kind to contractors or reimbursed by DOD; and the costs to DOD of supporting the contract and contract administration. See table 2 for a description of the full cost of performance by military and civilian personnel and contractor support, as defined in DOD Instruction 7041.04. Further, the Federal Accounting Standards Advisory Board Handbook defines full cost as the total amount of resources used to produce the output; more specifically, the full cost of an output is the sum of (1) the costs of resources consumed directly or indirectly that contribute to the output and (2) the costs of identifiable supporting services provided by units within the reporting entity and by other reporting entities. The Director of CAPE is the principal advisor to the Secretary of Defense and other senior officials in DOD for independent cost assessment, program evaluation, and analysis.
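As a rough illustration of the roll-up the business rules describe, the sketch below sums illustrative cost elements for a civilian position and for contractor performance of the same work. The element names and dollar amounts are our assumptions for illustration, not figures from the instruction or table 2.

```python
# A minimal sketch, with illustrative (not official) cost elements, of the
# kind of full-cost roll-up the business rules call for: labor plus
# deferred compensation plus direct and indirect non-labor costs for
# government personnel, versus contract price plus government-furnished
# support and administration for contractor performance.

from dataclasses import dataclass, field

@dataclass
class FullCost:
    elements: dict = field(default_factory=dict)

    def total(self):
        return sum(self.elements.values())

civilian = FullCost({
    "basic_pay": 80_000,
    "benefits_and_deferred_compensation": 35_000,  # incl. normal-cost accruals
    "indirect_non_labor_support": 12_000,          # facilities, IT, G&A share
})

contractor = FullCost({
    "contract_price": 115_000,
    "government_furnished_goods_and_services": 6_000,
    "contract_administration": 4_000,
})

for name, workforce in [("civilian", civilian), ("contractor", contractor)]:
    print(f"{name}: ${workforce.total():,}")
# The comparison is only as good as the elements included on each side;
# omitting, say, the G&A share understates the government cost.
```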
DOD Instruction 7041.04 states that CAPE, in collaboration with USD(P&R) and the Under Secretary of Defense (Comptroller), is responsible for developing a cost model for DOD-wide application to implement the business rules identified in the instruction. The instruction also states that CAPE, USD(P&R), the Under Secretary of Defense (Comptroller), and the heads of the DOD components (such as the heads of the military departments) and OSD components are responsible for using the business rules identified in the instruction. This includes using the business rules to estimate the full costs of the defense workforce in support of planning, defense acquisition, and force structure decisions, and when performing an economic analysis in support of workforce mix decisions. The DOD Office of the Actuary provides actuarial expertise on all matters relating to military compensation and benefits, including performing annual valuations of the military retirement system, education benefits under the Montgomery G.I. Bill, and health care for the military retired population.

We have previously made recommendations to DOD to develop a methodology to estimate the full cost of military and civilian personnel. In our May 2006 report, we found that from fiscal years 2005 through 2007, the Air Force, the Army, and the Navy collectively converted or planned to convert a total of 5,507 military health care positions to civilian positions. We found at that time that it was unknown whether these conversions would increase or decrease costs to DOD, primarily because the methodology each of the departments considered using did not include the full cost of military personnel. Accordingly, we recommended, and DOD generally agreed, that the Secretaries of the Air Force, the Army, and the Navy coordinate with CAPE to develop the full cost for military personnel and for federal civilian or contract replacement personnel in assessing whether anticipated costs to hire civilian replacement personnel will increase costs to DOD for defense health care. In response to this recommendation, DOD issued DTM 09-007 in January 2010, which provided a full cost methodology for assessing military, civilian, or contractor support personnel costs to inform workforce mix decisions, including military to civilian conversions. In our February 2008 report on DOD's efforts to address legislative requirements to use a full cost methodology to certify and report on planned conversions of military medical and dental positions to civilian medical and dental positions, we found that the Navy's methodology was the only one that addressed the specific factors identified by the John Warner National Defense Authorization Act for Fiscal Year 2007 for positions planned for conversion for fiscal years 2007 and 2008. The Air Force and the Army relied on composite military rates instead of using a full cost methodology; these composite rates did not include all of the required cost factors, such as training and recruiting costs. Accordingly, we recommended that DOD, among other things, develop operating guidance for the military departments to use when justifying future conversions of military medical and dental positions to civilian positions, and we stated that this guidance should stipulate requirements to use a consistent full cost methodology for comparing the cost of military and civilian personnel. Officials in CAPE attributed the development of DTM 09-007, in part, to these recommendations.
DOD Instruction 7041.04 reflects improvements to DOD's methodology for estimating and comparing the full cost to the taxpayer of work performed by military and civilian personnel and contractor support since the initial issuance of DTM 09-007, but the instruction is still limited in certain areas. For example, the instruction provides limited guidance on estimating overhead costs and on adjusting advertising and recruiting costs and training costs. In addition, CAPE has not established business rules for estimating the cost of a part of DOD's total workforce—Reserve and National Guard personnel. Further, CAPE has not yet evaluated certain retirement and retiree health benefit cost elements that it is using to reflect the full cost of currently employed military and civilian personnel.

DOD Instruction 7041.04 reflects a number of improvements in estimating certain cost elements in comparison to DTM 09-007, addressing some of the limitations that users of the DTM and the interested parties we met with identified. While the DTM did not identify a responsible office for preparing clarifying guidance to assist users in applying the methodology, the instruction establishes that CAPE, among its other responsibilities, will be responsible for issuing such guidance. Also, CAPE has expanded the methodology to address specific elements that users of the DTM previously identified as missing, such as the cost of foregone taxes, lost productivity during periods of transition, and some other non-common costs associated with converting from contract to government performance. Lastly, CAPE has been developing, refining, and testing a DOD-wide software tool—the Full Cost of Manpower—that employs the business rules established in the instruction, and the instruction provides a link to this tool. However, while CAPE has addressed several of the limitations that users of the preceding DTM and other interested parties identified, certain limitations still exist.

DOD's instruction provides limited direction on estimating general and administrative and overhead costs and on adjusting advertising and recruiting costs and training costs. DOD Instruction 7041.04 states that the cost elements contained in the instruction can be modified or augmented in each specific case as necessary, but that the DOD components should be prepared to support each decision with sufficient justification. Best practices as reflected in GAO's Cost Estimating and Assessment Guide state, however, that establishing ground rules for cost estimating provides a common set of agreed-on estimating standards that provide guidance and minimize conflicts in definitions. The instruction directs users to include general and administrative and overhead costs in cost estimates. These costs include the goods, services, and benefits that support more than one organization; more specifically, they include a share of supplies, facilities, and professional support services, and a fair share of the recurring costs of higher-level management and professional support services for organizations that produce or provide more than one product or service. The instruction provides a list of the costs included for military and civilian personnel, but it does not provide data sources or guidance on how to estimate them. Instead, CAPE identified subject-matter experts in each of the military departments to serve as points of contact to address these types of questions and provide assistance in the future.
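The risk of leaving such ground rules unstated can be illustrated with a simple sensitivity check. In the hypothetical sketch below, two estimators cost the same position using different overhead assumptions; the rates are invented for illustration.

```python
# Hypothetical illustration of the consistency problem described above:
# two estimators costing the same position, each choosing a different
# overhead assumption because the guidance does not fix one.

labor_cost = 100_000  # direct labor for the position

for estimator, overhead_rate in [("estimator A", 0.12), ("estimator B", 0.30)]:
    estimate = labor_cost * (1 + overhead_rate)
    print(f"{estimator}: overhead {overhead_rate:.0%} -> ${estimate:,.0f}")

# The two estimates differ by $18,000 for identical work -- the kind of
# divergence a common set of ground rules is meant to prevent.
```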
DOD Instruction 7041.04 defines direct costs as costs that are related directly to the production of a specific good or the performance of a specific service; typical direct costs are the compensation of employees for performance of work, fringe benefit costs, and the costs of materials consumed or expended in the performance of the work. For overhead, the instruction directs users to apply a standard rate of 12 percent of labor costs, which is the standard rate applied when comparing the cost of government performance with conversion of the work to contractors through a public-private competition using OMB Circular A-76. However, both the DOD Inspector General and we have found that the standard rate of 12 percent of labor costs does not have a sound analytical basis, which leaves some uncertainty about whether the rate may be understated or overstated. Our past work acknowledged the difficulty of obtaining reliable cost data that could provide a sound basis for an overhead rate, but we concluded that until actual overhead costs are used to develop a more meaningful standard overhead rate, the magnitude of savings expected from public-private competitions will be imprecise and competition decisions could continue to be controversial.

Users of the preceding DTM also questioned its use of average rates, based on end strength (the actual number of personnel on board at the end of a fiscal year), to estimate advertising and recruiting costs, and added that some functions or specialties would cost more to recruit and train. Specifically, the users told us that, for some specialties, the average training rates are too low, while for other specialties, the rates are too high. The users also said these costs vary depending on the rank or grade of the personnel and on whether the specialty requires training or certifications. For example, the average rate for training in the Army for fiscal year 2012 was $6,490 per servicemember. However, according to Army data, the amortized cost of training for an officer with a general aviation area of concentration can range from about $6,500 to $93,600 a year, depending on rank. In contrast, the amortized cost of training an enlisted member with an infantryman area of concentration can range from about $4,600 to $8,000 a year, depending on pay grade. Because of the lack of explanatory guidance on these cost elements, users of the preceding DTM told us they have developed their own methods for estimating and adjusting these costs, or have not included the costs in their estimates. Some officials have requested more developed guidance on these cost elements, but CAPE did not provide more specific direction in its recently issued instruction. Rather, as previously mentioned, CAPE identified subject-matter experts in each of the military departments to serve as points of contact to address these types of questions. However, without more developed guidance that establishes a clear set of ground rules or standards, subject-matter experts in the military departments and cost estimators must make their own assumptions, which can lead to inconsistent estimates and hinder DOD's and congressional decision makers' visibility over the costs of accomplishing work with the different workforces.

Although Reserve and National Guard personnel are a significant part of the military workforce, comprising about 38 percent of military workforce end strength in fiscal year 2012, CAPE did not establish business rules in its instruction for developing cost estimates for these personnel. The Federal Accounting Standards Advisory Board Handbook states that a cost methodology that captures full costs should include any resources directly or indirectly used to perform work.
Furthermore, the instruction states that the DOD components should use the business rules to account for the full cost of personnel when developing independent cost estimates and pricing units in the force structure. Military service officials told us that they currently use or are considering using Reserve and National Guard personnel to perform work (for example, to provide engineer units to assist local and state agencies in, among other things, construction of roads, bridges, and emergency housing), and that in the absence of business rules for estimating the cost of these personnel, some officials have generated cost estimates for these workforces using their own methods. For example, one of the Air Force commands we interviewed conducted a business case analysis to evaluate the pros and cons associated with alternative workforce structures to help it meet its requirements. This analysis considered three courses of action: (1) adding additional contractor support, (2) adding additional civilian and active duty military personnel, or (3) using reservists to provide quick surge capacity. The command officials said they used the methodology in the preceding DTM to conduct part of the analysis and used Air Force guidance for the reserve component.

DOD's military workforce consists of both active military and Reserve and National Guard personnel, and we have previously reported on the importance of DOD employing a strategic approach to managing its total workforce to achieve its missions. In addition, in January 2013, a Reserve Forces Policy Board report to the Secretary of Defense recommended that CAPE establish DOD policy or guidance for calculating the full cost of the reserve components. The report concluded that without such a policy, senior leaders within DOD will not have complete or uniform data on the total costs associated with active and reserve personnel to make informed workforce decisions. According to officials in CAPE, determining the best person, whether civilian or military, to fill a full-time position is a far different question than determining the best mix of active and reserve personnel, and they said they will work to develop additional guidance and tools as needed. CAPE added that the Board's singular focus on cost obscured the fact that many other factors, such as peacetime and wartime demands, deployment frequency and duration, and unit readiness, are of equal or greater importance. An official in the Office of the Assistant Secretary of Defense for Reserve Affairs, however, stated that the reserve components should be included in the instruction in order for DOD to make effective workforce decisions for its total workforce. CAPE, for its part, added "active duty" to the title of Instruction 7041.04 to make clear that the instruction does not address Reserve and National Guard personnel costs. Without establishing business rules for estimating the cost of Reserve and National Guard personnel, however, whether as part of the instruction or in some other venue, DOD cannot create estimates and comparisons to inform workforce mix and force structure decisions for its total workforce.

The instruction directs users to include payments the government makes into funds to support the retirement and health benefits that current military and civilian personnel who become eligible for retirement will receive upon retirement; however, a portion of these payments may not be relevant to current personnel.
Retirement and health benefits are deferred benefits that current employees will receive in the future. These deferred benefits, regardless of when the employee receives them, are attributable to current service by an employee and are to be included in considering the full costs of a workforce. According to the Federal Accounting Standards Advisory Board Handbook, however, costs attributable to services previously rendered by current retirees, and to the past service of current active civilian and military personnel, are sunk costs that should not be included in determining current costs. Accordingly, the instruction appropriately includes several cost elements that capture the actuarial "normal cost" to DOD of future retirement and health benefits for military and civilian personnel—that is, the actuarially determined cost of future benefits attributable to an employee's service during a given fiscal year. In addition, the instruction includes several cost elements that represent costs to other federal agencies for deferred benefits. For contractor support, the costs of retirement and health care are included in the cost of the contract. See table 3 for a list of cost elements in the instruction related to retirement, health, and veteran benefits for military and civilian personnel.

The DOD components we examined, including commands, offices, and defense agencies, for the most part reported to us that they have incorporated the business rules identified in the DTM and the instruction into their workforce mix decisions. Still, implementation challenges exist. Some of the DOD components we examined used the business rules for estimating and comparing the full cost of DOD personnel and contract support to inform workforce mix decisions, while others reported they have had limited opportunities to use the business rules. However, some military service officials said they did not know the extent to which officials at more local levels were aware of the DTM or used it, and therefore their reported instances of using the business rules may underestimate the actual number of instances. In addition, at the time we met with these organizations, CAPE had not completed development of a DOD-wide software tool for implementing the instruction, and in the meantime the components we examined had developed their own cost tools to apply the business rules and develop cost estimates. Further, the instruction that replaced DTM 09-007 directs users to data that may not contain the most accurate information for determining contractor support costs.

While the DOD components we examined generally reported to us that they have incorporated the business rules identified in the preceding DTM and current instruction to create cost estimates in support of workforce mix decisions, their opportunities to use the business rules have been limited. Among the 13 DOD components we included in our scope, 8 reported that they used the preceding DTM and provided documentation showing that cost was a factor in their decision making from January 2010 through June 2013; 4 told us that they had not had the opportunity to use the DTM but were aware of it; and 1 reported that it was not aware of the DTM.
According to the DTM and instruction, the policies and procedures established within them are applicable to all DOD components. Specifically, the DTM stated, and the instruction states, that when developing national security policies and making program commitments, DOD officials must be aware of the full costs of personnel and have a thorough understanding of the implications of those costs to DOD and, on a broader scale, to the federal government. All eight of the DOD components we examined that reported having used the DTM to inform workforce mix decisions were commands within the military services, and a majority of their cost estimates supported in-sourcing decisions—moving work performed by contractors to performance by DOD employees. For example, of the 649 instances in which the commands we examined reported using the DTM from January 2010 through June 2013, 639, or 98 percent, were to inform in-sourcing decisions. Of those 639, the Air Force Materiel Command accounted for approximately 525, or 82 percent. These commands provided us with documentation such as decision memoranda, completed cost-benefit analysis spreadsheets, and briefing slides given to senior leadership showing that cost was a factor in their decision-making. Further, with the exception of one instance in the Naval Supply Systems Command, officials from the DOD components we met with told us that, since January 2010, they have converted neither military positions to civilian positions nor civilian positions to military positions. The Army Installation Management Command and the Air Force Materiel Command reported a few instances of new or expanding missions that required the use of the DTM. Table 4 shows the number of times the commands we examined used the DTM and for what reasons.

Some DOD officials said they were not clear on uses of the DTM beyond in-sourcing, and others said their reported instances of using the business rules may underestimate the actual number of instances. Officials within OSD and headquarters elements said they did not know the extent to which officials throughout the department are aware of the instruction and the requirement to use the associated business rules. For example, it was not clear to some Army officials in one office that the business rules were to be used for costing out decisions beyond in-sourcing. Other officials noted, however, that the business rules identified in the DTM, or references to the DTM, have been incorporated into some service-level policies and procedures to support decisions other than in-sourcing. For example, the Army's approval form for new service contracts includes a question asking whether the cost of labor for new services contracts was determined using the business rules. Some military service officials said they did not know the extent to which the reported data represented the full degree to which organizations were, in fact, using the business rules; for example, one official acknowledged to us that it is difficult for his organization to identify the degree to which the business rules were being applied at local levels. Officials within the agencies and offices that were aware of the DTM but had not used it told us of several reasons why they had not conducted these types of workforce cost estimates since the initial issuance of the DTM.
For example, officials said they had not used it, in part, due to issues related to the current fiscal environment, such as concerns about anticipated reductions in funds available for contractor support and limitations on the number of civilian full-time equivalents. Officials also said they did not use the DTM because their offices do not have direct access to military personnel and have had no new or expanded missions.

CAPE recently completed development of a DOD-wide software tool for implementing DOD Instruction 7041.04. During the time we met with component organizations, however, the DOD-wide tool was not available, and in the absence of the tool the components we examined had developed their own tools to apply the business rules in the DTM to develop workforce cost estimates. When initially released on January 29, 2010, DTM 09-007 called for CAPE, within one year of its publication, to develop a cost model for DOD-wide application that employs the business rules set forth in the DTM. In addition, best practices in GAO's Cost Estimating and Assessment Guide state that in order to be reliable, cost estimates should be derived using an overall process that produces high-quality cost estimates that are comprehensive and accurate and that can be easily and clearly traced, replicated, and updated. According to officials in CAPE, they contracted out the development of a software tool for the required cost model to implement the business rules and, when DOD issued Instruction 7041.04 on July 3, 2013, CAPE also released its DOD-wide tool for use across the department. Officials in CAPE told us that the components' use of the DOD-wide tool will not be required, enforced, or monitored. DTM 09-007 stated that CAPE would oversee compliance with both the DTM and the use of the DOD-wide tool. When DOD Instruction 7041.04 was issued, however, the requirement for CAPE to oversee use of its tool was removed.

In the absence of CAPE's DOD-wide tool, the Air Force modified an existing software tool originally used to inform public-private competition decisions, and the Army, the Marine Corps, and the Navy developed tools using an existing off-the-shelf software program. While the services developed and use different tools, those tools generally incorporate the cost elements identified in the DTM and current instruction. The defense agencies we met with said they had not created tools to implement the business rules, and officials from these agencies said they would use the DOD-wide tool when it was made available to them.

According to the instruction, the business rules provide a consistent approach for all DOD components to estimate the cost of personnel. Accordingly, the instruction provides a list of potential cost factors associated with personnel that should be considered in the decision-making process even when personnel costs are not the only factor. Officials in CAPE stated, however, that they have not reviewed the services' tools to ensure they are in compliance with the business rules and do not plan to review them. An assessment of these various tools would enable CAPE to identify the advantages and disadvantages of allowing multiple cost estimation tools. DOD decision makers who then use these various economic analyses or cost estimates would have greater assurance that they are using reliable results to make workforce mix decisions.
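One form such an assessment could take is a consistency check that runs the same scenario through each tool and flags material divergence. The sketch below is purely illustrative; the tool interfaces, scenario fields, and tolerance are our assumptions, not features of the Full Cost of Manpower tool or the services' tools.

```python
# A sketch of one simple check CAPE could run when assessing the services'
# tools: feed each tool the same scenario and flag estimates that diverge
# from the DOD-wide tool by more than a tolerance. Tool interfaces here
# are hypothetical stand-ins.

TOLERANCE = 0.05  # flag differences greater than 5 percent

def check_consistency(scenario, tools, reference):
    baseline = reference(scenario)
    for name, tool in tools.items():
        estimate = tool(scenario)
        gap = abs(estimate - baseline) / baseline
        status = "OK" if gap <= TOLERANCE else "REVIEW"
        print(f"{name}: ${estimate:,.0f} ({gap:.1%} vs baseline) {status}")

# Example with stub estimators standing in for the real tools.
scenario = {"grade": "GS-12", "location": "CONUS", "years": 5}
check_consistency(
    scenario,
    tools={"service tool A": lambda s: 128_000, "service tool B": lambda s: 143_000},
    reference=lambda s: 130_000,  # stub for the DOD-wide tool
)
```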
When estimating contractor support costs for a new or expanding mission, Instruction 7041.04 provides several options for users to consider, although these options may not be the most accurate data sources. For example, the instruction directs the user to begin with the negotiated price of an existing contract, which DOD and military department officials told us is their preferred option. If an existing contract is not available, those officials attempt to find another contract that is similar to the contractor support for which costs are being estimated. To facilitate this, most of the services, agencies, and offices we met with maintain their own databases of historical contractor data. If a similar contract is not available, the instruction directs the user to either a General Services Administration (GSA) website or the Army's online database for contract management and reporting to obtain contractor support costs for services. We have previously highlighted limitations with GSA's data, however, such as the fact that contractors' published rates on the website do not reflect post-competition prices. The GSA website allows users to search for services and then identifies a list of contractors that provide the service, along with their rates; the listed rates, however, may be further negotiated with the contractor. Further, DOD officials we spoke to during this review said that GSA's website does not provide targeted data, such as actual contractor rates by function or geographic location. For example, officials with one command we spoke to noted their use of a contractor who possesses unique specialized nuclear weapons-related knowledge; according to those officials, GSA's website does not contain available data for unique positions like these. In addition, officials said that GSA's website provides too large a range of rates for them to develop realistic estimates. Best practices from GAO's Cost Estimating and Assessment Guide state that a basic characteristic of credible cost estimates is having multiple sources of suitable, relevant, and available data. In July 2013, when the DOD instruction replacing DTM 09-007 was issued, the Army's Contractor Manpower Reporting Application was added as an additional source for contractor data. The Contractor Manpower Reporting Application is an online database that automates the Army's contract management and reporting process for contract management personnel by allowing users to view contract information, track contract data, and view various reports based on contract data in the application. The Contractor Manpower Reporting Application business process captures information on funding source, contracting vehicle, organization supported, mission and function performed, labor hours, and labor costs for contracted efforts, among other things. Currently the application collects data for the Army, the Air Force, and the Navy on the number of contractor employees by using direct labor hours and associated cost data. However, DOD is still developing its department-wide Contractor Manpower Reporting Application system, and we previously found that a number of factors limit the accuracy and completeness of inventory data. Moreover, some DOD officials were unaware of some of the limitations of the contractor support data sources provided in the instruction. In the absence of data sources that are consistent with established practices for developing cost estimates, DOD components may be using data that do not lead to credible contractor cost estimates.
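Where reported labor hours and labor costs are available, an actuals-based benchmark can be derived directly from them. The sketch below is illustrative; the field names mimic the kind of data the Contractor Manpower Reporting Application collects but are our assumptions, as is the 2,080-hour full-time-equivalent convention.

```python
# A minimal sketch (CMRA-style fields are illustrative) of deriving an
# effective contractor labor rate and full-time-equivalent count from
# reported direct labor hours and labor cost, rather than from published
# catalog rates that may not reflect negotiated prices.

HOURS_PER_FTE = 2_080  # a common full-time-equivalent assumption

record = {
    "function": "logistics support",
    "direct_labor_hours": 45_000,
    "invoiced_labor_cost": 4_200_000,
}

ftes = record["direct_labor_hours"] / HOURS_PER_FTE
hourly_rate = record["invoiced_labor_cost"] / record["direct_labor_hours"]
print(f"{record['function']}: {ftes:.1f} FTEs at ${hourly_rate:,.2f}/hour")
# ~21.6 FTEs at ~$93.33/hour -- an actuals-based benchmark for estimating
# the cost of similar contracted work.
```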
With a total workforce of about 3 million individuals, as well as an estimated 710,000 contractor full-time equivalents, DOD’s investment in personnel to accomplish its missions constitutes a substantial financial commitment. In those cases where the department can choose what workforce it wants to utilize to accomplish tasks, it is crucial that it have an accurate method for comparing the costs of its civilian, military, and contractor personnel. DOD has taken important steps to achieve this goal since our 2008 report on military-to-civilian conversions for health care personnel, including the development of DTM 09-007 and its successor Instruction 7041.04.

The department’s current direction for estimating and comparing the cost of its available workforces, however, could be improved. For instance, users currently find it difficult to develop estimates of particular costs such as overhead, advertising and recruiting, and training, and the services have not received guidance on developing estimates for Reserve and National Guard personnel. Further, as we have noted, there is some disagreement within the department about the inclusion of certain retirement costs in current workforce cost estimates. More comprehensive guidance in these areas could improve the components’ ability to make accurate cost comparisons between workforces. Similarly, the department currently lacks assurance that workforce cost estimates and comparisons are consistent across the department, but by evaluating the different cost estimation tools currently being used by the components, it could decide on the best course to ensure consistency and accuracy. Also, current and accurate data on contractor support costs are critical for the department in making workforce decisions, but due to limitations in some of the data DOD has identified for making contractor support cost estimates, components may be relying on data sources that do not produce intended results. While federal agencies must always make consistent and cost-effective choices in managing their resources, this is especially true given the ongoing fiscal challenges that have imposed budgetary constraints across the federal government and are likely to continue for some time.

To improve DOD’s estimates and comparisons of the full cost of its military, civilian, and contractor workforces, we are making the following five recommendations to the Secretary of Defense. To improve DOD’s methodology for estimating and comparing the full cost of its various workforces, we recommend that the Secretary of Defense direct the Office of Cost Assessment and Program Evaluation to take the following three actions:

- Further develop guidance for cost elements that users have identified as challenging to calculate, such as general and administrative, overhead, advertising and recruiting, and training;
- Develop business rules for estimating the full cost of National Guard and Reserve personnel; and
- In coordination with the department’s Office of the Actuary and appropriate federal actuarial offices, reevaluate the inclusion and quantification of pension, retiree health care costs, and other relevant costs of an actuarial nature and make revisions as appropriate.
To facilitate consistent workforce cost estimates and comparisons, we recommend that the Secretary of Defense direct the Office of Cost Assessment and Program Evaluation to assess the advantages and disadvantages of allowing the continued use of different cost estimation tools across the department or directing department-wide application of one tool, and revise its guidance in accordance with the findings of its analysis.

To improve DOD’s ability to estimate contractor support costs, we recommend that the Secretary of Defense direct the Office of Cost Assessment and Program Evaluation, consistent with established practices for developing credible cost estimates, to research the data sources it is currently using and reassess its contractor support data sources for use when determining contractor support costs.

In written comments on a draft of this report, DOD agreed with our findings that it must make cost-effective decisions in structuring and shaping its workforce of military personnel, government civilians, and contracted support. Specifically, DOD concurred with two of our recommendations and partially concurred with three. DOD’s comments are reprinted in appendix II. DOD also provided technical comments on the draft report, which we incorporated as appropriate.

DOD partially concurred with our recommendation to further develop guidance for the cost elements that users have identified as challenging to calculate, such as general and administrative, overhead, advertising and recruiting, and training. In commenting on our report, DOD stated that it will continue to review the methodology for calculating these cost elements and issue clarifying guidance where necessary or appropriate. DOD also stated that it is part of an inter-agency effort that is developing government-wide cost comparisons and that, if or when government-wide cost comparison guidance is published, DOD will adjust its own guidance accordingly. We continue to believe that fully addressing this recommendation would enhance the development of DOD’s methodology for estimating and comparing the cost of its workforces.

DOD also partially concurred with our recommendation to develop business rules for estimating the full cost of National Guard and Reserve personnel. In its comments, DOD stated that it is assessing the potential need for reserve manpower costing models, as well as the identification of questions that reserve costing policies and models would be used to address. Once it has gained a more thorough understanding of the questions to be addressed by reserve cost estimates, it will begin work on guidance as necessary and development of reserve component costing models if appropriate. We continue to believe that fully addressing this recommendation would enhance the development of DOD’s methodology for estimating and comparing the cost of its workforces and DOD’s ability to make more informed workforce decisions.

DOD concurred with our recommendation to reevaluate, in coordination with the department’s Office of the Actuary and appropriate federal actuarial offices, the inclusion and quantification of pension, retiree health care costs, and other relevant costs of an actuarial nature and to make revisions as appropriate. In its response to our report, DOD stated that it will work with the Office of the Actuary, and others as necessary, to reevaluate the inclusion and quantification of these cost elements and, following evaluation, revisions will be made.
We believe such actions, if implemented effectively, will fully address the intent of the recommendation.

DOD partially concurred with our recommendation to assess the advantages and disadvantages of allowing the continued use of different cost estimation tools across the department or directing department-wide application of one tool, and revise its guidance in accordance with the findings of its analysis. We acknowledge, as DOD stated in comments on our report, that the department is open to assessing the advantages and disadvantages of allowing the continued use of alternate cost estimation tools among the various components and, if necessary, will revise its guidance based on this assessment. However, to satisfy the intent of this recommendation, DOD will need to complete such an assessment and act on its findings. Doing so will facilitate more consistent workforce cost estimates and comparisons when DOD’s components use the department’s methodology for estimating and comparing the cost of their workforces.

Finally, DOD concurred with our recommendation to research the data sources it is currently using and reassess its contractor support data sources for use when determining contractor support costs. In its comments, DOD stated that as the department improves the fidelity of its contractor support cost data through the collection of statutorily required information via the Enterprise-wide Contractor Manpower Reporting Application in support of the Inventory of Contracts for Services, it will modify its guidance accordingly. We believe that this action, if implemented effectively, will address the intent of the recommendation.

We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Director of DOD’s Office of Cost Assessment and Program Evaluation, the Office of Management and Budget, and appropriate congressional committees. In addition, this report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for the Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To evaluate the extent to which the Department of Defense’s (DOD) methodology for estimating and comparing the cost of work performed by military, civilian, and contractor personnel reflects the full cost to the taxpayer, we identified each of the cost elements contained in the most recent version of Directive-Type Memorandum (DTM) 09-007 and DOD Instruction 7041.04 and compared them to best practices in GAO’s Cost Estimating and Assessment Guide, guidance in the Federal Accounting Standards Advisory Board Handbook, and the Office of Management and Budget guidelines for Performance of Commercial Activities. We also reviewed pertinent DOD and service-specific policies and guidance related to workforce mix and economic analysis. In addition, we interviewed knowledgeable officials within the offices of the Under Secretary of Defense (Personnel and Readiness), the Director of Cost Assessment and Program Evaluation, DOD’s Office of the Actuary, and the military services. We also met with the Office of Management and Budget as well as other experts and interested parties we selected based on their work on issues related to the DTM.
Specifically, we met with the Center for Strategic and International Studies and the Center for Strategic and Budgetary Assessments, both of which are independent nonprofit policy research institutes. We also met with the American Federation of Government Employees, a federal government employee union, and the Professional Services Council, a trade association of the government professional and technical services industry. We obtained the perspective of the National Guard and Reserve community by meeting with the Office of the Secretary of Defense (Reserve Affairs) and the Reserve Forces Policy Board. We also reviewed published work by independent research institutes evaluating the business rules contained in the preceding DTM, such as those issued by the Center for Strategic and International Studies, the Project on Government Oversight, and the Reserve Forces Policy Board. DOD Instruction 7041.04 was issued on July 3, 2013. Therefore, for a majority of our review, DTM 09-007 was the most current guidance.

Because of the way civilian personnel data for the Marine Corps is reported to the Defense Manpower Data Center, we were unable to align numbers of civilian personnel with specific Marine Corps commands or offices. Therefore, the Marine Corps was excluded from our selection criteria and we did not select any Marine Corps commands as part of our non-probability sample. Once the data for the other components were compiled, we rank ordered the commands within each of the military departments and the defense agencies from largest to smallest based on civilian personnel counts. For each of the military departments and defense agencies, we divided the rank ordered list of commands and agencies into three groups: large, medium, and small. We considered commands and agencies with 10,000 or more full-time civilian employees as large; commands and agencies with at least 1,000 but fewer than 10,000 full-time civilian employees as medium; and commands and agencies with fewer than 1,000 full-time civilian employees as small. We generated a selection number based on the total number of commands and agencies and counted down the rank ordered list of commands and agencies to identify those commands and agencies that we would meet with. This method resulted in one selected command, office, or agency from each of the categories of large, medium, and small.

In addition to discussing with each of these entities their implementation of the business rules identified in DTM 09-007, we requested the number of times they used the business rules to inform workforce decisions (e.g., in-sourcing, workforce conversions, and new or expanded missions) from January 2010, when the DTM was issued, to June 2013. We did not independently validate the number of uses of the business rules reported to us by each of the entities or the cost estimates used to inform the workforce decisions. Table 5 shows the commands and agencies we met with to determine the extent to which DOD incorporated the business rules contained in DTM 09-007. In addition, we met with officials in other DOD entities that are involved in guiding the implementation of the methodology. Although the Marine Corps was not part of our non-probability sample, we met with several Marine Corps commands and offices to discuss their implementation of the business rules identified in DTM 09-007. We also received a consolidated Marine Corps response to our questions on the implementation of the methodology.
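The size-stratification step just described reduces to a few lines of logic. In the sketch below, the command names and civilian headcounts are hypothetical; only the 10,000- and 1,000-employee thresholds come from the methodology above.

```python
# Sketch of the size-stratification step used to build the non-probability
# sample. Command names and civilian headcounts are invented for illustration.

commands = {
    "Command A": 42_000,
    "Command B": 8_500,
    "Command C": 3_200,
    "Command D": 950,
    "Command E": 120,
}

def size_category(civilians: int) -> str:
    """Bucket a command or agency by full-time civilian employees."""
    if civilians >= 10_000:
        return "large"
    if civilians >= 1_000:
        return "medium"
    return "small"

# Rank order from largest to smallest, then assign each command to a group.
ranked = sorted(commands.items(), key=lambda item: item[1], reverse=True)
groups = {"large": [], "medium": [], "small": []}
for name, count in ranked:
    groups[size_category(count)].append(name)

print(groups)
# {'large': ['Command A'], 'medium': ['Command B', 'Command C'],
#  'small': ['Command D', 'Command E']}
```

One entity would then be selected from each of the three groups, as described above.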
Table 6 shows the other DOD organizations we contacted to discuss the department’s efforts to implement the business rules contained in the preceding DTM. Further, we received a demonstration by the Office of Cost Assessment and Program Evaluation of its DOD-wide Full Cost of Manpower tool to gain an understanding of DOD’s application of the business rules contained in DTM 09-007. We also requested and obtained copies of the tools from each of the commands, offices, and defense agencies we met with to develop an understanding of the software tools they developed to apply the business rules in the DTM and in the instruction. For the Air Force, we obtained a copy of its software tool, DTM-COMPARE. For the Army, the Marine Corps, and the Navy, we obtained copies of the off-the-shelf software that was programmed to implement the business rules contained in the preceding DTM. In addition, we attended an Army Installation Management Command training on its Cost Analysis Workbook tool. Further, we obtained and reviewed sample documentation from the military commands that had applied the business rules identified in the preceding DTM to support workforce mix decisions.

We conducted this performance audit from December 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Brenda S. Farrell, (202) 512-3604 or [email protected]. In addition to the contact named above, David Moser, Assistant Director; Timothy Carr, Brian Pegram, Erin Preston, Frank Todisco, Erik Wilkins-McKee, and Michael Willems made key contributions to this report.

Defense Contractors: Information on the Impact of Reducing the Cap on Employee Compensation Costs. GAO-13-566. Washington, D.C.: June 19, 2013.
Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.
Defense Acquisitions: Continued Management Attention Needed to Enhance Use and Review of DOD’s Inventory of Contracted Services. GAO-13-491. Washington, D.C.: May 23, 2013.
Pension Costs on DOD Contracts: Additional Guidance Needed to Ensure Costs Are Consistent and Reasonable. GAO-13-158. Washington, D.C.: January 22, 2013.
Federal Workers: Results of Studies on Federal Pay Varied Due to Differing Methodologies. GAO-12-564. Washington, D.C.: June 22, 2012.
Defense Workforce: DOD Needs to Better Oversee In-sourcing Data and Align In-sourcing Efforts with Strategic Workforce Plans. GAO-12-319. Washington, D.C.: February 9, 2012.
DOD Met Statutory Reporting Requirements on Public-Private Competitions. GAO-11-923R. Washington, D.C.: September 26, 2011.
Military Personnel: Comparisons between Military and Civilian Compensation Can Be Useful, but Data Limitations Prevent Exact Comparison. GAO-10-666T. Washington, D.C.: April 28, 2010.
Military Personnel: Military and Civilian Pay Comparisons Present Challenges and Are One of Many Tools in Assessing Compensation. GAO-10-561R. Washington, D.C.: April 1, 2010.
Military Personnel: Guidance Needed for Any Future Conversions of Military Medical Positions to Civilian Positions. GAO-08-370R. Washington, D.C.: February 8, 2008.
Military Personnel: Military Departments Need to Ensure That Full Costs of Converting Military Health Care Positions to Civilian Positions Are Reported to Congress. GAO-06-642. Washington, D.C.: May 1, 2006.
DOD must make cost-effective decisions in the use of its military, civilian, and contractor workforces, and CAPE issued guidance that provides a methodology for cost estimates and comparisons among workforces. The conference report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO review the cost methodology in Directive-Type Memorandum (DTM) 09-007 or its successor guidance to determine whether that methodology reflects the actual, relevant, and quantifiable costs to taxpayers for work performed by these workforces. This report evaluates the extent to which (1) DOD's methodology reflects the full cost to the taxpayer, and (2) DOD's components incorporated the business rules in the memorandum and successor instruction into workforce mix decisions. GAO compared DOD's cost methodology to guidance from other government entities and interviewed officials from components applying the methodology, as well as other appropriate DOD officials.

The Department of Defense (DOD) has improved its methodology for estimating and comparing the full cost to the taxpayer of work performed by military and civilian personnel and contractor support, but the methodology continues to have certain limitations. Best practices state that cost estimating rules should include a common set of standards that minimize conflicts in definitions, but DOD's methodology does not provide guidance for certain costs. For instance, its estimate of service training costs divides total training funding by the number of servicemembers. Using this method yields an average training cost of $6,490 per servicemember in the Army for fiscal year 2012. However, Army data show that training for a general aviation officer can be as high as $93,600 a year, while the training for an enlisted infantryman can be as low as about $4,600 a year. DOD's Cost Assessment and Program Evaluation (CAPE) office has not provided more specific direction on training costs, although some officials have requested it.

Additionally, CAPE officials told GAO they did not include Reserve and National Guard personnel in the methodology because these personnel are usually used on a short-term basis. However, a portion of these personnel do serve in a full-time capacity. The Federal Accounting Standards Advisory Board has noted that a cost methodology should include any resources directly or indirectly used to perform work, and DOD relies on Reserve and National Guard personnel, for example, to provide airlift capabilities in support of military operations. Further, CAPE has not yet evaluated certain retirement-related cost elements. A portion of these cost elements may not be appropriate to include because they are not attributable to current military and civilian personnel. Without more specific direction in these areas, it will be more difficult for DOD to have reasonable assurance that its cost estimates and comparisons reflect the full and most accurate cost to the taxpayer of work performed by its various workforces.

DOD components GAO examined generally have incorporated business rules contained in the memorandum and successor instruction into their workforce mix decisions, although DOD officials said opportunities to use the rules have been limited due to budgetary factors and few new or expanded missions. Moreover, implementation challenges exist. Some officials raised questions about the extent to which other officials throughout DOD are aware of a requirement to use the methodology for decisions other than in-sourcing.
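Returning to the training-cost averaging problem discussed above, a minimal sketch of the arithmetic follows. The two per-specialty costs are the figures cited in this report; the headcounts are invented solely to make the blending concrete.

```python
# Sketch of how a funding-divided-by-headcount average can mask wide
# variation in training costs. Specialty costs are the cited figures;
# headcounts are hypothetical.

training = {  # specialty: (annual training cost per member, notional headcount)
    "aviation officer": (93_600, 2_000),
    "enlisted infantryman": (4_600, 50_000),
}

total_cost = sum(cost * heads for cost, heads in training.values())
total_heads = sum(heads for _, heads in training.values())
blended_average = total_cost / total_heads

print(f"Blended average: ${blended_average:,.0f} per member")  # ~$8,023 here
# The blended figure sits far below the aviation cost and above the infantry
# cost, so using it for either specialty would distort a workforce comparison.
```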
Further, CAPE recently completed a DOD-wide software tool for implementing its instruction, but at the time of GAO's review, some DOD components had developed their own tools. CAPE officials told GAO that the components' use of its DOD-wide tool will not be required, enforced, or monitored, and that CAPE has not reviewed the components' tools. Best practices state that to be reliable, cost estimates should be derived using a process that produces results that are accurate and can be traced, replicated, and updated. Assessing these tools would enable CAPE to identify the advantages and disadvantages of allowing multiple tools and provide reasonable assurance that cost estimates are reliable. Further, the instruction directs users to a General Services Administration (GSA) website for determining contractor support costs. GAO has reported on limitations of GSA's website such as its reporting of data that do not reflect post-competition prices. Without reliable data sources, DOD components may not be using the most suitable data needed to produce credible cost estimates. GAO is recommending that DOD develop further guidance on certain cost elements, such as training; develop business rules for estimating Reserve and National Guard costs; evaluate inclusion or non-inclusion of cost elements related to retirement; assess cost models being used across the department; and reassess sources for contractor data. DOD concurred with two and partially concurred with three of GAO’s recommendations. GAO continues to believe it is important for DOD to fully address the recommendations in order to achieve desired results.
User fees—charges individuals or firms pay for services they receive from the federal government—are not new but have begun to play an increasingly important role in financing federal programs, particularly since the Balanced Budget and Emergency Deficit Control Act of 1985. Increases and extensions of user fees could be used to help meet the nation’s budget deficit reduction goals or increase the level of government services.

User fees may be established in various ways. General user fee authority was established under title V of the Independent Offices Appropriation Act (IOAA) of 1952. The IOAA gave agencies broad authority to levy user charges on identifiable beneficiaries by administrative regulation. Before its enactment, an agency generally imposed fees only if it had specific congressional authorization to do so. User fee authority may also be granted through specific authorizing legislation. For example, the Congress mandated user fees in the authorizing legislation of a new program to certify agricultural products as organically grown. These fees will be used to fund administrative and review activities necessary to the certification process. Agencies may also request approval to charge user fees by including proposals to do so in their annual budget submissions to the Congress.

The IOAA provides general guidance to agencies but is not specific enough to conclusively determine the appropriateness or amount of a user fee in a given situation. The U.S. Court of Appeals for the District of Columbia Circuit has interpreted the IOAA to mean that if a government service provides an “independent” public benefit, no user fee should be charged for that portion of the benefit. Furthermore, according to the 1993 version of Circular A-25, the latest guidance by OMB, if private individuals or firms receive the primary benefits of the government service and public benefits are “incidental,” then user fees could be charged for the full costs of providing the service. Because the IOAA and OMB’s guidance do not define “independent” or “incidental” public benefits, interpretations of these criteria have changed over time. For example, from the inception of the IOAA, the Food and Drug Administration (FDA) maintained that the public was the primary beneficiary of its services and that an independent public interest was involved, which precluded a user fee. However, FDA has recently argued that various identifiable private recipients are the primary beneficiaries of some of its services, for example, reviewing applications for new human drugs, and that the existence of incidental public benefits does not preclude charging a user fee.

To assist the agencies in determining when user fees are appropriate, in 1959 the Bureau of the Budget (now OMB) issued Circular A-25, which contained guidance for assessing user fees.
The circular, last revised by OMB in 1993, states that “a user charge will be assessed against each identifiable recipient for special benefits derived from federal activities beyond those received by the general public.” According to OMB, a special benefit will be considered to accrue, and a user charge will be imposed, when the government service

- enables the beneficiary to obtain more immediate or substantial gains or values (which may or may not be measurable in monetary terms) than those that accrue to the general public (e.g., receiving a patent, an insurance or guarantee provision, or a license to carry on a specific activity or business);
- provides business stability or contributes to public confidence in the business activity of the beneficiary; or
- responds to the request of or is provided for the convenience of the service recipient and is beyond the service regularly received by other members of the same industry or by the general public (e.g., receiving a passport, a visa, or a customs inspection after regular duty hours).

In determining the user fee amount, the circular states that “full costs” should be charged. Full costs include (1) direct and indirect personnel costs, including salaries and expenses for fringe benefits such as medical insurance and retirement; (2) physical overhead, consulting, and other indirect costs, including costs for utilities, insurance, travel, and rents; (3) management and supervisory costs; and (4) the costs of enforcement, research, regulation, and the establishment of standards. In some cases, the government supplies a service that provides a special benefit to an identifiable recipient and also provides a benefit to the general public. According to Circular A-25, when the public obtains benefits as a necessary consequence of an agency’s provision of special benefits to an identifiable recipient (i.e., the public benefits are not independent of, but merely incidental to, the special benefits), an agency need not allocate any costs to the public and should seek to recover from the identifiable recipient the full costs of providing the service.

The federal government provides food-related services in four general categories—premarket reviews, regulatory compliance, import or export activities, and essential support. These services are provided to individuals, firms, and industries by six federal agencies in three departments. Both producers and consumers of agricultural products and commodities, such as beef, seafood, grain, vegetables, food additives, and animal drugs, benefit from these services.

The food-related services provided by the federal government can be grouped into four general categories. Premarket reviews encompass a number of activities that take place before a product or commodity can be sold to a wholesaler or consumer and include (1) product approvals that allow companies to market specific products, for example, animal drugs, after they have been determined to be safe and effective; (2) quality grading to determine that certain commodities such as grain, beef, and some fruits meet established standards for quality and condition; and (3) permit issuance for activities such as the use of experimental biotechnological techniques. Compliance inspections are aimed at ensuring that regulated firms adhere to all applicable laws and regulations regarding product safety. For example, federal agencies need to periodically inspect manufacturing facilities and procedures to ensure the safety of food-related products.
Import inspections and export certifications are made so that the government can attest to the quality and safety of products in international trade. For example, some countries to which the United States exports seafood require that the product be accompanied by a health certificate. Food-related products that are imported into the United States are inspected to ensure that they are safe and free of agricultural diseases and pests. Essential support activities, such as standard setting and laboratory analysis, support the government’s main inspection, grading, and compliance programs. Without these support activities, the programs could not fully operate. For example, grading the quality of a food-related product requires a set of standards specifying the desired characteristics of the product, and analyzing meat and poultry for the presence of pathogens and other contaminants requires laboratory services.

Six federal agencies in three different departments provide food-related services. These agencies are FDA in the Department of Health and Human Services; the National Marine Fisheries Service (NMFS) in the Department of Commerce; and the Food Safety and Inspection Service (FSIS), Agricultural Marketing Service (AMS), Grain Inspection, Packers and Stockyards Administration (GIPSA), and Animal and Plant Health Inspection Service (APHIS) in the Department of Agriculture (USDA).

FDA provides food-related services in two primary areas—premarket reviews and compliance inspections. The agency requires that new animal drugs be safe and effective for their intended use and that drug residues in animal tissue not be harmful for human consumption. Food additives and colorings are reviewed primarily for safety. Before marketing a new animal drug, food additive, or color additive, sponsors are responsible for demonstrating the safety and, in the case of animal drugs, the effectiveness of their products by conducting studies and submitting data to FDA as part of a review process. FDA is responsible for reviewing these data and approving or denying the application. FDA also inspects domestic and imported food products (excluding meat, poultry, and some egg products) to ensure safety, wholesomeness, and accurate labeling. FDA performs a wide variety of compliance inspection activities. Among these activities, FDA periodically inspects the manufacturers and importers of food products such as cheese, canned food, and seafood to ensure that they are not contaminated with pesticides, filth, or pathogens, such as salmonella. Scientists analyze samples in FDA field laboratories, and compliance officers and others conduct enforcement actions when necessary.

NMFS administers a voluntary seafood inspection and certification program. The purpose of the seafood inspection program is to facilitate consistent distribution of safe, wholesome, and properly labeled fishery products of designated quality. The inspection provides assurances of safety, wholesomeness, proper labeling, and quality of seafood to consumers and the seafood industry. NMFS conducts seafood inspection and grading activities for a wide variety of clients, including harvesters, processors, and retailers. Products inspected and certified by NMFS can bear one of several official marks, such as “U.S. Grade A.”

FSIS ensures that meat, poultry, and some egg products moving in interstate and foreign commerce are safe, wholesome, and correctly marked, labeled, and packaged.
Meat and poultry slaughterhouses receive continuous carcass-by-carcass inspections, while processing plants are inspected daily. Egg products processing plants are also subject to continuous federal inspection. Inspectors collect and send product samples to FSIS laboratories that test the samples for the presence of chemical residues or pathogens. The agency also inspects imported and exported meat, poultry, and egg products to ensure that they meet U.S. standards.

AMS is responsible for inspecting and grading agricultural commodities and products, such as fruits, vegetables, meat, and poultry, as an aid in marketing these commodities. AMS also administers marketing agreements and orders that are designed to stabilize market conditions for milk and certain fruits and vegetables and improve producers’ revenues. In addition, the agency promotes fair trading practices in the sale of fresh and frozen fruits and vegetables.

GIPSA facilitates the marketing of grain, lentils, and related commodities by ensuring uniform inspection and weighing, establishing descriptive standards, and certifying quality. GIPSA also helps ensure fair business practices in the livestock, meat, and poultry industries by guarding against fraudulent practices and providing payment protection.

APHIS has responsibility for inspecting plants and animals within the country and those that are being imported to or exported from the United States to ensure that they do not spread agricultural pests and animal diseases. APHIS also regulates the development and use of genetically modified organisms and veterinary biological products by issuing permits and licenses for these activities. In addition, the agency is responsible for controlling the damage caused by wildlife to agricultural interests, natural resources, and human health, safety, and property.

Of the approximately $1.6 billion available to the six agencies in fiscal year 1995 for food-related services, about $411 million was provided by user fees. The other nearly $1.2 billion was provided through general fund appropriations. The percentage of user fee funding varied by agency and the type of service provided. Certain types of activities, for example grading, were generally funded through user fees, whereas the costs of other activities, such as compliance inspections, generally were funded through general fund appropriations. The percentage of an agency’s food-related services funded through user charges, as shown in table 1, ranged from a high of 96 percent for NMFS to a low of 1 percent for FDA. Some agencies, such as NMFS, AMS, and GIPSA, received the majority of their funding from user fees while others, such as FDA and FSIS, did not. For example, in 1995, $116 million, or 67 percent, of AMS’ funding came from user fees, mostly for grading services, while FSIS received about $84 million, or 14 percent, of its funds through user fees, mostly charges for overtime incurred on inspections of meat, poultry, and egg products.

Certain types of food-related service costs are generally funded through user fees, other types of service costs are sometimes funded by fees, and still other services are generally provided without charge to the service recipient. The costs of the federal premarket service of grading are the ones that are most likely to be funded through user fees. Drawing on the authority provided in the laws establishing the programs, AMS and GIPSA charge fees for the direct grading services they provide.
For new product reviews, another premarket activity, fees are charged for the review of some products but not for others. For example, FDA charges some user fees for new human drug and color additive reviews but does not charge for new animal drug or food additive reviews. In addition, APHIS charges for the inspection of imported animal products, but FDA and FSIS do not charge for their import inspections. Mandatory compliance activities, conducted during regular duty hours, were the primary food-related activities not funded by user fees at the six agencies we reviewed. These activities include (1) inspecting the safety of meat and poultry, (2) monitoring the conditions under which food is manufactured, and (3) investigating violations of grain laws. (See app. I for the food-related services provided by each agency we reviewed and their funding sources during fiscal year 1995.)

According to our review of selected food-related services, about $723 million in additional user fees could potentially have been charged to program beneficiaries in fiscal year 1995. Applying OMB’s criteria in Circular A-25, we determined that these user fees could have been assessed in three areas. For agencies that collect partial user fees for certain services, additional fees of about $22.5 million could have been charged to cover the full costs of providing the service. About $49 million could have been charged by assessing user fees consistently for similar services, and $651.5 million could have been charged for certain other services that are currently provided without charge.

For certain food-related services, the federal government charges user fees that are less than the full costs of providing the service. For example, some user fees are based on calculations that exclude the cost of activities essential to the service, such as the cost associated with developing program standards. For the food-related services that we reviewed, basing user fees on the full costs of providing the service would have allowed the federal government to charge about $22.5 million more in fiscal year 1995.

Circular A-25 states that agencies charging user fees should recover the government’s full costs of providing the service, including both direct and indirect costs. According to the circular, indirect costs include, among other things, essential support expenses for activities such as establishing program standards. Establishing standards is essential to assessing the quality of an agricultural product or commodity. For example, in grading grain, GIPSA develops and maintains quality standards for water content, hardness, protein content, and other factors that determine the value of the grain on the market. In a 1981 report, we stated that public funding for standard-setting activities for grading services is appropriate as long as the standards primarily benefit commodity marketing industries as a whole and not just those requesting grading services. This view was based on OMB’s original 1959 version of Circular A-25, which stated that “no charge should be made for services when the identification of the ultimate beneficiary is obscure and the service can be primarily considered as benefitting broadly the general public.” However, the guidance in the 1993 revision of the circular allows agencies to charge service beneficiaries the full costs of providing the service, even though the public receives incidental benefits.
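A minimal sketch of the full-cost arithmetic that Circular A-25 calls for follows. All of the dollar figures and the hours base are hypothetical; GIPSA's actual estimates for grain grading appear in the discussion below.

```python
# Sketch of an A-25 "full cost" fee: direct costs plus indirect costs,
# including essential support such as standard setting, recovered over the
# units of service delivered. All figures are hypothetical.

direct_grading_costs = 30_000_000    # inspectors' salaries, travel, equipment
other_indirect_costs = 5_000_000     # management, overhead, enforcement
standard_setting_costs = 4_000_000   # developing and maintaining standards
billable_hours = 1_200_000           # hours of grading service delivered

fee_excluding_standards = (direct_grading_costs + other_indirect_costs) / billable_hours
fee_full_cost = (direct_grading_costs + other_indirect_costs
                 + standard_setting_costs) / billable_hours

print(f"Fee excluding standard setting: ${fee_excluding_standards:.2f}/hour")
print(f"Full-cost fee:                  ${fee_full_cost:.2f}/hour")
print(f"Increase: {100 * (fee_full_cost / fee_excluding_standards - 1):.0f}%")
```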
Under A-25’s guidance, for example, if the grain industry as a whole obtains benefits that are not independent of, but rather incidental to, the special benefits provided to grading customers, an agency should recover from these customers the full costs of providing the benefits. While GIPSA includes the costs of its direct grading activities in its user fee calculations, it excludes the costs of developing and maintaining the necessary standards—about $4.8 million in fiscal year 1995. Believing that standard-setting costs should be included in its user fee calculations, GIPSA requested congressional authority to charge for these activities in the President’s fiscal year 1997 budget. However, the Congress did not approve GIPSA’s request. GIPSA has again proposed charging user fees for standard-setting activities in the fiscal year 1998 budget.

In opposition to GIPSA’s proposal, the National Grain and Feed Association said (1) standard-setting activities have broad societal benefits and therefore should receive public funds, (2) new user fees are likely to fall disproportionately on exporters and effectively become a tax on U.S. agricultural exports, and (3) new user fees would weaken the official inspection system because higher costs would drive some domestic grain inspection customers away from the system. The last two concerns relate to the fact that grain for export is required to be inspected by GIPSA’s official grading system, but grain for the domestic market is not.

GIPSA estimated that including standards development costs in its user fee calculations for grading services would increase the fees by about 13 percent, for example, from $31.50 per hour to $35.60 per hour. However, to lessen the burden on small firms, a fee schedule could be developed that would vary, for example, on the basis of the volume of grain inspected. According to a GIPSA official, for fiscal year 1995, including the cost of standard setting in the user fee calculations would increase the grain industry’s total cost by less than 1 percent. (App. II contains additional examples of programs with user fees that do not fully cover the costs of providing services.)

The federal user fee structure is inconsistent among similar food-related services. For example, user fees are charged for premarket review of human drugs but not for animal drugs. Furthermore, while most air passengers and commercial vehicles entering the United States are charged a user fee to cover the government’s cost of agriculture inspections at the nation’s borders, some are not. Eliminating disparities in how user fees are charged for similar services would have allowed the federal government to charge an additional $49 million in fees in fiscal year 1995.

To protect public health and the nation’s economic interests, federal review and approval is required for certain products before they can be marketed. FDA, for example, approves the efficacy and safety of human and animal drugs. The agency charges some user fees for new human drug reviews but does not charge for new animal drug reviews. The general approaches used for reviewing and approving human and animal drugs are very similar. Sponsors of both new human and animal drugs are responsible for demonstrating the safety and effectiveness of their products by conducting studies and submitting data to FDA as part of the premarket review process.
In animal drug studies, the sponsors are required to demonstrate safety both to the animal and to humans consuming edible tissue of the treated animal. FDA is then responsible for reviewing the safety and effectiveness data and approving or denying the application to market the new drugs. FDA reviews applications both for animal drugs intended for use in food-producing animals and for drugs for pets and other non-food-producing animals.

In the Prescription Drug User Fee Act of 1992, the Congress approved some user fees for human drug reviews, stipulating that the revenues generated were to be used to help reduce the length of time needed to review new drug applications. In fiscal year 1995, FDA charged about $71 million in user fees for human drug reviews. However, no fees are charged for the review and approval of animal drugs. In fiscal year 1995, FDA received about $20.2 million in general fund appropriations to review new animal drugs.

FDA’s review and approval of new animal drugs provides producers of these drugs economic benefits by (1) allowing them to market approved drugs and keeping unapproved animal drugs off the market, (2) increasing public confidence in the efficacy and safety of approved drugs, and (3) reducing the manufacturer’s exposure to liabilities associated with unknown side effects. By approving a firm’s new animal drug application, the government gives the applicant in effect a “license” to sell its drug and potentially derive substantial profits. The approval also provides the public with assurances that the drug is safe and will not leave residues that are harmful to humans.

In a 1990 report, the Department of Health and Human Services’ Inspector General concluded that regulated industries, including the animal drug industry, should contribute to the cost of ensuring the safety and effectiveness of their products because they receive benefits from FDA’s regulatory activities. The report stated: “In light of existing constraints on federal resources and the importance of regulating the safety and efficacy of animal drugs, providing FDA with specific authority to charge user fees for approving new animal drugs may be a viable alternative to appropriations for funding this program.”

In a November 1994 study on the new animal drug review process, FDA found that by collecting some user fees the agency could accelerate the review process, which would result in speedier market entry and, therefore, increased financial benefit to the sponsors of new animal drugs. The study estimated that the financial benefits to industry from an improved animal drug review process would range from $16 million to $55 million.

Since publishing its 1994 study, FDA has not proposed charging user fees for animal drug reviews, and the Congress has taken no action on the matter. According to FDA, some administrative and procedural changes are being made to reduce the time required to review and approve new animal drug applications. In addition, the animal health industry has come to believe that many improvements to the review process can be made without charging fees to hire additional reviewers. Finally, according to an OMB official, currently the Congress and FDA may view other potential candidates for user fees, such as medical device reviews, as a higher priority. The Animal Health Institute, which represents the interests of manufacturers of animal health products, opposes user fees for the review of new animal drugs.
If user fees were instituted, the Institute maintains, the funds collected should be used only to meet specific goals for improving the review process. The Congress stipulated that goals for reducing review time should be established and met when it approved some user fees for human drug reviews.

In addition to inconsistencies in user fees for premarket reviews, user fees are not charged consistently for agricultural inspection activities at the nation’s borders and ports of entry. In 1991, APHIS began charging user fees to fund the inspection of international air passengers and commercial conveyances, such as aircraft, vessels, loaded railcars, and trucks, for the presence of plant pests and animal diseases. All air passengers and commercial vehicles entering the United States are charged a user fee—except those originating in, and entering from, Canada. Because Canadian agricultural products posed less risk to U.S. agriculture and thus required less frequent inspections, air passengers and commercial vehicles entering from Canada were excluded from the user fee charges.

After reviewing the situation, in 1996 APHIS considered charging user fees for air passengers and commercial vehicles entering from Canada but did not go forward with such a proposal. According to APHIS officials, the proposal was not made because (1) agricultural products from Canada still constitute relatively little risk to U.S. agriculture, (2) the user fees would generate little revenue, and (3) the user fees may induce Canada to reciprocate by charging U.S. air passengers and commercial vehicles crossing into Canada a similar fee. While we do not know whether the Canadian government would reciprocate, we noted in our review of APHIS’ budget documents that risks to U.S. agriculture have increased and thus more inspections are required of products entering the United States from Canada. According to APHIS’ explanatory notes for its fiscal year 1997 budget, “increased traffic of untreated Asian and European agricultural products into the United States through Canada has created the need for increased inspections to reduce the risk of introducing exotic agricultural pests into this country.” Over the past 3 years, the value of agricultural imports from Canada has increased about 29 percent, from $5.2 billion to $6.7 billion.

We estimate that charging air passengers and commercial vehicles entering from Canada the same fees charged for entries from all other foreign countries would have generated approximately $15.1 million in new revenues from the 1995 border crossings. The costs of collecting such fees would be minimal because the collection mechanism is already in place. However, some of the new revenues may be offset by additional program costs related to increased inspections. (App. III contains a discussion of other food-related services provided without charge that are similar to federal services for which user fees are charged.)

The federal government provides many food-related regulatory services at no cost to the beneficiaries. For example, FSIS does not charge user fees for compliance inspections of meat and poultry slaughter and processing plants conducted during regular duty hours. In addition, AMS does not charge user fees to producers of specific commodities such as milk, fruit, and vegetables for establishing and overseeing marketing agreements and orders.
Federal marketing agreements and marketing orders, established at the request of producers, set parameters aimed at stabilizing market conditions and improving producers’ revenues.

Another $651.5 million in user fees could have been assessed in fiscal year 1995 if identifiable beneficiaries had been charged for food-related services that are currently provided without charge. Federal agencies routinely inspect food firms to help ensure that foods and food-related products are safe, wholesome, and properly labeled, according to federal standards. In some cases, agencies base the frequency and intensity of these inspections on their resources and assessments of the public risk associated with the firm, process, or product. In others, the inspection frequency is legally mandated. For example, current federal law requires that federal inspectors (1) examine each meat and poultry carcass slaughtered and (2) visit each meat and poultry processing plant at least daily. To meet these mandated inspection requirements in fiscal year 1995, FSIS inspected about 6,400 meat and poultry plants at a cost of about $523 million—the largest single federal food-related service cost. Currently, FSIS provides meat and poultry inspections at no cost during regularly scheduled shifts, which at larger plants may mean two or three shifts per day. For unapproved and unscheduled shifts, FSIS has the authority to charge for overtime inspections. Under this authority, FSIS was able to recover about $77 million of the $523 million it spent on mandatory meat and poultry inspections in fiscal year 1995.

From 1986 to 1988, FSIS requested in its annual budget submissions the authority to charge user fees for all of its meat and poultry inspections. From 1994 to 1997, FSIS limited its request to the authority to charge user fees for any inspections beyond a single scheduled and approved primary shift. The Congress has not approved any of these requests. In its fiscal year 1998 budget submission, FSIS expanded its user fee proposal to charge for the salaries, benefits, and related costs associated with in-plant inspections of meat and poultry at all establishments inspected by the agency. FSIS estimated that it could charge about $381 million in fees for its direct inspection services. The proposal does not include user fees for indirect costs such as the agency’s administrative, supervisory, and other overhead costs.

A variety of arguments have been raised in favor of charging user fees for mandated meat and poultry inspections. Those in favor of charging user fees argue that these inspections benefit industry by (1) providing inspected firms an economic advantage, (2) helping ensure public confidence in the safety and wholesomeness of the product, (3) performing quality control activities that are generally thought of as a plant’s responsibility, and (4) protecting against unfair competition from firms that might not otherwise perform adequate safety inspections in an attempt to lower their costs.

Meat and poultry firms are not charged for FSIS’ inspections but benefit economically in several ways. First, firms must be federally inspected before they can market their products in interstate and foreign commerce. Second, firms marketing products that are not regulated by the meat and poultry acts, such as buffalo, venison, rabbit, emu, ostrich, and quail, must pay to receive an examination by FSIS and an inspection stamp from USDA. (In fiscal year 1995, FSIS charged $1.1 million for these requested inspections.)
Finally, FSIS officials believe that many people make their choice of meats based on the presence of a USDA stamp of inspection. In a 1981 report, we stated that “USDA’s inspections of meat and poultry processing plant operations clearly provide broad public benefits; therefore, appropriations funding is appropriate.” This view was based on OMB’s original 1959 version of Circular A-25, which stated that “no charge should be made for services when the identification of the ultimate beneficiary is obscure and the service can be primarily considered as benefitting broadly the general public.” However, the guidance in the 1993 revision of the circular allows agencies to charge service beneficiaries the full costs of providing the service even though the public receives incidental benefits. In the case of meat and poultry processors, the public benefit of safer meat and poultry, some believe, is incidental to the special economic benefits, such as increased marketability, that accrue to the processors.

In a February 1996 letter to the Chairman and Ranking Minority Member of the House Agriculture Committee, we noted that federal meat and poultry inspections benefit industry by helping ensure public confidence in the safety and wholesomeness of its products. For example, federal inspections benefit industry by reducing the adverse publicity and potential liability costs that accompany outbreaks of foodborne illness. In its April 1996 report on FSIS’ inspections, USDA’s Office of Inspector General also concluded that industry derived special benefits, including increased public confidence, from federal inspections of meat and poultry. The report recommended that FSIS recover the cost of these inspections through user fees. Moreover, not charging for meat and poultry inspections also subsidizes plants’ quality control activities. We and others have testified in congressional hearings that industry should be more responsible for ensuring the safety and quality of its products. In this regard, FSIS is now considering a plan to allow plant employees to conduct some inspection activities with FSIS’ oversight.

Historically, the Congress has believed that meat inspection costs, with the exception of overtime costs and voluntary services, should be borne by the federal government, because the public was considered the primary beneficiary. In addition, the American Veterinary Medical Association and the American Meat Institute oppose user fees for meat and poultry inspection. Veterinary Association officials said that they believe the public is the primary beneficiary of federal meat and poultry inspections and therefore user fees for such services are not appropriate. A position paper endorsed by a coalition of groups, including the Meat Institute, states, among other things, that user fees would (1) erode consumer confidence, (2) reduce the government’s incentive for improving the efficiency of inspections, (3) place the meat and poultry industry at a competitive disadvantage with foreign countries, and (4) place a serious burden on small businesses. However, other countries, including major meat producers and exporters such as New Zealand and Australia, charge for government meat inspection, and Canada plans to institute some user fees for meat and poultry inspections in 1997. Moreover, according to FSIS, if industry passed inspection costs along to consumers, the additional cost per pound would be negligible.
FSIS estimated that a user fee covering all its inspection costs would increase consumer prices by an average of 0.6 cents per pound. USDA’s Inspector General reported in April 1996 that for the small plants it reviewed, the cost of inspection services would be about 5 cents per pound. However, according to FSIS officials, fee schedules could be developed that would lessen the burden on small plants.

In addition to inspection activities at FSIS, the federal government provides a number of other food-related services without charge to identifiable beneficiaries. One example is the marketing agreements and orders program at AMS. Milk marketing agreements and orders stabilize market conditions to ensure an adequate supply of milk by establishing the prices handlers pay to dairy producers. Fruit and vegetable marketing orders (1) promote adherence to quality and maturity standards, (2) establish orderly marketing through controls on the amount of a commodity available for sale, and (3) support appropriate research and development projects.

When growers or handlers submit a proposal for a new marketing order to USDA, AMS analyzes the proposal and, if it appears feasible, holds a public hearing. On the basis of AMS’ analysis and the hearing evidence, USDA issues a recommended decision along with the proposed order. After considering any comments on the proposed order, USDA issues the final order. AMS then holds a referendum among the producers, two-thirds of whom must approve the order before it is put into effect. After marketing orders are approved and put into effect, AMS monitors the operation of each order by, among other things, processing formal and informal amendments to the order and promoting compliance with it by taking legal sanctions against violators. These activities were funded with $10 million in general fund appropriations in fiscal year 1995. The day-to-day administration of marketing agreements and orders is conducted at the local level by boards, committees, and market administration officers. The costs of local administration are paid for through fees assessed on the producers and handlers covered under the agreements and orders.

AMS believes that marketing agreements and orders provide benefits to producers, handlers, and consumers. These benefits include stable markets, fair prices, and dependable supplies of milk and fruit. Nevertheless, marketing orders are provided at the request of particular groups of agricultural producers to benefit their members. According to OMB’s Circular A-25, when a government service responds to the request of or is provided for the convenience of the service recipient, a special benefit will be considered to accrue and a user fee should be imposed. AMS estimates that charging user fees for the approximately $10 million obligated annually for this program would have no significant impact on retail prices. While the user fees charged would vary among the numerous commodities covered, the total charges would represent about 0.05 percent of the commodities’ sale value in 1995. For example, user fees for milk marketing orders would amount to about 0.03 cents per gallon of milk.

Neither the Congress nor the affected industries have supported user fees for marketing orders. The Congress has denied AMS’ request to charge user fees for these services for the last 14 years. Recently, AMS again requested user fee authority for these services in its fiscal year 1998 budget submission.
The industry groups that we contacted either favored continued public funding or had no opinion on funding. For example, the National Milk Producers Federation, one of the many groups that represent growers and handlers covered by marketing agreements and orders, prefers public funding for the services. Only if user fees were necessary to continue the program would the federation support them. The United Fresh Fruit and Vegetable Association had no position on who should pay for marketing agreements and orders. (App. IV contains a discussion of other food-related services that benefit specific firms or industries but are provided without charge.)

Opportunities exist to charge additional user fees for some federal food-related activities. Such fees would (1) provide new federal revenues that could be used to help reduce the federal deficit or increase program services and (2) eliminate inconsistencies that currently exist in charging for federally provided services. In our view, given the guidance provided by Circular A-25, the case for charging user fees is the strongest where the government does not charge the full costs of providing the service and where current user fees are applied inconsistently. However, charging user fees for regulatory compliance inspections from which firms and industries derive specific benefits, such as mandated meat and poultry inspections, would produce the most revenue.

Concerns about the appropriateness of user fees for food-related services center on three primary issues. First, in addition to benefitting specific firms or industries, these services, for example, inspections of imported foods, often also benefit consumers. Second, increased user fees for certain food-related services could have an adverse economic impact on small producers. And third, some believe that the federal government should not charge for activities, such as meat and poultry inspections, that are required by law.

We provided a draft of this report to NMFS, AMS, APHIS, FSIS, GIPSA, and FDA for review and comment. The Department of Commerce, commenting for NMFS, generally concurred with the accuracy of the report but indicated that funding for the National Seafood Inspection Laboratory should not be included in the seafood inspection program’s funding because the laboratory is not an essential part of the inspection program (see app. V). We concur and have deleted discussion of the laboratory from the final report.

We met with officials from AMS, APHIS, FSIS, and GIPSA to obtain their comments. Agency officials providing comments included (1) AMS’ Assistant Deputy Administrator, Executive Resources Office, as well as program officials responsible for fruit and vegetable, dairy, seed, and compliance activities; (2) APHIS’ Deputy Administrator for Management and Budget; (3) FSIS’ Associate Administrator for Field Operations and Director of the Budget and Finance Division; and (4) GIPSA’s Administrator and Deputy Administrators for grain inspection and packers and stockyards. These officials generally agreed with the report’s findings and conclusions and provided us with clarifying technical comments that we incorporated into the report as appropriate. FDA had no comments other than technical suggestions and clarifications that we also incorporated into the report as appropriate.
To identify and evaluate opportunities to increase the share of funding paid for by beneficiaries of food-related services, we reviewed various studies and literature on user fees and interviewed (1) budget and program officials at OMB and the six federal agencies that provide food-related services and (2) representatives of industry groups that would be affected by additional user fees. We also analyzed agencies’ user fee proposals and budget requests, congressional reports on agency appropriations, and industry position papers on user fees. We asked the agencies to provide user fee and appropriations funding data for their food-related activities. We did not verify the accuracy of these data. Our work was conducted from April 1996 through January 1997 in accordance with generally accepted government auditing standards. (See app. VI for a more detailed explanation of our methodology.)

We are sending copies of this report to the Secretaries of Agriculture, Commerce, and Health and Human Services. We will also make copies available to others on request. Major contributors to this report are listed in appendix VII.

[Table omitted: fiscal year 1995 funding and fees for the food-related services reviewed, including processed fruit and vegetable grading, fresh fruit and vegetable grading, and licensing/reparations (PACA). Table notes: Fees received from petitions for the review of new color additives totaled $12,000. Fees received from export certificates totaled $3,650 and are not kept by the agency but go to the U.S. Treasury.]

Federal agencies charge user fees for a number of food-related services, but the fee calculations for some of these services exclude the support costs that are essential to providing them. In addition to the grain standard-setting activities discussed in the body of this report, several other food-related services charge user fees that do not take all program costs into account. Specifically, user fees do not take into account the following costs: (1) the Grain Inspection, Packers and Stockyards Administration’s (GIPSA) costs for developing grain measurement methods and compliance activities; (2) the National Marine Fisheries Service’s (NMFS) costs for developing standards and conducting other essential activities related to its seafood inspection program; (3) the Food and Drug Administration’s (FDA) full costs for issuing food product export certificates and reviewing color additives for foods, drugs, cosmetics, and medical devices; and (4) the Animal and Plant Health Inspection Service’s (APHIS) costs for controlling the damage that wild animals do to agricultural interests, natural resources, and human health, safety, and property. Table II.1 lists food-related services that we reviewed for which the user fees do not account for the full costs. This appendix provides a general description of these services, the level of federal funding they receive, and arguments for and against increasing user fees to account for the full costs.

GIPSA conducts a number of activities that are essential to its grain inspection and grading services. These activities include (1) developing and implementing new methods for measuring grain quality and (2) ensuring the quality of the program through regulatory compliance activities. Currently, none of the costs of providing these services are included in GIPSA’s user fee calculation for inspection and grading services. GIPSA’s grain inspection and grading program was established to facilitate the marketing of U.S. grains, oilseeds, rice, and related commodities. GIPSA, as required by the U.S.
Grain Standards Act and the Agricultural Marketing Act, inspects and weighs almost all exported grain shipments. While grain for domestic use is not required to be inspected and weighed, GIPSA provides these services on a voluntary basis. GIPSA charges customers with contracts a user fee of $31.50 per hour for its services. However, the user fee calculation was based only on direct program activities, such as inspection and weighing, and did not include the $1.5 million GIPSA received in 1995 to identify, evaluate, and implement new or improved methods for measuring grain quality. GIPSA’s methods development activities help ensure the continued integrity of inspection certificates and, as such, are essential to its grain inspection and grading program. According to the Office of Management and Budget’s Circular A-25, user fees should recover the full costs of providing the service, including all direct and indirect costs.

The National Grain and Feed Association opposes new user fees for activities that support GIPSA’s grain grading services, arguing that (1) appropriated funds are the only fair way to fund these activities because of their broad societal benefits, (2) new user fees are likely to fall disproportionately on exporters and effectively become a tax on U.S. agricultural exports, and (3) new user fees would weaken the official inspection system because higher costs would drive some domestic grain inspection customers away from voluntary inspections. The National Association of Wheat Growers voiced similar concerns. The Association argued that new fees are likely to fall disproportionately on grain exporters. To compensate for the decrease in revenues associated with reduced grain exports, GIPSA would then need to further increase fees for its remaining customers to fund future operations.

GIPSA spends about $4.7 million annually ensuring the quality of its grading program. GIPSA investigates violations of applicable grain laws, licenses personnel to grade grain, and maintains an international monitoring program that interacts with foreign governments and responds to complaints concerning the quality and quantity of U.S. grain shipments. GIPSA also performs management evaluations and procedural reviews of its field offices and compliance reviews of the state and private agencies designated to inspect grain. In addition to the grain grading work of its own employees, GIPSA has designated 17 state and 48 private agencies with authority to officially inspect and weigh domestic grain. Eight of the 17 states, called delegated states, also have the authority to inspect and weigh export grain. Criteria for becoming a designated or delegated state or private agency for GIPSA include (1) having licensed personnel, (2) providing training to maintain skills, (3) using approved equipment, and (4) keeping proper records. State and private agencies operate under a 3-year agreement that can be renewed.

While GIPSA charges user fees for its direct grain grading activities, no fees are charged for compliance activities such as (1) approving the states and private companies that want to participate in the program, (2) licensing grain inspectors and weighers, or (3) investigating violations of applicable grain laws. The grain industry benefits from GIPSA’s compliance activities, because they give grain purchasers confidence that the grain is inspected and weighed properly, meets U.S. grain standards, and can be sold for export.
Thus, these activities are part of the full costs of grain inspection, as defined by OMB Circular A-25. Neither GIPSA nor the grain industry supports charging user fees for GIPSA’s compliance activities. GIPSA officials said that the agency has not proposed user fees for compliance activities because (1) it is difficult to identify specific beneficiaries and (2) increased user fees could decrease the number of voluntary inspections performed and thereby reduce agency revenues. Just as they opposed user fees for standard setting and methods development activities, industry groups also oppose user fees for GIPSA’s compliance activities. Table II.2 shows how grain inspection user fees would increase to cover the full costs of the service, according to GIPSA. If all costs were included, fees would increase from $31.50 per hour to $40.96. Charging user fees to cover GIPSA’s full costs should have a minimal impact on the cost of grain. The total cost of grain grading user fees, including the cost of standard setting, methods development, and compliance activities, represented less than 0.3 percent of the value of grain exports in fiscal year 1995. NMFS’ seafood inspection program includes a number of activities essential to its voluntary seafood inspection and grading services. NMFS develops and maintains processing, inspection, and grading standards. NMFS develops and supports a sensory science program that uses touch, sight, and smell to determine the wholesomeness and quality of fish and fish products. As part of this program, the agency develops standard definitions and terms of seafood freshness and decay and trains inspectors on how to consistently identify and describe these factors. NMFS also collects, translates, analyzes, codifies, and disseminates seafood import requirements of foreign governments and buyers and develops seafood export certificates that meet these requirements. NMFS received about $620,000 in general fund appropriations in fiscal year 1995 to perform these services. Currently, none of the costs of providing these essential support services are included in NMFS’ user fee calculations for inspection and grading services. Program standards and sensory evaluation techniques are essential elements of NMFS’ seafood inspection and grading services, because without them NMFS inspectors would not be able to attest to the quality and safety of the seafood they examine. Furthermore, NMFS’ foreign requirements work directly benefits seafood exporters. All of these costs fall under the Circular A-25 definition of full costs. Under its original proposal to operate a stand-alone, self-supporting, “performance-based” organization, NMFS included the cost of conducting standard setting, sensory evaluation, and foreign requirements activities in its user fee calculation. In November 1996, NMFS revised its proposal to include only the costs of sensory evaluation in the user fee calculation. The revised approach responded to customer arguments that the benefits of standard-setting and foreign requirements activities did not accrue solely to inspection customers but rather were broadly shared by the seafood industry. However, other agencies have concluded that essential support costs, such as developing and maintaining standards, should be included in the user fee calculation. The Agricultural Marketing Service includes the cost of standardization activities in its user fee calculations, and GIPSA has proposed including such costs in its user fee calculations. 
Seafood inspection and grading service customers have been generally supportive of NMFS’ efforts to create a performance-based organization. However, because the program is voluntary, if customers find the fees too high for the benefits received, they may choose not to participate. NMFS officials said that including sensory evaluation costs in the user fee calculation should not increase inspection and grading user fees, because under their performance-based organization proposal some administrative overhead costs would be eliminated.

Apart from omitting essential support costs, such as standard setting, from user fee calculations, some federal agencies charge only a nominal fee that bears little relation to the actual costs of providing the service. For example, FDA does not cover the costs of providing export certifications or reviewing color additives. FDA could obtain about $730,000 annually if it charged fees that were sufficient to cover the costs of providing these services.

Some foreign countries require that food-related products exported to their countries be accompanied by a certificate from FDA. FDA’s certificates of export generally indicate that the product can be freely sold in the United States and that there are no known safety concerns about the product or the company that manufactures it. FDA issues certificates after determining that the product, among other things, meets the specifications of the foreign purchaser and is not in conflict with the laws of the country to which it is intended for export. The certificate consists of three parts: (1) a letter to the firm explaining FDA’s responsibilities concerning the product, (2) a “to whom it may concern” letter that is intended for the importing country, and (3) the certificate with the Department’s seal and ribbon attesting to the facts in the “to whom it may concern” letter.

Currently, FDA charges a $10 fee for each export certificate it issues for food products. Although FDA does not know exactly what it costs to issue a food-related export certificate, it recently estimated that on average agencywide, it costs about $250 to process an export certificate. Using this figure, we estimate that in fiscal year 1995 FDA spent about $91,250 processing 365 export certificates, but it charged only $3,650 in fees.

In addition, FDA issues export health certificates for seafood being exported to the European Union. These certificates state that the shipment was produced in an establishment covered under a regulatory oversight program equivalent to those in place in the European Union. FDA issued 8,884 seafood export health certificates in fiscal year 1996. FDA does not have any estimate of the cost of processing these certificates. The agency does not charge user fees for seafood export certificates, although the National Marine Fisheries Service’s seafood inspection program does.

OMB’s Circular A-25 states that agencies charging user fees should recover from the service beneficiary the government’s full costs of providing the service, including both direct and indirect costs. Export certificate recipients benefit from FDA’s export certification, for example, by being able to market their products in international commerce. The benefits related to export certification have been recognized by NMFS, the Food Safety and Inspection Service, and APHIS, which all charge user fees for providing export certificates.
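The gap between FDA’s certificate fee and its processing costs is easy to quantify from the figures above; the Python sketch below simply restates that arithmetic.

```python
# Cost recovery for FDA food export certificates, fiscal year 1995,
# using the figures cited above.

certificates = 365   # food export certificates issued
cost_each = 250      # FDA's estimated agencywide processing cost
fee_each = 10        # current fee for food products

cost = certificates * cost_each      # $91,250
collected = certificates * fee_each  # $3,650
print(f"Processing cost: ${cost:,}; fees collected: ${collected:,}")
print(f"Recovery rate: {collected / cost:.0%}")   # -> 4%
```

At a 4-percent recovery rate, nearly all of the cost of a service with clearly identifiable beneficiaries is borne by the general fund.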
FDA is responsible for reviewing and approving new color additives used in foods, drugs, cosmetics, and medical devices. The approval of a color additive is initiated by a petition from the manufacturer to FDA. The petition contains information on the intended use, chemical composition, methods used to produce the colors, and the results of various tests. FDA analyzes the petition and supporting data and determines if the color can be approved. According to FDA, between 1986 and 1995 the agency averaged fewer than five petitions for new color additives each year, which includes amendments to already regulated color additives. Once a color additive is approved and listed by FDA, it can be manufactured by anyone.

FDA charges a fee of $3,000 for petitions requesting the approval of a color additive for use in or on foods only. The fee for this review was set at $3,000 in 1963 and has not changed since. FDA does not know the exact cost of reviewing color additive petitions, but according to the agency, the current fees do not cover the full costs of the review process. FDA estimates spending about $163,000 per food and color additive petition, although this average is heavily weighted by food additive petitions. Using this cost estimate, FDA spent about $652,000 in fiscal year 1995 reviewing four petitions for color additives. FDA is now studying what the actual review costs are for all food and color additive petitions. OMB’s Circular A-25 states that agencies charging user fees should recover from the service beneficiary the government’s full costs of providing the service, including both direct and indirect costs. By approving a petition for a new color additive, the government allows the color additive to be marketed, which may result in a financial gain for the firm or industry that submitted the petition.

The purpose of the animal damage control program, as established under the Animal Damage Control Act of 1931, as amended, is to control damage caused by wildlife to agricultural interests, natural resources, and human health, safety, and property. Efforts to protect livestock from predators, primarily coyotes, constitute one of the major program activities. Livestock protection activities are carried out primarily in the 18 western states. Animal damage control operations were funded with $41.2 million ($20.4 million in nonfederal funds and $20.8 million in federal general fund appropriations) in fiscal year 1995. While program funding varies from state to state, the federal government provided about 51 percent of the fiscal year 1995 funding. The remaining program funds came from state and local governments and program beneficiaries such as grazing boards. For example, funding for fiscal year 1995 program activities in the state of Nevada was 54 percent federal, 39 percent state, 6 percent wool growers association and grazing boards, and 1 percent local (cities and utilities).

The animal damage control program’s livestock protection activities in the 18 western states primarily benefit livestock producers and others who own livestock herds. For example, in Nevada 90 percent of the program consists of livestock protection. Ten sheep operators own about 90 percent of the sheep in Nevada, and in some cases, animal control specialists work full time protecting one sheep company’s herd. As such, proponents of charging user fees argue that the recipients of animal damage control services receive “special benefits” as defined in OMB’s Circular A-25 and should be charged the full costs of providing the service.
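As a quick check on the funding split reported above, the federal share can be recomputed from the component figures; the small difference from the reported 51 percent reflects rounding in the published amounts.

```python
# Federal share of animal damage control funding, fiscal year 1995,
# using the (rounded) component figures cited above.

federal = 20.8e6      # federal general fund appropriations
nonfederal = 20.4e6   # state, local, and beneficiary contributions

total = federal + nonfederal
print(f"Total funding: ${total:,.0f}")           # $41,200,000
print(f"Federal share: {federal / total:.1%}")   # 50.5%, i.e., about 51%
```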
According to APHIS, fully funding livestock protection activities through user fees could have resulted in about $10.1 million in additional revenue in fiscal year 1995. However, in the past when federal funding for these activities decreased, nonfederal contributions decreased as well.

Those opposed to increasing the share of program funding paid for by service beneficiaries have raised several concerns. First, some believe that controlling damage caused by wildlife is inherently a government responsibility because wildlife is a publicly owned resource of the United States. Second, instead of paying for APHIS animal damage control services, some ranchers may try to save money by controlling predators with illegal poisons, thereby creating human health or environmental safety hazards. Finally, predators cross property boundaries, so that if one rancher pays for federal animal damage control services, a neighbor who does not may still benefit. APHIS has not studied the impact on ranchers of charging user fees for the full costs of providing animal damage control services. Assessing the overall impact on ranchers would be difficult, according to an APHIS official, because each state operates and funds its program differently.

The federal government does not consistently charge user fees for similar food-related services. In addition to the premarket review of new animal drugs and border inspections of agricultural products discussed in the body of this report, FDA’s review of new food additives and APHIS’ issuance of licenses and permits for veterinary biologics and biotechnology activities bear no user fees. In contrast, user fees are charged for similar licensing and approval activities, such as FDA’s color additive reviews. Table III.1 lists the food-related services that we reviewed for which no user fees are charged, even though fees are charged for similar services. This appendix provides a general description of these services, the level of federal funding they receive, and arguments for and against recovering their full costs through user fees.

In addition to reviewing and approving new animal drugs, FDA is also responsible, under section 409 of the Federal Food, Drug, and Cosmetic Act, for approving new food additives, such as artificial sweeteners. Sponsors of new food additives must conduct scientific studies to establish the safety of their products. FDA then evaluates the scientific data submitted in support of a petition to approve a new food additive to ensure that it is safe for its intended use. Between 1986 and 1995, FDA received an annual average of 55 petitions for food additive approvals. FDA estimates that it spent about $7.34 million in fiscal year 1995 reviewing and approving food additive petitions. However, according to FDA, the food additive program is being redesigned, and a study is being conducted to more accurately estimate the cost of food additive reviews. FDA believes that a more timely, predictable process would likely cost more than an average of $163,000 per petition. FDA charges no user fees for reviewing petitions for new food additives. In contrast, FDA charges fees for reviewing petitions for new food and drug colors and applications for human drugs. (Discussed previously on pp. 38-39 and p. 12.)

Those who support charging user fees for food additive reviews argue that industry receives special benefits as a result of these reviews.
OMB’s Circular A-25 states that a special benefit exists if a government service “enables the beneficiary to obtain more immediate or substantial gains or values than those that accrue to the general public.” By approving a petition for a new food additive, the government allows the food additive to be marketed, which may result in a substantial financial gain for the firm or industry that submitted the petition. According to an FDA official, the primary incentive a company has for applying for a food additive approval is to “get a corner on the market.” Once the petition has been approved, the company will be able to use the additive before anyone else. That timing gives the company an advantage over its competitors. In addition, FDA’s approval validates the firm’s efforts to produce a safe and effective product and contributes to public confidence in the firm and its products. In a 1987 report, the Department of Health and Human Services’ Inspector General identified FDA’s review and approval of food additive petitions as an activity that had the potential for charging user fees.

The Grocery Manufacturers of America, a trade association representing the food industry, is opposed to user fees because food additive approvals are not proprietary and thus do not provide the economic rewards that drug approvals do. The grocery manufacturers favor an alternative system of nongovernmental, third-party reviews of food additive petitions that would be paid for by the applicant.

APHIS regulates the development and use of veterinary biological products and genetically modified organisms to help prevent the use of ineffective or unsafe products. As part of this regulation, APHIS issues licenses and permits to the producers of veterinary biological and biotechnology products. APHIS does not charge user fees for providing these services. To prevent the importation, production, and distribution of impure, ineffective, unsafe, or impotent veterinary medicines and to regulate veterinary medicine manufacturing, APHIS licenses drug companies that produce veterinary biologics and issues permits for the manufacture and sale of each approved biologic. Veterinary biologics are medicines for the diagnosis, prevention, and treatment of diseases in animals. A small portion of these medicines are made using biotechnology. Veterinary biologics are different from animal drugs in that they generally act on the animal’s immune system, causing the body to react to the medicine. Animal drugs, on the other hand, attack the disease itself and are regulated by FDA.

In both its fiscal year 1996 and 1997 budget requests, APHIS proposed charging user fees for licensing, inspecting, and testing veterinary biologics. Specifically, APHIS proposed a general license fee for approving establishments to manufacture approved biologics products, a permit fee to manufacture each specific product, and a transit permit fee for the movement of biologics for research and evaluation and for the distribution and sale of biologics. APHIS estimated that it could charge user fees of about $3.5 million for its services.

Firms that manufacture, sell, transport, research, and evaluate veterinary biologics derive some specific identifiable benefits from APHIS services. Without an APHIS license or permit, veterinary biologics cannot be field-tested or produced, imported, transported interstate, or sold on either the domestic or international markets.
An APHIS license or permit also enhances public confidence that the veterinary biologic will not harm public health or the environment. In addition, APHIS licenses and permits help protect the livestock and pet industries from unfair competition by excluding firms that might manufacture unsafe or impure products. Thus, these services qualify for user fees under OMB’s Circular A-25. In addition, courts have ruled that when a license is a prerequisite to operating in a given industry, obtaining a license provides a special benefit that justifies a user fee. Other agencies that issue licenses or permits charge user fees. For example, the Nuclear Regulatory Commission charges a user fee for licensing firms to operate nuclear power plants. To date, the Congress has not approved APHIS’ veterinary biologics user fee proposals. We did not find any public record of the Congress’ reasons for not approving them. APHIS has again proposed user fees for its veterinary biologics licensing, inspection, and testing activities in its fiscal year 1998 budget. The animal drug industry has also not supported APHIS’ user fee proposals. The Animal Health Institute, which represents the animal drug industry, believes that the assessment of any fees on veterinary biologics would be detrimental to small firms, possibly forcing them to abandon needed products. In addition, the Institute believes that the biologics program benefits the U.S. population as a whole. The goal of APHIS’ biotechnology program is to approve innovative biotechnology techniques and processes that benefit the agricultural industry while protecting the environment. Biotechnology involves developing products that make use of genetically engineered organisms. New biotechnology techniques or processes are primarily used for improving agricultural crops. For example, a company may develop a new kind of soybean by genetically combining two different kinds of soybeans. The new soybean may grow faster, be more resistant to weather, or contain more vitamins and minerals than any other kind of soybean on the market. In order to market products that are manufactured or produced through new biotechnology, a company must obtain a permit from APHIS. APHIS issues three types of permits: (1) an import permit, (2) a transit permit for interstate movement, and (3) a permit to release the product to the environment, for example, disposing of product waste in a landfill. Before issuing a permit for field testing biotechnology products, APHIS reviews plans for field testing and the results of any preliminary tests. APHIS also analyzes the products’ environmental impact to ensure compliance with environmental laws and regulations. In both its fiscal years 1996 and 1997 budget requests, APHIS proposed charging user fees of about $1 million for its biotechnology services. The proposed fees would cover the direct cost of investigating and issuing notifications, petitions, and permits to use genetically engineered products. If indirect costs were included in the user fee, permit applicants would be charged an additional $2.3 million in fees. Without a permit from APHIS, a biotechnologically derived product cannot be field-tested or produced, imported, transported interstate, or sold on either the domestic or international markets. In addition, federal approval of the product fosters public confidence that the technology and resulting products will not harm public health or the environment. 
Thus, APHIS’ permit reviews qualify for user fees under Circular A-25 and legal precedent. APHIS has been unsuccessful in obtaining congressional approval for these user fees. We did not find any public record of the Congress’ reasons for not approving the proposals. APHIS has again proposed user fees for the issuance of biotechnology certificates in the agency’s fiscal year 1998 budget submission.

The biotechnology industry in general opposes APHIS’ proposed user fees. The industry argues that if user fees were too high, companies might be forced to do their work in Europe. In addition, some argue that since this program also benefits the public by ensuring that genetically engineered or modified plants or organisms do not put livestock or crops at risk, user fees should not be charged. We estimate that a user fee covering all the agency’s costs of issuing biotechnology permits, notifications, and petitions would run from about $400 per permit and notification, the most common types, to $3,680 per petition, which requires an APHIS environmental study. Biotechnology customers are generally large corporations such as Upjohn, Calgene, and Monsanto.

The federal government provides a number of food-related services for which no user fees are charged. In addition to meat and poultry inspection and marketing agreements and orders discussed in the body of this report, other federal services provided without charge that benefit specific individuals or industries include (1) the Food Safety and Inspection Service’s (FSIS) egg product inspections, laboratory services, pathogen reduction activities, and import inspections; (2) the Agricultural Marketing Service’s (AMS) seed regulatory activities; (3) the Grain Inspection, Packers and Stockyards Administration’s (GIPSA) regulatory oversight of packers and stockyards; and (4) the Food and Drug Administration’s (FDA) domestic and import compliance inspections. Table IV.1 lists the food-related services we reviewed that are provided without charge to identifiable beneficiaries. This appendix provides a general description of these services, their fiscal year 1995 funding, and arguments for and against charging user fees.

In addition to inspecting domestic meat and poultry, FSIS (1) inspects domestic egg products and imported meat, poultry, and egg products and (2) provides laboratory analysis and pathogen reduction services to support its inspection program. No user fees are charged for these services. The Egg Products Inspection Act of 1970, as amended, mandated that the federal government inspect egg processing plants to ensure that egg products are safe, wholesome, and properly labeled. An egg products plant inspection includes, among other things, a preoperations sanitary inspection, a rodent control program, and testing for the presence of pathogens such as salmonella. A federal inspector must be present at all times for an egg products plant to operate. FSIS inspects egg products without charge during regularly scheduled shifts, which may include several shifts daily, but charges user fees for overtime, or unscheduled shift, inspections. In fiscal years 1996 and 1997, FSIS proposed charging user fees for egg product inspections conducted during nonprimary shifts. In fiscal year 1995, FSIS obtained about $710,000 of the $11.57 million in funding for egg product inspections from user fees.
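Those figures imply that user fees covered only a small slice of the egg inspection program, as the sketch below shows.

```python
# Share of FY 1995 egg product inspection funding recovered through
# user fees, using the figures cited above.

total_funding = 11.57e6   # egg product inspection funding
user_fees = 0.71e6        # overtime/unscheduled-shift fees collected

print(f"Fee-funded share: {user_fees / total_funding:.0%}")  # -> 6%
# The remaining ~94% was covered by general fund appropriations.
```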
In its fiscal year 1998 budget request, FSIS proposed user fees for the salaries, benefits, and related costs associated with in-plant inspections of egg products at all establishments inspected by the agency. Such user fees, according to USDA estimates, would generate about $9 million in additional revenues.

A variety of arguments have been made in favor of user fees for FSIS inspection activities. In a recent report, USDA’s Inspector General concluded that federal inspections of egg products provided special benefits to industry, the costs of which should be funded through user fees. These special benefits include (1) helping assure public confidence in the safety and wholesomeness of the product, (2) permitting a plant to sell its products interstate and overseas, and (3) allowing FSIS’ stamp of inspection to be used as a marketing tool to promote the product’s superior quality.

The Congress did not approve FSIS’ fiscal year 1997 request to charge user fees for egg products inspections because it viewed the public as the primary beneficiary of such inspections. The Congress has generally taken the position that the costs of mandated inspections, with the exception of overtime and voluntary services, should be borne by the federal government. The United Egg Association, an industry trade association, opposed user fees for egg inspections, stating “these federally mandated programs were created solely to provide for the health and safety of the American people.” Furthermore, according to the Association, “the integrity of the food inspection programs and the need to ensure public confidence in the safety of food products has been part of the historical basis for public funding of mandatory food inspection programs.” Neither FSIS nor AMS, which administered the program before FSIS, has estimated the cost impact of charging a user fee for the inspections of egg products. However, based on USDA’s fiscal year 1995 funding for egg products inspection services, user fees would likely increase the cost to egg products producers by less than a half-cent per pound.

The FSIS laboratory services and pathogen reduction programs support meat, poultry, and egg product inspections through the scientific examination of these products for disease, contamination, or other forms of adulteration. FSIS operates three multidisciplinary laboratories and also accredits about 200 private laboratories to carry out food safety and composition tests. FSIS and accredited laboratories test for antibiotic residues, chemical residues, microbiological contamination, pathology, and serology. Testing is also done for processed product composition and economic adulteration, such as testing for moisture, fat, protein, or salt content in ham or poultry. Laboratory samples come from two sources: FSIS inspectors and plants. As part of the meat, poultry, and egg inspection programs, FSIS laboratories analyze, without charge, inspector-collected samples. The FSIS laboratories also analyze plant-supplied samples, but charge the plant for the services provided. In fiscal year 1995, the laboratory services program received about $19.24 million in funding. About $1.22 million came from user fees for overtime, laboratory accreditation, and analysis of plant-supplied samples. Similar to the laboratory services program, the pathogen reduction program provides essential support to the meat, poultry, and egg inspection programs.
The program’s goal is to control microbial contamination of meat and poultry products from farm to table, and work is performed at all three FSIS laboratories. In fiscal year 1995, FSIS’ pathogen reduction program received about $10.2 million in general fund appropriations; no user fees were charged.

FSIS laboratory services and pathogen reduction programs are essential support elements of the meat, poultry, and egg inspection services. Laboratory services and pathogen reduction were a single budget account until 1994, when, for visibility purposes, FSIS separated them into two different accounts. Nonetheless, these programs would most likely not exist or at least be much smaller if it were not for the meat, poultry, and egg products inspection programs. If user fees were charged to cover FSIS meat, poultry, and egg inspection costs, the fee calculation, according to OMB’s criteria, should include the costs associated with the laboratory analysis and pathogen reduction activities. OMB Circular A-25 states that user fees should cover the government’s full costs, including essential support costs, such as laboratory analysis.

USDA’s Inspector General recently recommended that FSIS either seek statutory authority to assess, collect, and retain user fees for its laboratory services or require the plants to assume financial responsibility for laboratory testing costs by having them send FSIS inspector-selected samples to accredited laboratories, with the plant paying directly for testing costs. The Inspector General also recommended that FSIS seek user fees for its pathogen reduction services, because they help ensure product quality and add to public confidence. Historically, the Congress has believed that meat inspection costs, including laboratory testing, should be borne by the federal government, because the public was the primary beneficiary. Others opposed to the fees argue that if laboratory analysis were entirely paid for by industry, the results of the laboratory analysis would lose some credibility. If industry passed the costs of FSIS laboratory analysis, pathogen reduction, and meat and poultry inspection services along to consumers, the additional cost per pound would be negligible. For example, FSIS estimates that a user fee covering all of these inspection-related costs would increase consumer prices for meat and poultry an average of about one-half cent per pound.

Federal meat, poultry, and egg products inspection laws require that countries exporting these foods to the United States impose inspection requirements at least equal to U.S. requirements. Each shipment offered for entry into the United States must be accompanied by an inspection document, issued by the responsible official of the exporting country, certifying that the products meet U.S. standards. FSIS spent about $11 million inspecting 2.6 billion pounds of imported meat, poultry, and egg products in fiscal year 1995. FSIS inspections of imported meat, poultry, and egg products consist of checking (1) the exporting country’s certifications and manifests to ensure that the number of packages in the lot agree with what is on the manifest and that no damage has occurred in transit, (2) the labels for truthfulness, and (3) the product for wholesomeness using organoleptic (sight, smell, and touch) techniques. Import inspectors also routinely pull import product samples for laboratory analysis.
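The spending and volume figures above make the per-pound cost of import inspection straightforward to approximate, as this sketch shows.

```python
# Approximate per-pound cost of FSIS import inspection, FY 1995,
# using the figures cited above.

inspection_cost = 11e6     # FSIS spending on import inspections
pounds_inspected = 2.6e9   # imported meat, poultry, and egg products

cents_per_pound = inspection_cost / pounds_inspected * 100
print(f"Implied cost: {cents_per_pound:.2f} cents per pound")  # -> 0.42
# consistent with the roughly half-cent-per-pound consumer impact
# discussed below
```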
Those in favor of charging user fees for FSIS import activities argue that importers receive a special benefit as a result of this federal service. Without the USDA inspection mark, importers cannot market their products in the United States. Federal inspection also adds to public confidence in the safety and wholesomeness of the product. A 1996 report by USDA’s Inspector General recommended that FSIS seek user fee authority for import inspections because they benefit specific identifiable individuals or firms by allowing entry to the U.S. market and enhancing public confidence through USDA’s stamp of inspection. In addition, charging user fees for FSIS inspection activities would reduce the inequities and inconsistencies that currently exist in charging for import activities. While FSIS does not charge for import inspections, APHIS charges a $27.50 fee per shipment to cover the cost of inspecting and issuing import permits for animal products such as horns, skins, and animal trophies.

The arguments raised against user fees for import activities at FSIS are similar to those raised regarding inspection. Some argue that if industry paid for import inspections, the public might doubt the credibility of the inspections or believe inspectors faced a conflict of interest between facilitating an import entry and protecting public health. In addition, small brokers and import dealers may be economically hurt by new user fees, and foreign countries may take reciprocal actions against U.S. exports or may view fees as a trade barrier. To be in compliance with the General Agreement on Tariffs and Trade, the United States could not charge for import inspections without charging for similar domestic inspections. Currently, domestic inspections are provided without charge. An import user fee, when prorated across all imported meat and poultry products, would not have a significant impact on consumers. For example, if a user fee had been charged for the 2.6 billion pounds of imported meat, poultry, and egg products FSIS reviewed in 1995 and the cost passed on to the consumer, prices would increase an average of about one-half cent per pound.

In accordance with the Federal Seed Act of 1939, AMS conducts an enforcement program to ensure truthful labeling and fair competition in the seed industry. AMS conducts the program in cooperation with the states, each of which has a state seed law with jurisdiction over sellers within that state. To enforce the interstate provisions of the act, AMS has cooperative agreements with the states. About 500 state inspectors are authorized to inspect seeds subject to the act. Seed samples are routinely drawn by state inspectors to monitor seeds sold commercially. AMS received $1.17 million in fiscal year 1995 general fund appropriations for the federal seed program. No user fees were charged. States refer apparent infractions of the act to AMS for verification and action. Based on the results of tests and investigations, AMS attempts to resolve each case administratively by issuing “warning notices” or assessing penalties. For cases that cannot be resolved administratively, AMS will take appropriate legal action.

In 1987, AMS proposed charging user fees to fund the costs associated with its seed program. AMS proposed charging each of the approximately 3,000 interstate seed shippers a license fee. The fee would be based on the dollar value of seed sold. Currently, 37 states charge user fees for their intrastate seed inspections.
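As a rough sense of scale for such a license fee, dividing the program’s appropriation evenly across the shippers cited above gives a flat-fee baseline; the report’s own per-shipper estimate, which appears below, is of the same order.

```python
# Flat-fee baseline for the federal seed program, FY 1995, using the
# figures cited above; an actual fee would vary with seed sales value.

program_cost = 1.17e6   # general fund appropriations for the seed program
shippers = 3_000        # approximate number of interstate seed shippers

print(f"Flat fee per shipper: ${program_cost / shippers:,.0f}")  # -> $390
# of the same order as the roughly $423-per-shipper estimate cited below
```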
Those in favor of user fees for the federal seed program argue that AMS seed inspection activities benefit those involved in the interstate seed business by ensuring fair competition and increasing the confidence of buyers in the quality of the product. Seed sellers also benefit from the increased confidence that buyers have that the seeds they purchase are properly labeled. Those opposed to user fees for the program argue that seed inspections do not provide a special benefit to industry that would justify a user fee. The seed industry opposed the 1987 AMS user fee proposal, saying that the public benefited from the program, not just the industry. The industry continues to oppose user fees for this program because it believes that user fees are not appropriate for a mandatory regulatory program.

A user fee to cover AMS’ fiscal year 1995 seed program funding of $1.17 million would have equated to about $423 per seed shipper. If the user fees were based on the dollar value of seed sold, the amount charged smaller dealers would be less than the average, while the amount charged larger dealers would be somewhat greater than the average.

GIPSA is responsible for administering the provisions of the Packers and Stockyards Act of 1921. The act is aimed at ensuring fair business practices and competitive markets for livestock, meat, and poultry. In fiscal year 1995, GIPSA spent about $11.7 million without reimbursement on activities aimed at fostering fair and open competition, guarding against deceptive and fraudulent practices, and providing payment protection in the marketing of livestock, meat, and poultry. To accomplish these aims, GIPSA investigates fraudulent practices in livestock marketing, such as false weighing, manipulating weights and prices, switching of livestock, and misrepresenting the source, origin, and health of livestock. GIPSA also checks the accuracy of scales used for weighing livestock, meat, and poultry and monitors the operation of scales to ensure that weighing is done correctly. GIPSA’s payment protection activities provide financial security for livestock sellers by protecting against a buyer’s default on payment of a contract.

Since fiscal year 1995, GIPSA has proposed license fees to fund the cost of administering the Packers and Stockyards Act. Under its proposal, all 24,125 packers, live poultry dealers, stockyard owners, market agencies, and dealers registered with GIPSA would be charged a license fee. Currently, the Packers and Stockyards Act requires that (1) market agencies and dealers register with GIPSA and (2) slaughterhouses, processing packers, and poultry operations doing over $500,000 worth of business a year file an annual report with GIPSA.

Packers, stockyard owners, and others who are subject to GIPSA’s regulation derive benefits because they are protected against deceptive and fraudulent practices in the marketing of livestock, meat, and poultry. In addition, livestock and poultry producers receive payment protection. Thus, according to Circular A-25, a user fee would be justified. AMS charges a similar fee for its activities under the Perishable Agricultural Commodities Act (PACA) of 1930. PACA promotes fair trading practices in the marketing of fresh and frozen fruits and vegetables. The act prohibits unfair and fraudulent practices in the industry and provides for dispute resolution outside the civil court system.
Sellers must provide the quality and quantity of products specified in contracts, while buyers must accept and promptly pay for products received in accordance with the contract terms. The PACA program is funded primarily from license fees paid annually by approximately 15,000 buyers and sellers, including dealers, retailers, processors, and truckers. The amount each licensee pays is based on the number of branches and business facilities owned. Fees range from about $300 to $4,000.

The National Cattlemen’s Association, an industry association that represents approximately 230,000 cattlemen, breeders, producers, and feeders, opposes the imposition of license fees to cover the costs of administering the Packers and Stockyards Act. The Association believes that the companies that would be responsible for paying these fees accrue no benefit from the program. Therefore, the Association argues that GIPSA’s licensing activities should be publicly funded. GIPSA has estimated that the annual licensing fees for a single business operation subject to regulation under the Packers and Stockyards Act would range from about $600 to $7,500. The amount of the fee would vary based on the size of the firm. If license fees were passed on to the farmer, rancher, or the consumer, the impact on meat and poultry prices would be negligible, according to a GIPSA official.

FDA is responsible for, among other things, the safety of the nation’s domestically produced foods and animal drugs. The agency is also responsible for ensuring that foods imported into this country meet the same standards as domestic products. To carry out these responsibilities, FDA conducts inspections at domestic food and animal drug plants and at ports of entry. No user fees are charged for these services.

FDA is responsible for ensuring the safety of all foods sold in interstate commerce, except for meat, poultry, and eggs, which are regulated by USDA. FDA conducts a variety of regulatory compliance activities, including (1) monitoring the conditions under which food is manufactured, processed, packed, and stored by inspecting food establishments and products; (2) collecting and conducting laboratory analysis of food samples; and (3) investigating violations and initiating enforcement actions when appropriate. FDA also investigates USDA referrals of illegal drug residues found on meat and poultry and ensures that animal drugs are manufactured in accordance with established procedures. Domestic food-related compliance inspections were funded at about $42.35 million in fiscal year 1995, and domestic animal drug compliance activities received another $21.52 million; no user fees were charged. In a recent report, we found that, based on its 1994 operating plan, FDA inspected food processing plants about once every 8 years. FDA inspects animal drug facilities about once every 2 years.

Those favoring user fees for FDA’s regulatory oversight of food and animal drugs argue that these activities benefit firms in these industries by (1) ensuring the safety and effectiveness of their products, (2) increasing consumer confidence in their products, (3) reducing their exposure to liability, and (4) protecting them from unfair competition. A 1990 report by the Department of Health and Human Services’ Inspector General stated, “... user fees in the Food and Drug Administration, properly instituted, represent a legitimate method to recover regulatory costs.
Such fees would be consistent with fee systems in other federal regulatory environments.” In a 1991 report, the Inspector General recommended that FDA collect an inspection user fee from all food firms, in part so that the frequency of inspections could be increased. On the other hand, those opposed to user fees for FDA’s food-related compliance activities argue that these activities do not provide specific benefits to industry but protect the public and, therefore, are not appropriate for user fees. In March 1995, an official representing the Grocery Manufacturers of America, a trade association whose membership comprises many of the largest food companies in America, testified before the Subcommittee on Agriculture, House Committee on Appropriations, that FDA inspection programs should continue to be funded through appropriations, not user fees. Specifically, according to the official, to require the food industry to pay for any form of government regulation intended strictly to benefit the public is tantamount to a food tax. The Animal Health Institute, which represents animal drug manufacturers, opposes user fees for FDA’s compliance activities related to animal drugs. According to an Institute official, the Institute does not support user fees for animal drug reviews, and any discussion of user fees for animal drugs must begin with improving the review and approval process before considering fees for other FDA activities related to animal drugs. Arguably, any benefits that industry may derive from FDA compliance inspections of food and animal drug firms are minimized by their infrequency. Furthermore, it is difficult to justify charging a user fee for infrequent FDA compliance inspections, so long as the meat, poultry, and egg industries receive continuous or daily FSIS inspections without charge. About 48,000 food firms are in FDA’s official inventory. To fully fund domestic food compliance inspections in fiscal year 1995, each firm in the official inventory would have had to pay, on average, a $900 annual fee. To lessen the burden on smaller firms, a fee schedule could be developed that would vary based on the size or economic value of the firm’s products. There are about 4,700 animal drug firms in FDA’s official establishment inventory. To fully fund animal drug compliance inspections in fiscal year 1995, each firm in the official inventory would have had to pay, on average, a $4,600 annual fee. To lessen the burden on smaller firms, a fee schedule could be developed that would vary based on the size or economic value of the firm’s products. All food products that are imported into the United States must meet the same standards as domestic products. For example, foods must be safe to eat and produced under sanitary conditions. However, rather than inspect each shipment of imported food, FDA chooses samples to inspect. Most food products are admitted into the United States without sampling. In fiscal year 1995, FDA spent about $38.78 million inspecting imported foods. No user fees were charged. In its fiscal year 1993 budget, FDA proposed charging about $60 million in user fees to fund inspections of imported products, including foods, drugs and medical devices. In justifying its user fee proposal, FDA argued that importers benefit from FDA’s activities through increased consumer confidence in their products. In fiscal years 1996 and 1997 FDA again proposed charging user fees to increase the effectiveness and efficiency of its regulatory compliance program for imported products. 
Approximately $15 million in user fees were proposed to help pay for a computer system that would improve the processing and monitoring of import entries. FDA argued that importers and brokers would benefit from the new system through faster turnaround times, elimination of large volumes of paperwork, and reduced costs of doing business.

Neither of these proposals for import user fees was approved by the Congress. In rejecting the 1993 proposal, the Congress was concerned, among other things, about the impact on FDA’s operations if the expected user fee revenues did not materialize. In addition, opponents argue that user fees on imports could (1) add to the cost of food for consumers, (2) hinder food imports, which make up an increasing proportion of the U.S. food supply, (3) pose an unfair burden on small businesses, which import small lots of foods, and (4) be unfair if only those firms whose products are sampled by FDA had to pay a user fee. The Association of Food Industries, which represents nearly 200 companies in the food import business, is opposed to user fees for FDA’s import inspection activities because its members are concerned that (1) fee revenues would not be spent exclusively on improving the computer system that supports import entries and (2) fees would rise each year and might continue indefinitely. Finally, as we mentioned earlier, to be in compliance with the General Agreement on Tariffs and Trade, the United States could not charge for the inspection of imported foods and food-related products without charging for similar domestic inspections. Currently, domestic inspections are provided without charge.

FDA has stated that charging an import fee would be relatively simple. According to FDA, all entries would be charged, regardless of whether they were sampled, and a schedule could be established to take into account the range of values and size of the import lots. Charging user fees for FDA’s food import inspections should have a minimal impact on the cost of imported foods. FDA’s funding for food import inspections represented less than 0.2 percent of the value of food-related imports in fiscal year 1995.

To identify and evaluate opportunities for increasing the share of program funding paid for by beneficiaries of food-related services, we identified (1) the types of food-related services provided by federal agencies, (2) the extent to which beneficiaries currently pay for such services through user fees, and (3) potential opportunities for recovering more of the service costs through user fees, as well as arguments for and against doing so.

To identify the types of food-related services provided by federal agencies, we identified the principal federal agencies (Agricultural Marketing Service; Animal and Plant Health Inspection Service; Food and Drug Administration; Food Safety and Inspection Service; Grain Inspection, Packers and Stockyards Administration; and National Marine Fisheries Service) that provide food-related services. Some other agencies may also provide limited amounts of food-related services, but we excluded them from our review. From each agency, we obtained and reviewed programmatic and budget information on the food-related services they provide. In addition, we met with agency officials to discuss the type of food-related services they provide and the beneficiaries of these services. We also reviewed information on food-related services from our previous reports and those of inspectors general.
To identify the extent to which beneficiaries pay for food-related services through user fees, we asked the agencies to provide user fee and appropriations funding data for their food-related activities. We did not verify the accuracy of these data. In addition, we obtained and reviewed the agencies’ budget documents and met with agency officials to discuss the degree to which program activities were funded through user fees. To identify potential opportunities for recovering more of the service costs through user fees, as well as arguments for and against doing so, we began by examining the programs that either charged partial user fees or no user fees. We then judgmentally selected those programs where there appeared, in comparison to other programs, to be inconsistencies in user charges or where there appeared to be private beneficiaries of the services. We did not review all of the food-related services at the six agencies for which user fees may be appropriate. We met with Office of Management and Budget and agency officials to discuss the agencies’ annual budget submissions and identify services that have been proposed as appropriate for user fees in the past. We also discussed with OMB officials their Circular A-25, which provided the principal criteria for identifying opportunities for charging user fees to beneficiaries of federal services. To identify the arguments in favor of and opposed to user fees, we met with agency officials and representatives of industry groups that would be affected by additional user fees. We also reviewed agency user fee proposals, congressional reports on agency appropriations, and industry position papers on user fees. Our work was conducted between April 1996 and January 1997 in accordance with generally accepted government auditing standards. Major contributors to this report were Keith W. Oleson, Assistant Director; Stephen D. Secrist, Project Leader; Denis P. Dunphy; LaSonya R. Roberts; Jay L. Scott; Jonathan M. Silverman; and Oliver H. Easterwood.
GAO reviewed opportunities to increase the share of funding by beneficiaries for food-related services provided by the federal government, focusing on: (1) the types of food-related services provided by the federal government; (2) the extent to which beneficiaries currently pay for such services through user fees; and (3) potential opportunities for recovering more of the service costs through user fees, as well as arguments for and against doing so. GAO noted that: (1) federal agencies provide individuals, firms, and industries such food-related services as: (a) premarket reviews, including approving new animal drugs and food additives for use and grading grain and other commodities for quality; (b) compliance inspections of meat and poultry and domestic foods and processing facilities to ensure adherence to safety regulations; (c) import inspections and export certifications to ensure that food products in international trade meet specified standards; and (d) standard setting and other support services essential to these functions; (2) about one-quarter of the $1.6 billion spent by the federal government in fiscal year (FY) 1995 on food-related services was funded through user fees; (3) the premarket service of quality grading of grains and agricultural commodities was the primary food-related service funded through user fees; (4) nearly three-quarters of the cost of food-related services was funded by general fund appropriations rather than user fees; (5) compliance activities, such as the inspection of meat and poultry, were the primary food-related activities funded through general fund appropriations; (6) on the basis of its review of selected food-related services, GAO determined that potentially about $723 million in additional user fees could have been charged for services provided in FY 1995; (7) according to the Office of Management and Budget's criteria, additional user fees could have been assessed in three principal areas; (8) additional user fees could have been charged for some federal services by including the full costs of providing the service, such as the cost of setting standards, in the fee calculations; (9) user fees currently charged for certain food-related services, such as agricultural inspections at the nation's borders and ports of entry, could have been consistently applied to similar types of services that are provided without charge; (10) user fees could have been assessed on certain services, such as the inspection of meat and poultry, that are provided to identifiable beneficiaries without charge; and (11) although the arguments for and against user fees vary with the agency and service in question, the arguments center on who benefits from the service, the general public or specific beneficiaries, and the impact the user fee would have on producers or consumers.
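The per-firm fee arithmetic behind the $900 and $4,600 figures cited above is straightforward to reproduce. The short sketch below (in Python) illustrates it; the budget totals are back-calculated from the report’s firm counts and average fees, and the size-tiered schedule is purely hypothetical, not an FDA or GAO proposal.

# Back-of-the-envelope reproduction of the per-firm fee figures cited above.
# Firm counts come from the report; the budget totals are approximations
# implied by the reported averages (count x average fee).

def average_annual_fee(total_cost, firm_count):
    # Average fee if inspection costs were spread evenly across all firms.
    return total_cost / firm_count

food_firms = 48_000            # firms in FDA's official inventory
food_budget = 43_200_000       # implied FY 1995 domestic food compliance cost
print(average_annual_fee(food_budget, food_firms))   # -> 900.0

drug_firms = 4_700             # animal drug firms in the establishment inventory
drug_budget = 21_620_000       # implied FY 1995 animal drug compliance cost
print(average_annual_fee(drug_budget, drug_firms))   # -> 4600.0

def tiered_fee(annual_sales):
    # Hypothetical size-based schedule of the kind the report suggests could
    # lessen the burden on smaller firms; thresholds and amounts are invented.
    if annual_sales < 1_000_000:
        return 300
    if annual_sales < 10_000_000:
        return 900
    return 2_500

Under any such tiered schedule, total fee revenue would depend on the distribution of firm sizes, so the tiers would need to be calibrated if the goal were to recover the same aggregate inspection cost as a flat fee.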
The AOC’s authority to contract for goods and services is vested by statute in the agency head, who has delegated this responsibility to the Chief Administrative Officer (CAO). The CAO has responsibility to, among other things, administer the procurement function on behalf of AOC. The Acquisition and Material Management Division (AMMD), which falls under the CAO, is authorized to enter into contracts on behalf of the agency. AMMD is the primary office responsible for developing contracting policies and procedures, appointing contracting officers, and awarding and overseeing contracts. Requirements for goods and services are identified by AOC’s operational units, which consist of various jurisdictions and offices that handle day-to-day operations, including the support, maintenance, and operation of buildings as well as construction and renovation projects. While AMMD has the primary responsibility of awarding and administering contracts, AMMD often works with the AOC’s jurisdictions and offices to assist in monitoring the progress of contracts awarded to support AOC’s various projects, such as the restoration of the Capitol Dome. From fiscal years 2011 through 2015, AOC obligated, on average, $326 million annually to procure goods and services. During the 5-year period, as figure 1 shows, the level of contracting actions generally declined while obligations on contracts and orders varied. There was a substantial increase in obligations between fiscal years 2014 and 2015, when AOC awarded a contract to begin construction for the Cannon building renewal project. The vast majority of AOC’s spending to procure goods and services stems from the agency’s jurisdictions listed below in figure 2. Among the jurisdictions, the Capitol Power Plant and the House Office Buildings collectively accounted for 55 percent of AOC’s fiscal year 2015 contract obligations. As a legislative branch agency, AOC is subject to some but not all of the procurement laws applicable to government agencies. For example, both AOC and executive branch agencies are subject to the Buy American Act and the Contract Disputes Act of 1978. Additionally, in some instances AOC has adopted certain procurement policies and regulations it would not otherwise be subject to. For example, although not subject to the Small Business Act, AOC worked with the Small Business Administration to establish a small business subcontracting and set-aside program to help the AOC more fully utilize small businesses. In addition, AOC has adopted certain characteristics and clauses of the FAR. For example, AOC incorporates FAR clauses related to contract changes, inspections, differing site conditions, availability of funds, and terminations. According to AOC officials, incorporating FAR clauses into AOC contracts offers significant benefits because the contract clauses have been drafted and reviewed by subject matter experts across the government and are familiar to government contractors. According to AOC officials, federal case law is usually available to address any contract interpretation issues. Our previous work has shown that acquisition planning, market research, and competition are key foundations for successful acquisition outcomes. Acquisition planning is the process by which agencies establish requirements and develop a plan to meet those requirements. Generally, project and contracting officials share responsibility for acquisition planning activities intended to produce a comprehensive plan for fulfilling the agency’s needs in a timely manner and at a reasonable cost.
Our past work has found that acquisition planning is strengthened by documenting decisions to guide existing acquisitions and capturing important lessons and other considerations that can be used to inform future procurements. Market research is the process of collecting and analyzing data about capabilities in the market that could satisfy an agency’s needs. It is a critical step in informing decisions about how best to acquire goods and services. Effective market research can help agencies determine the availability of vendors to satisfy requirements and improve the government’s ability to negotiate fair and reasonable prices. Competition is the cornerstone of a sound acquisition process and a critical tool for achieving the best return on investment for taxpayers. Using full and open competitive procedures to award contracts means that all responsible contractors are permitted to submit offers. The benefits of competition in acquiring goods and services from the private sector are well established. Competitive contracts can help save taxpayer money, conserve scarce resources, improve contractor performance, curb fraud, and promote accountability for results. AOC developed a contracting manual to provide guidance for agency officials responsible for purchasing goods and services. The manual was implemented in April 2014 and includes guidelines on topics similar to those included in the FAR. AOC’s contracting manual outlines procedures and guidance for acquisition planning, market research, and competition. In general, for the 21 contracts and orders we reviewed, AOC officials implemented procedures related to these critical functions, such as documenting justifications for the use of noncompetitive procedures, in a manner consistent with the manual. AOC has identified competition as a key objective, and the agency tracks the number of sole-source awards and the percentage of dollars obligated under sole-source awards. However, the agency conducts limited analysis of the factors driving the number of sole-source awards or the level of competition achieved across different product types. Such analysis could help identify where additional management attention may be needed to maximize competition to the fullest extent. In 2014, AOC issued a contracting manual that incorporates statutes and federal regulations applicable to the AOC, as well as internal policies, in order to provide uniform policies across the agency and guidance to personnel. The AOC Inspector General had previously found that while AOC had developed procurement policies, orders, and procedures, they were not consolidated in one location, which made them difficult for AOC staff to access. The manual covers topics central to AOC day-to-day contracting functions, such as acquisition planning, market research, and competition, all of which we have previously found to be key aspects of a sound acquisition process. AOC started requiring written acquisition plans in August 2012, approximately 18 months prior to the publication of the contracting manual. Though AOC staff engaged in acquisition planning to inform procurement decisions before August 2012, plans were not consistently documented, according to contracting officers. Further, AMMD officials stated that another reason they started requiring acquisition plans was to help enforce acquisition timeframes agreed upon by contracting officers and the office that needed the acquisition.
According to officials, the requiring offices consistently missed important deadlines, oftentimes resulting in lengthy acquisition cycles. As a result, AMMD implemented the requirement for written acquisition plans to help alleviate this problem. AMMD officials believe that requiring written acquisition plans has helped shorten acquisition timeframes. AOC developed a template to assist staff in preparing written acquisition plans, which, in turn, helps to ensure key information is considered and captured for each acquisition. AMMD officials are considering options to revise the template staff use to document acquisition plans so that it is more adaptable to the specific circumstances of a procurement. As shown in table 1, the AOC manual shares some common acquisition planning principles with the FAR. On all the contracts and orders we reviewed, we found that the AOC conducted acquisition planning. AOC’s practices generally met the agency’s requirements for acquisition planning, including preparing written acquisition plans, addressing the prospects for competition, and involving the appropriate stakeholders in the planning process, among other things. Of the 21 contracts and orders that we reviewed, seven files required written acquisition plans, based on the dollar threshold outlined in the contracting manual as well as the timing of the requirement, and five of those seven files had written acquisition plans. For the remaining two files that required acquisition plans, AMMD officials cited an administrative oversight and a requirement to use a mandatory service provider as the reasons for not preparing a written acquisition plan. In addition, we found that two other files contained written acquisition plans even though they were not required. The contracting officer on one of those projects, the Refrigeration Plant Revitalization project, stated that while not required, a written acquisition plan was completed due to the cost, complexity, and visibility of the project. The AOC’s contracting manual requires that a written plan be completed well in advance of when an acquisition is needed but does not establish timeframes for how far in advance acquisition plans should be completed. AMMD officials noted that the nature and complexity of the acquisition—such as a new or recurring requirement—determines the extent of advance preparation needed to develop the acquisition plan. As a result, AOC did not establish specific timeframes in the contracting manual. As shown in table 2, AOC has implemented market research policies in its manual that share some common principles with the FAR. In our review of AOC practices, we found that they generally met the requirements to conduct and document market research activity. We found that AOC employs a number of different ways of conducting market research that reflect what is in the contracting manual. For instance, AOC will often invite vendors to a potential construction worksite before publicizing a solicitation. This helps AOC identify potential qualified vendors and also allows vendors an opportunity to learn more about the requirement and determine if they want to make an offer on a project. We found that AOC held industry days for 5 of the 21 contracts and orders we reviewed, for projects such as the Cannon Renewal project, the Dome Restoration project, and the replacement of the skylight in the Hart Building, among others.
Another example of market research that AOC performed was the use of a “sources sought” notification to determine the capabilities of the marketplace. For the 21 contracts and orders in our sample, we found that market research was documented in different ways. For instance, if a contract had an acquisition plan associated with it, the market research performed for that requirement would be documented in the acquisition plan. Additionally, we found that contracting officers documented the research performed and the results of those searches in memoranda contained in the contract files. AMMD officials stated that they are taking action to improve the quality of market research conducted, which is typically performed by the requiring office. AMMD plans to provide market research training in 2016 to enhance staff’s knowledge of how to conduct and document effective market research. AOC’s market research training is expected to focus on documenting market research, using a standardized template to capture the steps taken, and the results of market research efforts. AOC’s contracting manual promotes full and open competition in the procurement process. Under full and open competition, all responsible suppliers are provided an opportunity to compete for the agency’s contracts. AOC’s manual shares some common competition principles with the FAR, as highlighted in table 3. Within our sample of contracts and orders, we found that AOC generally met its competition requirements as provided for in the agency’s contracting manual. Ten of the 21 contracts and orders we reviewed were competed and received more than one offer. In our previous work, we have reported that competitions that yield only one offer in response to a solicitation deprive agencies of the ability to consider alternative solutions in a reasoned and structured manner. All 11 of the non-competed contracts and orders we reviewed were awarded using noncompetitive procedures based on exceptions cited in the AOC contracting manual. Specifically, two contracts for janitorial services were awarded without full and open competition because of statutory provisions requiring that agencies use a list of specified providers for these services. Three task orders were awarded under base contracts that had originally been competed; in these three cases, since the original base contracts were awarded to only one vendor, any task order awarded under the base contracts is not required to be competed. Four contracts were awarded noncompetitively because only one supplier was available. For example, when AOC was seeking to award a contract for the audio listening systems used as part of guided tours at the Capitol Visitor Center, AOC evaluated three vendors and determined that it was more cost effective and a better value to the government to maintain and replace the existing brand of listening devices instead of purchasing a new system. One contract was awarded noncompetitively to develop continuity of operations plans in case of emergencies; the justification stated that open competition would publicly reveal sensitive information that could pose a security risk. As a result, AOC awarded the contract to a firm that had been used previously in order to limit the number of individuals with access to information on security risks and vulnerabilities. One contract to provide construction administration services, such as field observations, was awarded to the company that had designed and prepared all drawings and specifications for the project.
The AOC believed that this company had the requisite technical expertise and therefore was in a unique position to provide the necessary evaluations and review of the documents. AOC has taken steps to gauge its effectiveness in implementing the agency’s policy to promote competition in the procurement process; however, it currently conducts limited analysis in this area. AOC leadership considers competition to be a key priority for the agency. The AOC contracting manual also emphasizes the importance of competition and recognizes market research as a means to evaluate competition. Our analysis of AOC procurement data showed that the agency competed approximately 50 percent of its contract obligations for the past 3 fiscal years—compared to 65 percent for the federal government overall. Federal internal control standards call for agencies to establish mechanisms to track and assess performance against their objectives. In addition, our prior work has shown that for policies and processes to be effective, they must be accompanied by controls and incentives to ensure they are translated into practice. The AOC began to collect competition data in fiscal year 2012. AOC has implemented mechanisms to track data on the number of non-competed awards and dollars obligated. In addition, AOC tracks competition levels across its organizational units as well as the agency’s use of allowable exceptions to competition. For example, AOC’s data show that in fiscal year 2015, the primary basis for awarding noncompetitive contracts was the “only one responsible source” exception to competition—meaning that only one vendor could fulfill the requirement. While this is a good first step to gaining insight into the agency’s competition efforts, additional analyses could provide key information that highlights trends in AOC’s overall competition levels, the factors driving the use of the “only one responsible source” exception, such as the quality of AOC’s market research, the types of goods and services that AOC is most successful in competing, and areas where focused attention may be needed. AOC officials did not dispute the value of further analyzing data about the agency’s competition efforts, but noted they have not previously identified the need to conduct analyses beyond their current efforts. Tracking competition data instills accountability at all levels and ensures that AOC leadership has the information readily available to make decisions rather than relying on ad hoc means. Routinely tracking its procurements at a more granular level—such as competition across goods and services—would also provide AOC leadership with important information to identify successful competition strategies that can be replicated across the agency and help the agency focus its resources to maximize competition. AOC uses various approaches to monitor contractors’ progress and work quality and address contractor performance, but does not have suspension and debarment procedures. AOC, like other agencies, primarily relies on contracting officers and contracting officer’s technical representatives (COTR), who use oversight tools such as inspection reports and periodic progress meetings to monitor contracts. When AOC identifies contractor performance problems using these tools, AOC has a variety of approaches at its disposal to help address performance issues, such as providing written notice to the contractor highlighting the problem and seeking action to address the performance issue.
If a contractor does not take action to improve performance, AOC may then invoke a number of contractual provisions, including the collection of liquidated damages from the contractor. Although AOC has tools and resources at its disposal to manage and correct deficiencies on a contract-by-contract basis, AOC does not have a suspension and debarment process that allows it to exclude an individual or firm from receiving future AOC contracts. AOC uses a number of oversight tools to monitor contractor performance and protect the government against substandard work from the contractor. AOC’s monitoring approaches are generally applicable to all the agency’s projects. Depending on the type of project and severity of the deficiency, AOC may employ some or all approaches in any sequence it deems appropriate to seek immediate remedies or damages. As described below, across our sample of contracts and orders, we observed AOC’s use of a variety of approaches, including oversight tools, performance communications, and some of the available contractual provisions, to monitor and address contractor performance, as shown in figure 3. Tools identified by AOC officials to oversee contracts include onsite representatives, daily progress reports, inspection reports, and progress meetings, as described in table 4. These oversight tools can help AOC identify instances of poor workmanship, safety issues, or timeliness problems, among other things.

In one case we reviewed, AOC raised schedule concerns with the contractor at a progress meeting and requested a recovery plan. Between June and August 2014, AOC issued two letters of concern due to continued schedule delays and overall project management concerns, and in January 2015 AOC gave the contractor a negative interim performance rating related to schedule and management areas to emphasize the importance of the situation. The contractor’s superintendent was subsequently replaced, among other actions, and performance improved significantly, recovering lost time; in October 2015, AOC gave the contractor a more favorable interim performance rating in these two areas in recognition of the improvement.

Notice to Comply: If performance issues are not resolved through routine communication, AOC may then issue a notice to comply to the contractor, which formally notifies a contractor that it is not complying with one or more contract provisions. Based on our review, these notices are generally issued by the COTR, lay out the specific performance concern or contract compliance issue, and request corrective action by the contractor within a specified time frame. AOC may issue multiple notices on the same matter before it is fully addressed. The notice to comply does not always indicate a performance problem but could also be issued for noncompliance with administrative contract requirements, such as failure to submit progress reports. The 53 notices to comply that we reviewed from our sample of contracts and orders typically addressed safety, work quality, or administrative contract compliance concerns.

Letter of Concern: If performance issues are not resolved through routine communication or notices to comply, AOC officials said the agency may then issue a letter of concern to a contractor. Based on our review, letters of concern are very similar to notices to comply, as they typically lay out a specific concern and request corrective action within a specified time frame. The main difference between a notice and a letter is that letters are issued by the contracting officer instead of the COTR.
The 27 AOC letters that we reviewed also addressed many of the same types of issues as notices to comply—safety, work quality, and personnel or schedule concerns.

Contractor Performance Assessments: AOC routinely assesses contractor performance on an interim and final basis in governmentwide contractor performance systems, and the ratings are available to other federal agencies through the Past Performance Information Retrieval System. In completing past performance evaluations, AOC officials rate the contractor on various elements such as the quality of the product or service delivered, schedule timeliness, and cost control. AOC officials said that contractor performance assessments are one of the most valuable methods available to incentivize a contractor to improve performance because a negative assessment could limit the contractor’s ability to be awarded future contracts from AOC or other federal agencies. AOC also has a variety of contractual provisions it can invoke if it determines that a contractor has failed to meet some or all of its contractual requirements. For example, certain provisions allow AOC to seek damages from poorly performing contractors.

Contract Disputes: The Contract Disputes Act of 1978 outlines the process for resolving disputes between a contractor and the government. AOC policy calls for seeking an amicable resolution before invoking procedures identified in the Contract Disputes Act. When all attempts to settle the dispute amicably fail, AOC must issue a contracting officer’s final decision on the matter. All of the contracts we reviewed included the relevant contract clause that sets forth this process for resolving disputes. However, none of the contracts that we reviewed involved a dispute between the contractor and the government that required invoking the processes laid out by the disputes clause.

Liquidated Damages: To protect the government from construction delays, the AOC contracting manual requires that all construction contracts valued over $50,000 include a liquidated damages clause. The liquidated damages clause provides that if the contractor fails to complete the work within the time specified in the contract, the contractor pays the government a daily fixed amount for each day of delay until the work is completed or accepted. According to its guidance, AOC generally determines the daily fixed amount based on the dollar value of the contract. For the 7 construction contracts in our sample that met the applicable threshold for liquidated damages, daily rates ranged from $200 to $28,201. However, AOC had not invoked the clause for any of these contracts. Further, Congress recently enacted legislation prohibiting the AOC from using funds made available by the Consolidated Appropriations Act, 2016, to make incentive or award payments to contractors for work on contracts that are behind schedule or over budget, unless certain determinations are made.

Termination for Default: When poor contractor performance cannot be corrected through other means, AOC may take additional steps and ultimately terminate the contract for default. AOC would start the process using either a cure notice or a show-cause notice. A cure notice typically provides the contractor at least 10 days to correct the issues identified in the notice or otherwise fulfill the requirements. A show-cause notice notifies the prime contractor that the AOC intends to terminate for default unless the contractor can show cause why it should not be terminated.
Typically, a show-cause notice calls the contractor’s attention to the contractual liabilities it faces if the contract is terminated for default. None of the contracts in our sample resulted in a cure notice or show-cause notice; however, AOC officials said that these have been used in a couple of instances from fiscal years 2013 through 2015. For example, AOC issued a cure notice in 2013 to a contractor due to repeated poor quality control that delayed progress on the project. The cure notice followed repeated attempts by AOC to address the issues with the contractor through other methods, including issuing five letters of concern in the 6-month period leading up to the cure notice. AOC currently has no agency-wide process for suspending or debarring individuals or firms that the agency has determined lack the qualities that characterize a responsible contractor. In the absence of such a process, AOC does not have a mechanism that allows it to determine in advance of a particular procurement that an individual or firm lacks present responsibility and therefore should not receive AOC contracts. The FAR and the AOC contracting manual provide that contracts should be awarded only to individuals or firms that are responsible prospective contractors. A responsible contractor is one that has the financing, workforce, equipment, experience, and other attributes needed to perform the contract successfully. Similar to executive branch agencies, contracting officers at AOC are required to review these factors prior to the award of any contract. In addition, contracting officers must review the excluded parties list in the governmentwide System for Award Management (SAM), which is maintained by the General Services Administration, to determine whether the contractor in line for an award has been suspended, debarred, or proposed for debarment by any other agency. A suspension temporarily disqualifies a contractor from federal contracting, while a debarment excludes a contractor for a fixed period, generally up to 3 years. Although AOC officials must check the list of excluded parties in SAM, and as a matter of policy AOC declines to award contracts to excluded firms or individuals, AOC has no procedure for taking its own suspension or debarment actions or adding firms to the list of excluded parties. Our prior work has found that there are several agencies, like AOC, that lack an effective suspension and debarment process. In August 2011, we reported that six executive branch agencies had not taken any suspension or debarment actions in the previous 5 years despite spending significant amounts of appropriated funds buying goods and services. By contrast, four other agencies had active suspension and debarment programs, and we identified three factors that these agencies had in common. First, these four agencies had detailed suspension and debarment policies and procedures. Second, they had identified specific staff responsible for the function. Third, they had an active process for referring matters that might lead to a suspension or debarment to the appropriate agency official.
Consistent with the findings from our prior work, in a September 2012 management letter, the AOC Inspector General proposed that AOC develop a suspension and debarment process as a means to deal with “unscrupulous or ineffective contractors.” According to AOC officials, the agency declined to implement that recommendation, largely because without being subject to the FAR, AOC believed it could only debar contractors from doing business with AOC, and it was thought that the small number of actions anticipated would likely not justify the cost of developing a new process. However, we do not believe that this is a convincing reason. GAO, which is also a legislative branch agency, established a suspension and debarment process in 2012. For our process, we follow the policies and procedures on debarment and suspension contained in the FAR. Our process identifies new roles and responsibilities for existing offices and officials within the agency. As part of our process, we report on the list of excluded parties the names of all contractors we have debarred, suspended, or proposed for debarment. Although a debarment, suspension, or proposed debarment of a contractor taken by GAO would have mandatory application only to GAO, listing a contractor on the excluded parties list provides an indication to other federal agencies that they need to thoroughly assess whether the contractor is sufficiently responsible to be solicited or awarded a contract. In addition, one of the advantages of a suspension and debarment process is that an agency can address issues of contractor responsibility and provide the agency and contractors with a formal process to follow. When we shared our experience with them, officials at AOC did not identify any reasons why a similar approach could not be taken at their agency. With more than half of AOC’s budget authority currently being spent on contracting, acquisition clearly plays a central role in achieving AOC’s mission. AOC has taken initial steps to establish an efficient and effective acquisition function by issuing the AOC contracting manual. The manual will help promote full and open competition in AOC’s procurement process. AOC is taking action to improve the quality of its market research which, in turn, can help enhance competition. The agency only recently started to collect competition data to inform its progress, but AOC is not fully using these data to determine the extent of its overall competition efforts and identify areas where additional focus is needed to ensure the agency is obtaining competition to the maximum extent possible. AOC is using several tools to provide oversight and hold contractors accountable; however, it lacks suspension and debarment processes that could help further protect the federal government’s interests. Given the high-profile nature of AOC’s mission, the congressional clients AOC serves, and the buildings it is responsible for, such a process would help to ensure that contracts are awarded only to responsible sources. Implementing policies and procedures for suspension and debarment would build upon AOC’s existing accountability framework and would further foster an environment that seeks to hold the entities it deals with accountable.
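To make the pre-award screening described above concrete, the sketch below illustrates the kind of exclusion-list check a contracting officer performs before award. The data structures, names, and dates are hypothetical; SAM’s actual data formats and interfaces are not modeled here.

# Hypothetical pre-award responsibility screen: before award, check the
# prospective contractor against a list of excluded (suspended, debarred,
# or proposed-for-debarment) parties. Illustrative only; the real SAM
# exclusion data and interfaces are not modeled.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Exclusion:
    party: str
    action: str              # "suspension", "debarment", or "proposed for debarment"
    expires: Optional[date]  # None while the end date is undetermined

exclusion_list = [
    Exclusion("Acme Builders LLC", "debarment", date(2018, 6, 30)),
    Exclusion("Example Electrical Co.", "suspension", None),
]

def eligible_for_award(contractor: str, as_of: date) -> bool:
    # A contractor with an active exclusion should not receive an award.
    for entry in exclusion_list:
        if entry.party == contractor:
            if entry.expires is None or entry.expires >= as_of:
                return False
    return True

print(eligible_for_award("Acme Builders LLC", date(2016, 4, 1)))     # False
print(eligible_for_award("Capitol Masonry Inc.", date(2016, 4, 1)))  # True

An agency with its own suspension and debarment process would also add entries to such a list when it takes an action, rather than only reading the entries posted by other agencies.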
To further enhance the acquisition function, we recommend that the Architect of the Capitol take the following two actions: (1) explore options for developing a more robust analysis of AOC’s competition levels, including areas such as the trends in competition over time, the use of market research to enhance competition, and the types of goods and services for which competition could be increased; and (2) establish a process for suspensions and debarments that is suitable for the AOC’s mission and organizational structure, focusing on policies, staff responsibilities, and a referral process. We provided a draft of this report to AOC for review and comment. AOC provided written comments on the draft, which are reprinted in appendix II. AOC agreed with our findings, concurred with our recommendations, and noted it is taking steps to implement them. We also received technical comments from AOC, which we incorporated throughout our report as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Architect of the Capitol. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III. Our objectives were to assess (1) the extent to which AOC has developed and implemented acquisition policies and processes to guide its contracting function and (2) the tools used by AOC to monitor and address contractor performance. To address these objectives, we used data from AOC’s financial management system to identify contracts and orders with obligations during fiscal years 2013 through 2015. We selected a non-generalizable sample of 21 contracts and orders during this timeframe to obtain insights into AOC’s recent contracting practices. To narrow our focus on which contracts to include in our review, we identified contract actions for AOC’s largest and most complex projects, which the AOC defines as any project estimated to cost $50 million or more over the life of the project—the Cannon House Office Building Renewal Project, the Cogeneration Plant Project, the Capitol Dome Restoration Project, and the Refrigeration Plant Revitalization Project. As table 2 below shows, the sample represents a mix of large and small dollar awards and types of products and services procured to support various projects across AOC. We excluded any transaction related to real estate rental or electric power payments. We assessed the reliability of AOC’s financial management system data by (1) reviewing existing information about the data and the system that produced them and (2) comparing reported data to information from the contract files we sampled. Based on these steps, we determined that the data obtained from AOC’s financial management system were sufficiently reliable for the purposes of this review. To examine AOC’s policies that guide its acquisition function, we reviewed its contracting policies and procedures and compared them to what is outlined in the Federal Acquisition Regulation (FAR). While the FAR does not apply to the AOC, it reflects practices widely used throughout the executive branch of the federal government.
We focused our review on competition, acquisition planning, and market research, as our prior work has shown that these activities are critical to building a strong foundation for successful acquisition outcomes. We reviewed prior GAO reports to identify generally accepted contract management practices for market research, acquisition planning, and competition. We reviewed market research reports, acquisition plans, justifications and approvals for sole-source awards, solicitations, and independent government cost estimates for the contracts and orders in the sample. We analyzed these documents to determine the extent to which acquisition planning and market research were consistent with AOC’s guidance. To supplement information obtained from contract files within our sample, we met with contracting officers and contracting officer technical representatives to confirm our understanding of information in the contract files. We also interviewed officials from the Acquisition and Material Management Division on the policies and procedures that guide the acquisition function. To provide insights about the extent to which AOC competes contracts it awards, we used procurement data from AOC’s financial management system for fiscal years 2013 through 2015 to calculate its competition rate. Unlike other federal agencies, AOC does not report its procurement data to the Federal Procurement Data System-Next Generation (FPDS-NG), which is the government’s procurement database. To provide a basis of comparison, we calculated the governmentwide competition rate using data from FPDS-NG. For both AOC and governmentwide, we calculated the competition rate as the total dollars obligated annually on competitive contract actions as a percentage of total dollars obligated on all contract actions during fiscal years 2013 through 2015. This includes obligations on new contracts, orders, and modifications of existing contracts. Typically, FPDS-NG codes task and delivery orders from competitive single-award contracts as also being competed. In contrast, AOC classifies task and delivery orders derived from a competed single-award contract as not competed because the orders are not available for competition, according to an AOC official. We adopted AOC’s classification of these orders as not competed. As a result, our determination of AOC’s competition rate may be understated. However, AOC and GAO officials agreed the difference is likely not substantial given the small number of single-award contracts at AOC. We compared AOC’s efforts to assess its competition levels against acquisition best practices and Standards for Internal Control in the Federal Government, which call for continually tracking spending to gain insight about how resources are being used and using the information to assess how the agency’s objectives are being achieved. To determine how the AOC oversees contractor performance, we reviewed the same sample of 21 contracts and orders, reviewed AOC project management guidance, and interviewed relevant officials. Specifically, we used the sample to gain insight into how AOC oversees contractor performance and resolves any disagreements that may arise during the performance of the contract.
We reviewed documentation in the files such as relevant clauses, notices to comply, letters of concern, contractor performance reports, and other key documents used for monitoring and compliance purposes. We also reviewed AOC contracting policies and project management guidance on how the AOC monitors contractor performance. In addition, we reviewed prior GAO work to identify tools available to agencies to monitor and take actions to address or correct deficiencies regarding contractor performance. We also interviewed AOC contracting officials and contracting officer’s technical representatives about their experiences in monitoring contractor performance. We interviewed officials from the Planning and Project Management division, contracting officers, and contracting officer technical representatives to understand how they ensure compliance with the terms of contracts and resolve disagreements that may arise. We reviewed AOC’s contracting procedures to determine whether AOC had a process in place to address contractor performance and ensure it engages with responsible contractors, and we used previous GAO work on suspension and debarment as the basis for assessing AOC’s efforts. We conducted this performance audit from April 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Candice Wright (Assistant Director); Emily Bond; Lorraine R. Ettaro; Victoria C. Klepacz; Julia Kennon; Katherine S. Lenane; Jose A. Ramos; Beth Reed Fritts; Roxanna Sun; and Alyssa Weir also made key contributions to this report.
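The competition-rate calculation described in this appendix reduces to a simple ratio of obligated dollars. The sketch below (with hypothetical records) illustrates it, including the AOC convention of classifying task and delivery orders under a competed single-award contract as not competed.

# Competition rate = dollars obligated on competitive actions as a share of
# all dollars obligated in a fiscal year. The records below are hypothetical.
actions = [
    # (fiscal_year, dollars_obligated, competed, order_under_single_award)
    (2015, 5_000_000, True,  False),  # competitively awarded contract
    (2015, 3_000_000, False, False),  # sole-source contract
    (2015, 2_000_000, True,  True),   # order under a competed single-award contract
]

def competition_rate(records, fiscal_year):
    total = competed = 0
    for fy, dollars, was_competed, single_award_order in records:
        if fy != fiscal_year:
            continue
        total += dollars
        # AOC convention: orders under a competed single-award contract count
        # as NOT competed, since the orders themselves were not available for
        # competition.
        if was_competed and not single_award_order:
            competed += dollars
    return competed / total if total else 0.0

print(f"{competition_rate(actions, 2015):.0%}")  # 50% for these sample records

Under FPDS-NG coding conventions, the third record above would instead count as competed, which is one reason a rate computed from AOC’s data can differ from one computed from FPDS-NG.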
The AOC is responsible for the maintenance, operation, and preservation of the buildings and grounds of the U.S. Capitol complex, which covers more than 17.4 million square feet in buildings and 587 acres of grounds. In fiscal year 2015, Congress appropriated $600.3 million to fund AOC's operations, over half of which was used to procure various goods and services ranging from large projects like the restoration of the Capitol Dome, to routine custodial services. GAO was asked to review the AOC's contracting practices. This report examines (1) the extent to which the AOC has developed and implemented acquisition policies and processes to guide its contracting function, and (2) the tools used by the AOC to monitor and address contractor performance. GAO reviewed the AOC's acquisition policies, interviewed contracting officials, and reviewed a non-generalizable sample of 21 contracts and task or delivery orders with dollars obligated in fiscal years 2013 through 2015. The sample consists of a mix of high-value contracts for goods and services. The Architect of the Capitol (AOC) recently implemented a contracting manual that centralizes current law and regulations applicable to the AOC, as well as policies, orders and procedures. As a legislative branch agency, the AOC is not subject to the Federal Acquisition Regulation (FAR) which governs executive branch agencies; however, its manual draws on the FAR and covers topics central to the AOC's day-to-day contracting functions, such as acquisition planning, market research, and competition, all of which are key aspects of a sound acquisition process. In the 21 contracts and task orders GAO reviewed, AOC officials generally followed the policies in the contracting manual related to these critical functions—such as documenting justifications for the use of noncompetitive procedures. The AOC began to collect competition data in fiscal year 2012, but the agency only conducts a limited assessment of its efforts to achieve competition. The AOC manual states it is agency policy to promote competition, and federal internal control standards state that agencies should establish mechanisms to track and assess performance against their objectives. While the AOC monitors data to track the number of sole-source contracts awarded, other analyses are limited. GAO's analysis of the AOC's data found that the agency competed approximately 50 percent of its contract obligations for the past 3 fiscal years—compared to 65 percent for the overall federal government. By examining the factors driving the number of sole-source awards or level of competition across different product types, AOC may be better positioned to identify where additional management attention may be needed to maximize competition. The AOC uses a variety of approaches to monitor contractor performance on its projects, with contracting officers and their technical representatives being the primary officials responsible for providing oversight. The AOC uses a number of methods to address contractor performance problems, as shown in the figure below. While the AOC has tools for addressing poor performance on specific contracts, it does not have a suspension and debarment process in place that could bar irresponsible contractors from working for the AOC or provide notice to other government agencies. Past GAO work has shown that having suspension and debarment procedures is critical to ensuring that the government only does business with responsible contractors. 
GAO recommends that AOC explore options for developing a more robust analysis of its competition levels and establish a suspension and debarment process suitable to its mission and structure. AOC agreed with GAO's findings, concurred with the two recommendations, and noted it is taking steps to implement them.
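The liquidated damages clause discussed earlier in this report also reduces to simple arithmetic: a fixed daily amount, generally keyed to the contract’s dollar value, accrues for each day of delay. The sketch below illustrates the calculation; the value-based rate schedule is hypothetical, as the report notes only that observed daily rates ranged from $200 to $28,201.

# Liquidated damages: a fixed daily amount for each day of delay until the
# work is completed or accepted. The value-based schedule below is
# hypothetical; AOC's guidance keys the daily amount to contract value.
def daily_rate(contract_value):
    if contract_value < 500_000:
        return 200
    if contract_value < 5_000_000:
        return 1_500
    return 28_201

def liquidated_damages(contract_value, days_late):
    return daily_rate(contract_value) * max(days_late, 0)

# A contractor 30 days late on a hypothetical $2 million construction contract:
print(liquidated_damages(2_000_000, 30))  # 45000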
The goals of the Recovery Act are to preserve and create jobs and promote economic recovery; to assist those most impacted by the recession; to provide investments needed to increase economic efficiency by spurring technological advances in science and health; to invest in transportation, environmental protection, and other infrastructure that will provide long-term economic benefits; and to stabilize state and local government budgets, in order to minimize and avoid reductions in essential services and counterproductive state and local tax increases. The EECBG program was authorized in the Energy Independence and Security Act of 2007 (EISA), which was intended to move the United States toward greater energy independence and security and to increase the production of clean renewable fuels, among other things. The EECBG program was funded for the first time by the Recovery Act, through formula and competitive grants. Through the program, DOE allocates formula grants to the 50 states, the District of Columbia, and five territories; to city and county recipients based on their resident and commuter populations; and to Native American tribes based on population and climatic conditions. Applicants eligible for formula funding include cities or city-equivalent units of government, such as towns or villages, with populations of at least 35,000; counties, which include county-equivalent units of local government, such as parishes or boroughs, with populations of at least 200,000; and all Indian tribes and any Alaska Native village. A city or county is also eligible for direct funding if it is one of the 10 highest-populated cities or counties of the state in which it is located. The EECBG program has broad goals for energy-related outcomes. DOE encourages EECBG recipients to develop new and innovative approaches to meet the purposes of the program: to prioritize energy efficiency and conservation; develop projects in a cost-effective manner that will maximize benefits over time; stimulate the economy; leverage other public or private resources; promote energy market transformation; and, to the extent possible, develop programs that will provide sustainable and measurable energy savings, job creation, and economic stimulus benefits that will continue beyond the funding period. DOE announced the funding opportunity for interested applicants to submit applications for EECBG formula grant funding on March 26, 2009. DOE required applicants to submit an Energy Efficiency and Conservation Strategy (EECS) that described their strategy for achieving the Recovery Act's goals through the program. DOE had 120 days to review and approve or disapprove recipients' EECSs. DOE's funding announcement also required that recipients select projects from the 14 eligible activities identified in Section 544 of EISA, shown in table 1. The Recovery Act increased the importance of transparency and accountability in its use of funds. Accordingly, DOE requires grant recipients to report grant-level expenditure information and performance information, including hours worked, energy cost savings, and percent of work completed, as well as other figures, through a Web-based application called the Performance and Accountability for Grants in Energy (PAGE) system within 30 calendar days of the end of each quarter year. PAGE allows recipients to electronically submit and manage grant performance and financial information to DOE.
In addition, grant recipients are required to report through www.FederalReporting.gov within 10 days after the end of each quarter. This information is made available to the general public through the Recovery.gov Web site. Grant recipients are using EECBG funds primarily for three activities: energy-efficiency retrofits, financial incentive programs, and buildings and facilities programs. However, recipients have reported several challenges that have delayed their efforts to obligate and spend these funds. Grant recipients have allocated most EECBG funds to 3 of the 14 activities that DOE designated as eligible for EECBG funding in accordance with EISA. As shown in table 2, recipients have allocated nearly two-thirds (65.1 percent) of EECBG funds for three types of activities: (1) energy-efficiency retrofits (36.8 percent), which includes activities such as grants to nonprofit organizations and governmental agencies for retrofitting existing facilities to improve energy efficiency; (2) financial incentive programs (18.5 percent), which includes activities such as rebates, subgrants, and revolving loans to promote energy-efficiency improvements; and (3) energy-efficiency and conservation programs for buildings and facilities (9.8 percent), which includes activities such as installing storm windows or solar hot water technology. Some EECBG recipients are using their awards to fund projects in all three of these categories. For example, according to information reported through PAGE, New York City plans to use $31 million of its $80.8 million award to fund energy-efficiency retrofits at municipal buildings such as schools, courthouses, police precincts, and firehouses. It has also allocated $16.1 million to a financial incentive program that will provide loans to capital-constrained building owners for energy-efficient retrofits to residential, commercial, or industrial buildings. New York City designated another $2 million for buildings and facilities projects and will fund a retro-commissioning program at city facilities designed to identify efficiency measures and address anomalies in energy use, equipment schedules, and control sequences that may cause energy waste. As indicated in table 2, these three activities account for 48 percent of all the projects funded through the EECBG program, or 3,674 out of 7,594 total projects. According to DOE officials, a number of factors explain why energy-efficiency retrofits, financial incentive programs, and buildings and facilities programs account for such a large portion of all EECBG-funded projects. DOE officials told us that some recipients had previously identified needed improvements to their buildings and facilities. Recipients also told us that EECBG funds allowed them to undertake planned facilities projects that previously lacked the requisite funding. DOE officials told us that other recipients allocated EECBG funds to these projects to save money on future energy bills, and many recipients chose retrofit programs because these programs allowed them to use EECBG funds to engage their broader communities by retrofitting commercial and residential buildings, in addition to government facilities. DOE also encouraged recipients to pursue these projects.
In the EECBG program’s funding announcement, DOE asked recipients to “prioritize energy efficiency and conservation first as the cheapest, cleanest, and fastest ways to meet energy demand” and to “develop programs and strategies that will continue beyond the funding period.” Energy-efficiency retrofits and buildings and facilities projects meet both of these goals. Although financial incentive programs appear to be the second-highest-funded activity—receiving over 18 percent of EECBG funds—the data for this activity require further explanation. According to DOE officials, approximately 73 percent of activities classified as financial incentive programs are subgrants made by state governments to units of local government within the state. Such subgrant recipients can use these funds for any of the 14 eligible activities, such as lighting, retrofits, and transportation. The state awarding the subgrant can report details of the activities funded by subgrant recipients. However, the state government reports these details in narrative fields within DOE’s PAGE system while the primary activity type, reflected in table 2, is simply classified as “financial incentive programs.” High-level summary data on these activities may give the impression that over 18 percent of EECBG funds are allocated to financial incentive programs; however, nearly three-quarters of these funds may ultimately be used for any of the 14 eligible activities. Although DOE collects information on how these funds are ultimately used, these data are not readily available. DOE has obligated all EECBG funds to recipients, and recipients are beginning to obligate and spend these funds. The Recovery Act required DOE to obligate all funds to recipients by September 30, 2010, and DOE has done so. DOE staff told us that recipients have completed the planning stages of their projects and that they expect recipient spending will soon peak before leveling off as funds are expended. As of December 2010, recipients reported obligating approximately $1.7 billion, 57 percent of their EECBG budgets, and reported spending more than $655 million, approximately 23 percent of their EECBG budgets. DOE officials expect recipients’ spending to increase significantly in forthcoming reporting periods as work begins or increases on more projects. The table below shows the total funds budgeted, obligated, and spent and the percentages of budgeted funds spent for each eligible activity. Some recipients and others have identified several challenges that have delayed spending of Recovery Act funds under this newly funded EECBG program. DOE has made efforts to help recipients address some of these challenges, including launching a Technical Assistance Program and Solution Center to provide recipients with one-on-one assistance, an online resource library, training, webcasts, and a peer-exchange forum for sharing best practices and lessons learned. Because the EECBG program is relatively new—authorized by EISA in 2007 but not funded until the Recovery Act was passed in 2009—some DOE administrators had little previous experience with the program and its requirements. DOE’s Inspector General reported that some of the DOE staff assigned to review EECBG grant applications lacked financial assistance experience and failed to obtain the information necessary to issue awards, which required additional requests for documentation that further delayed awards.
The Inspector General also reported that the program lacked a permanent Program Director until April 2010. Some EECBG project officers—the DOE staff primarily responsible for overseeing and interacting with EECBG recipients—told us that they faced a steep learning curve during the initial months of the program, when they began working with recipients to resolve obstacles to applying for funds and address questions about meeting requirements and reporting outcomes. Several project officers compared managing the EECBG program to flying a plane while it is still being built.

In addition, several DOE project officers told us that some recipients’ efforts to effectively manage grants and spend funds have been complicated by staff and resource limitations. Some recipients lack the staff and resources needed to comply with EECBG and Recovery Act requirements. For example, a project officer told us that one county has only two staff members who are entirely responsible for managing the grant and meeting reporting requirements, in addition to their regular workload. The economic downturn exacerbated some recipients’ staffing challenges as budget shortfalls led to furloughs and hiring freezes. For example, one county reported to DOE that staffing shortages due to budget cuts had delayed its planned retrofit projects. Another recipient reported project delays due to a furlough that closed the city government and prevented the city council from approving its plans for a financial incentive program.

Some recipients also told us that they experienced local jurisdictional requirements that delayed their ability to spend Recovery Act funds. DOE’s Inspector General reported that local budget and procurement requirements prevented some recipients from obligating funds until DOE made the entire award amount available. In addition, several project officers told us that some recipients cannot initiate their proposed EECBG-funded projects until their spending decisions and budgets are approved by local officials, which can delay projects and spending for months or even longer in localities where local officials meet only quarterly or twice a year. In response to questions about spending delays, one recipient told us that although local procedures can be time-consuming, these procedures also protect tax dollars. Another recipient told us that DOE needs to take local procedures into consideration so that spending milestones are more flexible and realistic. Both representatives from NACo told us that although recipients are grateful for the opportunity to implement critical projects that they previously could not have funded, DOE has not adapted guidance and deadlines to the needs, timelines, or procedures of local governments and that this has created some challenges. Some project officers expressed similar views in our meetings with them, stating that the federal government in general lacked an appreciation of city and county government processes. Both USCM representatives also told us that although DOE’s guidance and support had improved significantly, this lack of understanding of how city governments worked had a negative impact on the success of the program.

Additionally, some recipients told us that meeting the reporting requirements for EECBG Recovery Act funds is time-intensive and that requiring recipients to submit similar information through PAGE and FederalReporting.gov makes the reporting process unnecessarily duplicative.
For example, one recipient told us that the required reporting for EECBG takes two to three times longer than reporting for other federal grants. Some recipients told us that the EECBG program’s reporting requirements were more cumbersome than those of other federal grant programs. One recipient with decades of federal grant experience told DOE’s Recovery Act help line that although DOE staff had been very helpful in providing information, the reporting requirements for EECBG Recovery Act funds were the most onerous he had experienced in 20 years of government work, despite regularly applying for millions of dollars in federal grants. Another recipient told us that his city canceled one of its planned projects, a geothermal system, because the reporting requirements would have been too burdensome. Similarly, project officers told us that some small recipients were so overwhelmed with the reporting requirements that they declined their awards. However, the vast majority of recipients accepted their awards.

Some recipients have also faced challenges in acquiring needed materials and products in a timely manner. EECBG recipients have created a large demand for energy-efficient materials. As a result, some of the materials and products needed to complete projects are out of stock or on back order, which can delay the implementation of these projects and the spending of funds allocated for them. For example, project officers told us that shortages of lighting and heating, ventilation, and air conditioning (HVAC) products have delayed some of the projects requiring these items. Both NASEO representatives told us that shortages were also an issue for solar and energy-efficient lighting projects.

When DOE announced funding opportunities through the EECBG program in March 2009, it stated that recipients must obligate all funds within 18 months of their effective award date and spend all funds within 36 months of their effective award date. These original time frames require recipients that were awarded grants in fall 2009—the majority of recipients—to obligate 100 percent of their funds by spring 2011 and to spend these funds by fall 2012. However, in April 2010, DOE determined that many recipients were not on a trajectory to obligate and spend all of their funds within this time frame. DOE sent letters to all EECBG recipients outlining new obligation and spending milestones in an effort to increase obligation and spending rates among recipients and ensure that all funds are spent before the 36-month deadline. DOE’s new milestones encouraged recipients to obligate 90 percent of their funds by June 25, 2010, spend 20 percent of their funds by September 30, 2010, and spend 50 percent of their funds by June 30, 2011. Officials from the Office of Weatherization and Intergovernmental Programs (OWIP), the DOE office that manages the EECBG program, told us that DOE and the Administration expressed an urgency to spend funds quickly, thereby creating jobs and stimulating the economy—primary purposes of the Recovery Act. These OWIP officials told us that many recipients found the milestone letters useful to facilitate local procurement processes and overcome other barriers to obligations and payments. DOE initiated Operation Clear Path to meet the September 2010 spending milestone by contacting 600 targeted recipients through telephone calls and helping these recipients develop strategies and tactics to accelerate their spending.
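The arithmetic behind these milestones is simple, but it plays out differently across award sizes and project mixes. The following sketch is a minimal illustration, using an invented award and invented obligation and spending figures rather than actual DOE or recipient data, of how a recipient's position against the revised milestones could be checked; the milestone dates and targets are those described above.

```python
from datetime import date

# DOE's revised milestones (due date, measure, share of award), per the
# April 2010 letters described above.
MILESTONES = [
    (date(2010, 6, 25), "obligated", 0.90),
    (date(2010, 9, 30), "spent", 0.20),
    (date(2011, 6, 30), "spent", 0.50),
]

def milestone_status(award, obligated, spent, as_of):
    """Compare a recipient's rates against each milestone due by as_of."""
    rates = {"obligated": obligated / award, "spent": spent / award}
    return [(due, measure, target, rates[measure], rates[measure] >= target)
            for due, measure, target in MILESTONES if due <= as_of]

# Hypothetical recipient: a $2.0 million award with $1.4 million obligated
# and $310,000 spent as of the September 30, 2010, milestone date.
for due, measure, target, actual, met in milestone_status(
        2_000_000, 1_400_000, 310_000, date(2010, 9, 30)):
    print(f"{due}: {actual:.0%} {measure} vs. {target:.0%} target -> "
          f"{'met' if met else 'not met'}")
```

This hypothetical recipient would miss both milestones that had come due, which, as discussed below, was the position of many actual recipients at the September 2010 checkpoint.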
An internal DOE newsletter reported that Operation Clear Path was yielding real gains by reaching out to recipients of grants over $3 million. DOE cited recent spending increases among targeted recipients as evidence that this approach was succeeding. For example, one city moved $600,000 from a revolving loan fund to a lighting retrofit, bringing the city’s spending up to 31 percent. Additional examples from other cities include $10 million spent to capitalize a revolving loan fund and $4 million spent on a lighting equipment purchase.

However, many recipients have had difficulty meeting these new milestones. According to DOE’s data, about 41 percent of recipients met DOE’s new milestone of spending 20 percent of EECBG funds by September 30, 2010 (see table 4). Furthermore, some project officers told us that DOE’s new spending and obligation milestones confused recipients. These project officers stated that some recipients were concerned about the consequences they would face for not being able to meet the new milestones. Although DOE officials told us that there were no repercussions for recipients that failed to meet these milestones, some project officers told us that some recipients did not understand this and were concerned that they might lose their funding. In some cases, project officers told us that they too were unsure of the consequences of recipients’ failing to meet these milestones. In addition, some representatives from NASEO told us that these new milestones were not consistent with the timelines set in the terms and conditions of award agreements that DOE had already approved. Some project officers told us they are concerned that DOE is sending conflicting messages by encouraging recipients to spend funds more quickly than the time frame recipients had agreed to in the terms of their grants. Further, DOE’s Inspector General reported in August 2010 that DOE’s new obligating and spending milestones “may increase the risks associated with ensuring compliance with regulatory requirements… as well as, maintaining effective financial control over the expenditure of funds.”

DOE has initiated a second round of telephone calls to targeted recipients, some of which may have been contacted during the first round of follow-up telephone calls, in an effort to increase spending to meet DOE’s new milestone of spending 50 percent of EECBG funds by June 30, 2011. DOE staff continue to work with recipients and project officers to share best practices, overcome challenges, and ensure that the EECBG program advances the goals of energy efficiency and job creation. However, it is still unclear whether this effort will overcome the challenges recipients face in obligating and spending EECBG funds and meeting DOE’s new milestones.

Both DOE and recipients are taking a variety of actions to provide oversight of EECBG funds, with some recipients providing much more rigorous oversight than others. DOE and recipients reported having experienced technical, staffing, and expertise challenges that hinder their ability to meet Recovery Act and program requirements. DOE is taking steps to address many of these challenges. DOE and recipients are using a variety of oversight actions for EECBG funds, such as program office monitoring and oversight by the DOE Chief Financial Officer (CFO) and the Inspector General. Recipients are also providing oversight, and the level of this oversight may vary by the recipient’s resources and the nature of the project.
DOE’s framework for oversight is its programwide monitoring plan, which was issued in August 2009 and revised in March and June 2010. According to DOE documentation, DOE developed the plan to, among other things, provide a structure for oversight of recipients’ procedures and processes, ensure consistent application of program and reporting standards, and provide clear and transparent guidelines. As outlined in the monitoring plan, OWIP is responsible for administering and overseeing EECBG funds. The office plans and budgets programmatic requirements and resources, develops standardized monitoring practices, provides expertise to review and analyze performance measures, and helps provide guidance and technical expertise to meet programmatic requirements, among other activities. Specifically, OWIP has issued guidance to help recipients and DOE program staff meet Recovery Act and program requirements such as Buy American, Davis-Bacon, and monitoring requirements, as well as to help ensure that funds are spent efficiently. OWIP also develops and hosts Web seminars on specific topics such as designing retrofit and appliance rebate programs.

To monitor recipients’ use of funds, DOE project officers act as the primary support to recipients, as well as liaisons between recipients and DOE. DOE’s monitoring plan directs project officers to gather key program data from recipients, provide information on training and technical assistance opportunities, and coordinate monitoring activities. In addition to project officers, DOE also uses other monitoring staff such as technical monitors, contract specialists, and staff accountants. The plan and guidance also identify goals and actions for three primary components of oversight and monitoring: (1) desktop monitoring, (2) on-site monitoring, and (3) worksite visits. Specifically:

Desktop monitoring: According to DOE, all EECBG grant awards are to be reviewed through quarterly desktop monitoring—remote monitoring conducted by the project officer. This includes examining recipients’ financial and other reports to assess progress and determine compliance with federal requirements, as well as examining recipients’ planned goals and objectives and the reporting and tracking of resources expended by the recipient and subrecipients. DOE also directs project officers and other monitors to use information gained from desktop monitoring for further monitoring and potential corrective action.

On-site monitoring: DOE conducts periodic on-site monitoring of grant recipients. On-site monitoring primarily occurs at the recipient and contractor levels and can also include field inspections of projects that have achieved milestones since the previous visit. Prior to on-site monitoring, monitors also review the previous findings of other DOE project officers and monitoring staff to identify deficiencies and determine what steps are being taken to resolve them. During on-site monitoring, project officers review the specific internal controls recipients have in place, such as segregation of duties, accounting standards and practices, and payment procedures. Monitoring staff also interview contractors to determine if follow-up procedures were used and deficiencies were corrected.

Project worksite reviews: DOE monitoring staff also visit some worksites during on-site monitoring to review the progress of activities at the project, facility, or building being completed.
Staff also review the quality of work performed and compliance with federal requirements, such as the Buy American provision. DOE officials stated that worksite reviews were conducted as determined necessary by the project officer, though project officers made an effort to visit at least one worksite during every on-site visit.

Overall, DOE officials stated that they planned to prioritize their monitoring based on the size of the grant rather than on the expected risk of the project. For example, all recipients with over $2 million in Recovery Act funds—which, as a group, represent about 70 percent of all EECBG Recovery Act formula funds—receive significantly more on-site monitoring than recipients of grants under $250,000. Table 5 shows the frequency of DOE’s planned monitoring activities.

To conduct and track monitoring activities, DOE monitoring staff make extensive use of a Web-based application, the PAGE system. For example, DOE monitoring staff use PAGE data to inform and target further monitoring actions such as on-site monitoring. Additionally, DOE has created a monitoring tool within PAGE that compiles the checklists used by project officers to determine recipients’ compliance with the program’s statutes and requirements. DOE guidance states that through the use of PAGE, monitoring staff can target recipients with low spending rates or reporting delays for additional follow-up to help ensure that funds are spent in a timely manner. Project officers also stated that they use financial data reported in PAGE to identify potential areas of concern for further analysis. Overall, DOE officials stated that they have almost met their desktop monitoring goals, with over 90 percent of their planned monitoring complete, and, based on their current rate, are on track to meet on-site monitoring goals (see table 6). DOE officials stated that they do not currently track the number of worksite reviews, but that all initial on-site monitoring visits also include a worksite review.

In addition to desktop monitoring and site visits, several DOE officials reported almost meeting their planned staffing goals for project officers and other monitoring staff. Specifically, as of December 20, 2010, these DOE officials reported that approximately 60 project officers, 20 contract specialists, and 35 full-time equivalents (FTE) of contractor support were assigned to administer the EECBG program. These staff members are located in Washington, D.C.; Oak Ridge, Tennessee; Golden, Colorado; and Las Vegas, Nevada. As of December 2010, DOE officials stated they had met their staffing goals with the exception of one field office, which had recently lost several of its staff. These DOE officials said they hoped to fill those positions soon.

DOE monitoring also includes specific financial management oversight in DOE field offices, as well as independent oversight through DOE’s Inspector General. Specifically, financial management oversight is conducted by the DOE Office of the CFO in the Golden, Colorado, and Oak Ridge, Tennessee, field offices. In addition, the project officers in the Las Vegas, Nevada, field office also oversee disbursements of DOE funds. Data on the disbursement of funds are then made available to DOE employees, including headquarters employees and the CFO.
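A minimal sketch of the desktop-monitoring triage described above, in which staff flag recipients with low spending rates or missing quarterly reports for follow-up. The record layout, field names, and the 20 percent threshold are assumptions made for illustration; they are not drawn from the actual PAGE implementation.

```python
# Hypothetical extract of quarterly grant records; all values are invented.
recipients = [
    {"name": "City A", "award": 250_000, "spent": 20_000, "report_filed": True},
    {"name": "County B", "award": 2_500_000, "spent": 1_600_000, "report_filed": True},
    {"name": "Town C", "award": 120_000, "spent": 15_000, "report_filed": False},
]

LOW_SPEND_THRESHOLD = 0.20  # assumed cutoff for a "low spending rate"

def flag_for_follow_up(records):
    """Return recipients whose spending rate is low or whose report is missing."""
    flagged = []
    for r in records:
        reasons = []
        if r["spent"] / r["award"] < LOW_SPEND_THRESHOLD:
            reasons.append("low spending rate")
        if not r["report_filed"]:
            reasons.append("missing quarterly report")
        if reasons:
            flagged.append((r["name"], reasons))
    return flagged

for name, reasons in flag_for_follow_up(recipients):
    print(f"{name}: {', '.join(reasons)}")
```

In practice, flags like these would feed a project officer's quarterly desktop review rather than trigger action automatically.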
DOE’s Inspector General also provides oversight of the EECBG program and has issued a report on the status of implementation of EECBG, as well as on the management controls over the development and implementation of PAGE. Along with issuing such reports, DOE’s Inspector General investigates allegations of fraud, waste, and abuse related to the EECBG program. To date, DOE’s Inspector General has received four complaints regarding the use of EECBG funds and is handling two of them through ongoing investigations, while the other two were sent back to the program office for resolution. A DOE Inspector General official stated that the complaints are related to potential conflicts of interest and potential ethics violations. Additionally, while not investigating EECBG directly, DOE’s Inspector General recently issued a management alert related to DOE’s State Energy Efficient Appliance Rebate Program. Some recipients also used EECBG funds to help fund rebate programs. In its alert, DOE’s Inspector General identified an incident in which a consumer in Georgia was able to purchase water heaters at a store, return them, and still inappropriately apply for and receive a rebate through the state’s rebate program. DOE’s Inspector General noted that the incident showed that if similar process vulnerabilities exist in other jurisdictions, the rebate program could be exposed to abusive practices on a broad scale.

The extent of recipients’ oversight through the monitoring of Recovery Act funds may depend on the individual recipient’s resources and the nature of the project. Broadly stated, federal law, federal regulations, and DOE guidance require that recipients comply with applicable laws, including provisions in the Recovery Act, such as Buy American, and other federal requirements, including regulatory and procedural requirements. Recipients are responsible for informing subrecipients of all applicable laws and other federal requirements and for ensuring their subrecipients’ compliance with program, fiscal, and audit requirements. The Single Audit Act, as amended, requires recipients passing on the funds to subrecipients to monitor the subrecipients’ use of the federal funds through site visits, limited scope audits, and other means. DOE did not require recipients to conduct any specific oversight actions but instead released guidance on October 26, 2010, identifying a number of best practices for recipient monitoring to help ensure their own and their subrecipients’ compliance with federal requirements. Further, DOE has provided additional resources regarding the monitoring of subrecipients, such as workshops and guidance.

We found that the recipients we contacted are using various methods to meet applicable laws and other federal requirements, and recipients, DOE officials, and project officers stated that some recipients are providing substantially more rigorous monitoring than others. For example, although recipients are required to monitor subrecipients, one recipient, in response to our questions, did not indicate that it monitored subrecipients and instead stated that it included terms in its contracts requiring contractor adherence to applicable federal requirements. Other recipients reported actively monitoring funds but using relatively limited techniques such as reviewing invoices and other reports submitted by vendors.
DOE officials acknowledged that many recipients are resource constrained, limiting their ability to monitor and ensure compliance with applicable federal requirements, and that DOE would need to evaluate on a case-by-case basis whether a grant recipient’s subrecipient monitoring system is sufficient to meet compliance requirements. However, officials stated that DOE does not gather specific information on recipient monitoring practices except during on-site visits. Additionally, because fewer than 25 percent of grants under $1 million are to receive on-site visits, DOE does not have specific information on monitoring for many recipients. DOE officials stated that they focused monitoring efforts on recipients of larger grants because they were more likely to have subrecipients. DOE officials also said that they adopted standard, well-accepted audit sampling practices to achieve the most effective coverage in verifying monitoring practices and conducted additional on-site reviews of recipients that had demonstrated cause for concern, such as not filing quarterly reports. Officials stated that if they identified a concern during an on-site visit, they would work with the recipient to develop an alternative system to monitor subrecipients.

While some recipients reported less rigorous monitoring activities, in other instances, recipients reported having detailed monitoring practices with multiple components. For example, one recipient reported collecting monthly progress reports, which included financial reports, from all subrecipients, vendors, and contractors, as well as conducting site visits. Additionally, the recipient reported that its auditor and grant services staff also conducted internal audits of all Recovery Act programs. Another recipient reported using a variety of monitoring practices such as inspections, site visits, and regular billing of vendors. The recipient’s staff also reported that they had recently hired a grant compliance analyst who would administer a subrecipient monitoring plan in the coming year and that they planned to use third-party oversight of site visits in the same period.

Some grant sizes and project types may not lend themselves to the same level of oversight as others. DOE project officers stated that some larger projects included specific monitoring requirements in the contract to help ensure that the project met key goals, including achieving project outcomes and ensuring that funds were used efficiently. For example, project officers stated that some recipients were able to draw on the expertise of contractors that perform energy-savings work to compare estimated with achieved energy savings. Other, smaller contracts, however, were not large enough to include these types of oversight. In another instance, one recipient noted that because its subrecipient grants were so small, ranging from $8,000 to $55,000, it did not see the need to conduct internal and financial audits, but it did report conducting on-site visits on some projects.

DOE officials and some recipients reported experiencing technical, staffing, and expertise challenges that hinder the ability of both DOE and recipients to provide oversight to ensure that Recovery Act and program requirements are met, but DOE is taking steps to address many of these challenges. Some DOE project officers and recipients reported experiencing technical challenges using PAGE.
For example, project officers told us that monitoring was challenging because of the extensive amount of time that they spent helping recipients use the PAGE system, which recipients found cumbersome and difficult to use. In particular, some project officers said that they provide ongoing support during reporting cycles, in some cases walking recipients through the reporting process and helping fix reporting errors. For example, some DOE project officers in one field office told us that during the reporting period—the last week of the quarter and the following 2 months—project officers spent approximately 70 to 85 percent of their time helping recipients fill out their PAGE and federal reporting forms. Other project officers in the same office also told us that recipients often cannot figure out how to enter data into PAGE and do not understand the reporting system. In another field office, a DOE project officer stated that almost all of the PAGE reports for each quarter were initially rejected for issues such as missing or incorrect data entries and that recipients reported having difficulties with entering data in every field. Some recipients have also reported difficulties using PAGE—for example, though not asked about PAGE issues specifically in our structured questions, one recipient stated in a response regarding challenges faced that “PAGE reporting is the biggest challenge,” while another recipient noted that “technical glitches” in PAGE made reporting more challenging.

Incorrect data entered into PAGE have also limited project officers’ ability to monitor recipients. For example, some project officers stated that some recipients continued to enter incorrect information in PAGE for data fields such as total hours worked and estimated energy savings—key project measures used to help gauge project progress and status—in part due to difficulties recipients had understanding PAGE data fields. Specifically, some project officers stated that some PAGE data field names were difficult to interpret, confusing some recipients. Some project officers also stated that recipients entered incorrect job information, using incorrect job calculation formulas, or entered incorrect energy data, and that DOE project officers had to spend time locating the errors and working with recipients to ensure that the correct data were entered.

In previous work looking at EECBG, as well as the State Energy Program and Weatherization Assistance Program, we have reported on both DOE monitoring staff and recipients having problems with PAGE that hindered DOE’s ability to administer the program. Specifically, we previously reported that some DOE staff and recipients noted that the time and effort that project officers spent with recipients to help them understand and navigate the PAGE system significantly increased the administrative burden of the program. In reviewing PAGE’s development process, DOE’s Inspector General found significant issues, noting that DOE did not seek input from grant recipients in designing PAGE because it had to be developed and implemented so quickly. The Inspector General’s report noted that user input significantly increases the likelihood that a system will meet user needs and also helps avoid rework costs due to a lack of functionality. Similarly, we previously reported on the importance of eliciting users’ needs to identify and prioritize system requirements.
Without gathering this information, programs and systems may not meet the needs of their users. DOE officials stated that they have implemented a number of improvements to both the administration of PAGE and the system itself to help address user concerns and that they are currently implementing more changes. For example, DOE established a PAGE hotline to assist users and gather their feedback and also provided training videos for recipients using PAGE. DOE officials also stated that they recently implemented a new software tool to gather and prioritize feedback from large numbers of recipients to better address users’ major concerns. Additionally, DOE officials said that, among other improvements, they plan to incorporate in the next few months a new interface, designed to improve the system’s usability, for DOE staff to review data reported in PAGE. DOE officials stated that these improvements will take time to implement and have to be carefully considered so as not to negatively affect other PAGE functions. For example, PAGE is also used for recipient reporting by DOE’s Weatherization Assistance and State Energy programs. DOE officials also stated that they have had to adapt their process to handle the large number of PAGE users, in contrast to the previous system. Specifically, DOE officials stated that approximately 50 state users and a limited number of federal staff used the previous system, while PAGE had 4,292 recipient and 411 federal users as of December 8, 2010.

Some DOE officials, monitoring staff, and grant recipients, as well as all stakeholders we contacted, stated that some recipients are also encountering staffing and expertise challenges that have limited their ability to monitor the use of Recovery Act funds and ensure the effective and efficient use of funds while also meeting Recovery Act and other federal requirements. As noted previously, some project officers stated that some recipients may lack sufficient resources and staffing for program oversight such as monitoring. Some DOE officials also acknowledged the significant recipient workload associated with monitoring. These officials stated that the biggest overall challenge to monitoring was determining the right amount of information to collect from recipients so as to ensure recipient compliance with applicable federal requirements without overburdening recipients with reporting requirements. These officials also stated that they took steps to decrease the reporting burden on recipients, including decreasing certain monitoring requirements. Further, these DOE officials stated that some recipients said that they might have trouble ensuring compliance with program requirements due to limited staffing and were focused on Recovery Act requirements such as Davis-Bacon and Buy American. Finally, the NASEO and USCM representatives we spoke with stated that some recipients have expressed concerns about complying with Davis-Bacon requirements, especially with limited staff and a large number of projects. For example, both USCM representatives stated that while some city and county projects already carried Davis-Bacon or Buy American requirements before the Recovery Act, these were typically much larger projects, in excess of $30 million, such as transportation projects, with dedicated resources to ensure compliance with these requirements.
Some DOE monitoring staff and the NASEO and USCM representatives stated that some recipients found it difficult to comply with federal laws, including Recovery Act provisions such as those for financial management monitoring and Davis-Bacon, because they lacked previous experience with those requirements. Additionally, some DOE monitoring staff stated that some recipients have limited experience with federal grants and faced a steep learning curve in implementing projects, and that this inexperience has limited monitoring efforts, making it more difficult to ensure that funds were spent properly. For example, several DOE staff in one field office stated that, in particular, recipients that did not have previous experience preparing a Single Audit Act report tended to have more trouble meeting EECBG requirements than others. Additionally, some DOE project officers stated that some smaller recipients with fewer staff did not have specific expertise in energy management and that this made it more difficult to monitor programs. Further, these DOE project officers noted that certain project types, such as energy loan programs, may have fewer tangible outcomes than others, such as building retrofits, and are thus more difficult to monitor, especially without specific staff expertise. Certain Recovery Act requirements can also prove more difficult than others to ensure compliance with. For example, both NASEO representatives stated that some recipients found compliance with the Buy American provision difficult, especially for items with multiple components such as air conditioners.

DOE is taking steps to address staffing and expertise challenges by expanding support to recipients. Some DOE officials and the NASEO and USCM representatives stated that DOE’s administration of EECBG was limited during the program’s early stages and that this hindered early program administration, including oversight. Overall, the NASEO and USCM representatives stated that while the DOE program office has improved significantly in providing support such as guidance to recipients, the program was significantly affected by these early delays. DOE has gradually expanded the amount of support provided to recipients. For example, while DOE project officers continue to provide individualized support to recipients, DOE has also developed a Technical Assistance and Solution Center to provide information related to monitoring recipient activities such as project implementation. Through the Technical Assistance and Solution Center, DOE has provided near-daily Web seminars on specific energy topics while also helping connect recipients with specific technical expertise. Further, DOE continues to issue programwide guidance to help recipients comply with Recovery Act and program requirements. For example, on January 4, 2011, DOE issued guidance to help recipients determine the eligibility of recipient programs for Recovery Act funds. DOE has also hosted seminars with recipients and subrecipients to help identify and share monitoring best practices. Finally, DOE is working with the Office of Management and Budget (OMB) to update the EECBG program requirements in the Compliance Supplement to OMB Circular No. A-133, which provides guidance for implementing the Single Audit Act.
EECBG program recipients reported using EECBG grant funding to develop projects designed to achieve a variety of benefits in line with Recovery Act and program goals, including reducing total energy use and increasing energy savings for local governments and residents. For example, some recipients we contacted reported anticipating energy savings and reduced overall energy usage from such projects as powering the electrical needs of a large city park with solar panels; maximizing the use of daylighting in government buildings; installing more energy-efficient technology in households and businesses; replacing convention center light fixtures with light-emitting diode (LED) bulbs to reduce energy usage; and updating and remodeling a 40-year-old public building with more energy-efficient products, including new windows and doors, and HVAC systems.

Furthermore, DOE officials told us that some recipients are already reporting achieving benefits consistent with Recovery Act and program goals. For example, DOE reported that according to initial recipient self-reporting through December 2010, EECBG recipients have upgraded more than 10,000 buildings, installed 40,000 efficient street lights, and upgraded more than 100,000 traffic signals. DOE has put some examples of program successes on its Web site. For example, a small town in southwestern Wyoming used its EECBG funds to convert its streetlights to LED fixtures. Town officials reported better lighting quality and visibility, less light pollution, and lower energy use that has reduced lighting-related energy costs by almost two-thirds. In North Carolina, local officials used EECBG funds to convert an abandoned grocery store into an energy-efficient community training center and classroom. By installing a more energy-efficient roof, new insulation, and HVAC systems, local officials said they anticipate achieving substantial energy savings. Another recipient in Oklahoma used EECBG funds to purchase five wind turbines. According to the recipient, the new wind energy technology has offset the electrical costs for all town-owned buildings, and the recipient anticipates saving $24,000 annually.

However, DOE officials have experienced challenges in assessing whether the EECBG program is meeting Recovery Act and program goals for energy savings because most recipients do not measure energy savings by collecting actual data, and several factors affect the reasonableness of energy-savings estimates. Additionally, while DOE officials say they have anecdotal examples of program successes, DOE lacks actual programwide data on energy savings. DOE guidance requires that recipients report impact metrics—which include energy savings, energy cost savings, renewable-energy generation, and emissions reductions—on a quarterly basis and verify cumulative totals when grants are closed out, but it does not require that these impact metrics be based on actual, as opposed to estimated, data. Furthermore, according to some DOE officials, there have been only a few opportunities for recipients to collect actual energy-savings data because in most cases actual data are only available after a project has been completed, and recipients are just beginning to complete projects. These officials said that instead of collecting actual energy-savings data, most recipients report estimates to comply with program reporting requirements.
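To illustrate the kind of estimate recipients report in lieu of actual data, the sketch below works through an expected-savings calculation for a streetlight conversion like the Wyoming example above. Every figure (fixture count, wattages, operating hours, and the electricity rate) is invented for illustration; actual recipients would rely on contractor- or engineering-supplied numbers or a DOE estimating tool.

```python
# Hypothetical inputs for an LED streetlight retrofit estimate.
fixtures = 400           # streetlights converted (assumed)
old_watts = 250          # existing fixture wattage (assumed)
new_watts = 90           # replacement LED wattage (assumed)
hours_per_year = 4_100   # assumed dusk-to-dawn operating hours
rate_per_kwh = 0.10      # assumed electricity price, in dollars

# Estimated savings = demand reduction (kW) x annual operating hours.
kwh_saved = fixtures * (old_watts - new_watts) / 1000 * hours_per_year
dollars_saved = kwh_saved * rate_per_kwh
share_reduced = (old_watts - new_watts) / old_watts

print(f"Estimated annual savings: {kwh_saved:,.0f} kWh "
      f"(${dollars_saved:,.0f}), a {share_reduced:.0%} reduction in use")
```

Under these assumptions the estimate comes to 262,400 kWh and about $26,240 a year, a 64 percent reduction in lighting energy use, roughly in line with the "almost two-thirds" cost reduction the Wyoming town reported. The result is highly sensitive to the assumed operating hours and electricity rate, which is one reason project officers are asked, as discussed next, to check whether reported figures fall within an expected range.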
As part of the quarterly review process, DOE’s monitoring plan appendix requires project officers to assess whether recipients’ estimates of impact metrics, including energy savings, are reasonable and to determine whether grant recipients have an adequate procedure in place to collect, verify, and report these data. Project officers are further instructed, as part of this quarterly review process, to review recipients’ reported impact metrics and determine whether they are within a range of values that DOE would expect. For example, DOE’s EECBG desktop review guidance instructs project officers to consider rejecting recipients’ quarterly reports in cases where the estimates entered by recipients appear too low or too high.

According to DOE officials, several factors affect the extent to which estimates are reasonable. One factor is the variance in the type and robustness of the methodologies that recipients use to develop estimates. DOE guidance allows recipients flexibility in how they estimate impact metrics, such as energy savings. For example, DOE’s Program Notice 10-07B requires that recipients “take care to account for other determinant factors (e.g., weather variation)” when they develop estimates. DOE officials said that they prefer that recipients rely on certain estimation methods and tools that DOE would expect to produce more accurate and sophisticated estimates of grant project results—such as contractor- or engineering-supplied estimates of project savings or estimates calculated with the Environmental Protection Agency’s Portfolio Manager tool. For recipients that are not able to use methods or tools specific to their grant projects, DOE has provided tools that DOE officials believe are capable of calculating “high-level” estimates of energy-related impact metrics. Since the beginning of the EECBG program in 2009, DOE has updated and refined its estimating tool and underlying assumptions several times and may update the tool and assumptions in the future as well. While DOE officials said they believe that estimates calculated with the current version of this tool are more accurate than those calculated with previous versions, DOE does not require that recipients who use its estimating tool use the most updated version when calculating and reporting estimates. Consequently, some recipients may use DOE’s earlier, less refined tool to develop estimates of energy-related impact metrics.

Another factor that affects DOE officials’ ability to assess reasonableness is that the agency recommends, but does not require, that recipients report the methods or tools used to calculate estimates, so DOE officials do not know which recipients are using older versions of DOE’s estimation tool or other methods to estimate energy-related impact metrics. Knowing which method or version of a tool recipients used to calculate estimates may be more important for smaller grant recipients, as DOE officials told us they believe that smaller grant recipients who do not have the expertise at hand or the resources to hire energy-efficiency experts are more likely to use a DOE-supplied estimation tool. Without knowing the methods recipients are using to estimate energy-related impacts, DOE cannot identify instances where the method, along with the associated assumptions used in calculating estimates, may need to be more carefully reviewed.
A third factor that DOE officials said can affect the development and reporting of reasonable estimates is the level of expertise available to recipients to develop impact metrics, as this can vary. According to DOE officials, some recipients are receiving federal grants for the very first time through the EECBG program, which was implemented in March 2009. Some communities may have limited (if any) direct experience with federal grant program requirements and likewise have limited experience in measuring and reporting impact metrics. As a result, DOE officials reported that some recipients have had difficulty developing their estimates. In addition, DOE officials said that some recipients may also have made errors in reporting their estimates of energy-related impact metrics. For example, DOE officials said that some recipients may be incorrectly aggregating impact metrics from multiple project sites and, as a result, producing errors in the estimated energy-related impacts.

In December 2010, DOE issued guidance to recipients outlining the steps DOE and recipients must take to formally close out EECBG grants. As part of this process, project officers are required to review recipients’ final quarterly performance report and federal financial report for completeness and reasonableness. However, this guidance does not specify how project officers should assess reasonableness. DOE officials said that the agency is in the process of drafting internal closeout guidance for its project officers that will outline the procedures the project officers must follow to formally close out a grant. According to DOE officials, the upcoming internal closeout guidance will recommend, but not require, that recipients confirm they are reporting the most accurate data available at the time of closeout. DOE officials told us that once a grant is closed out, the agency does not require and cannot legally obligate the recipients to capture additional energy-savings data.

For the few grants that have been closed, DOE expects its project officers to rely on their own expertise to assess the reasonableness of the estimates, according to DOE officials. For example, one project officer told us that as a project is completed, he determines whether the reported energy savings are reasonable by comparing the estimate at completion to the recipient’s original estimates, saying that he expects the estimates to be comparable unless the recipient has changed the scope of the project or received updated information from vendors or engineers. Even if the scope of the project has not changed and no new information has been provided, differences may be observed if recipients used an earlier version of the DOE tool to prepare an initial estimate and then used the updated DOE tool to compute the energy savings at the end of the project. Given that DOE may not know what method the recipient used to estimate energy savings, project officers may not be able to determine the level of review necessary to ensure reasonable reporting. Additionally, some of the project officers noted that they do not have the technical expertise to independently verify energy-savings estimates.

To determine overall program outcomes, DOE officials told us they plan to conduct a programwide evaluation, measurement, and verification study after the end of the EECBG program, which will be designed to measure and report the program’s energy savings and cost savings. However, this effort is still in the design phase.
As part of this study, the officials said that they would need to capture more than a year’s worth of data to account for weather and seasonal variations that affect energy needs. While DOE collects some information regarding expected energy savings, officials noted that they were uncomfortable reporting these numbers before completing the study, which may not be completed until 2 to 3 years after the projects are finished.

To meet our mandate to comment on recipient reports, we have continued monitoring the data that recipients reported for Recovery.gov, including data on jobs funded. This time we focused our review on the EECBG recipient data in addition to the national data. Analyzing these data can help in improving the accuracy and completeness of the Recovery.gov data and in planning analyses of recipient reports. Overall, this round’s results were similar to those we observed in previous rounds. According to Recovery.gov, as of January 30, 2011, recipients reported on over 209,400 awards across multiple programs, indicating that the Recovery Act funded approximately 585,654 jobs during the quarter beginning October 1, 2010, and ending December 31, 2010. This included 2,051 prime reports associated with EECBG recipients. As reported by the Recovery Accountability and Transparency Board, job calculations are based on the number of hours worked in a quarter and funded under the Recovery Act—expressed in FTEs.

Using the sixth reporting period data, we continued our monitoring of errors or potential problems by repeating the analyses and edit checks reported in our previous reports. We reviewed 71,643 prime recipient report records from all programs posted on Recovery.gov for this sixth round. This was, for the first time, a decrease of 6,068 prime recipient reports, or about an 8 percent drop, from round five. The size of this decline in reporting was somewhat mitigated by the number of prime recipients reporting for the first time in round six. In round five, 7,465 recipients identified that round as their final report and did not report in round six. This was more than three times the number of prime recipients reporting for the first time in round six and suggests that further decreases in the number of recipients reporting in the next quarter are likely. For our analyses, in addition to this sixth round of recipient report data, we also used all the previous rounds of data as posted on Recovery.gov as of February 2, 2011.

In examining recipient reports, we continued to look for progress in addressing limitations we noted in our prior reports. In those prior rounds, we reviewed data logic and consistency and reviewed unusual or atypical data. Data logic and consistency provide information on whether the data are believable, given program guidelines and objectives; unusual or atypical data values indicate potential inaccuracies. As with previous quarterly report rounds, these reviews included (1) the ability to link reports for the same project across quarters and (2) concerns in the data logic and consistency, such as reports marked final that show a significant portion of the award amount not spent. We continued to see minor variations in the number or percent of reports appearing atypical or showing some form of data discrepancy. For example, we continued to find a small number of prime recipient reports for which there were potential linkage issues across quarters.
For this latest round, there was a slight increase, from 1.5 percent to 2.2 percent, in the number of prime reports appearing across all quarters that showed a skip in reporting for one or more quarters. This may affect the ability to track project funding and FTEs over quarters. The number of reports marked “final” for which there appeared to be some discrepancy, such as reports marked “final” but for which project status was marked as less than 50 percent completed, continued to be quite small and was unchanged from the previous round.

We continued to examine the recipient reports’ agency review flag field as part of our examination of data logic and consistency, since we have noted inconsistencies between agencies’ accounts of their review process and the data shown in that field. Prime recipient report records include a review flag indicating whether or not a federal agency reviewed the record during the data quality review time frames. Prior analyses suggested that, for some agencies, the data in this field might not correctly reflect the extent of their review process. However, this did not seem to be the case for the EECBG program. EECBG program data in this field for this sixth round showed that 93 percent of the prime recipient reports were marked as reviewed by DOE, which was generally consistent with accounts of agency officials about their review process. However, we continue to observe some inconsistency when another data field on recipient reports, which shows whether or not a correction was initiated, is considered in conjunction with agency and recipient review flags. A correction could be initiated by either the prime recipient or the reviewing agency. Logically, one might expect that if a correction was made, it would have been initiated by a reviewer, and therefore the review flag should also be set to “yes.” In this sixth round, as in the prior round, 10 percent of all prime recipient reports for all programs had this correction flag set to “yes” even though the review flags indicated that neither the agency nor the prime recipient had reviewed those reports.

As part of our focus on EECBG recipient reports for this sixth round of reporting, we examined reported FTE data since they can provide insight into the use and impact of Recovery Act funds. Recipient reports of FTEs, however, cover only direct jobs funded by the Recovery Act. They do not include the employment impact on suppliers (indirect jobs) or on the local community (induced jobs). Our analyses of EECBG reporting for the last five quarters showed a steady increase in the number of FTEs reported in each quarterly reporting period. As shown in figure 1, there was also a similar steady increase in the percent of EECBG recipients reporting funding at least a partial FTE with Recovery Act funds. (For further discussion of FTE data limitations, see GAO, Recovery Act: Recipient Reported Jobs Data Provide Some Insight into Use of Recovery Act Funding, but Data Quality and Reporting Issues Need Attention, GAO-10-223 (Washington, D.C.: Nov. 19, 2009), 6-9.) The quality of the FTE data, according to information gathered from DOE officials in headquarters and DOE project officers in the field, continues to be addressed by DOE officials at all levels. As we noted in September, some confusion may have existed about the acceptability and use of some methods for calculating FTEs over the course of the reporting periods.
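Under the OMB guidance in effect for these reporting rounds, a quarterly FTE figure is computed as the hours worked and funded by the Recovery Act divided by the hours in one full-time quarterly schedule. The sketch below, a minimal illustration rather than DOE's or any recipient's actual method, applies that definition under the common assumption of a 40-hour week and a 13-week quarter; the worker hours are invented.

```python
FULL_TIME_HOURS_PER_QUARTER = 40 * 13  # assumes 40-hour week, 13-week quarter

def quarterly_fte(recovery_act_hours):
    """FTEs funded for the quarter: Recovery Act-funded hours worked,
    divided by the hours in one full-time quarterly schedule."""
    return recovery_act_hours / FULL_TIME_HOURS_PER_QUARTER

# Hypothetical example: two full-time retrofit workers for the whole quarter
# plus one inspector charging 10 hours per week to the grant.
hours = 2 * FULL_TIME_HOURS_PER_QUARTER + 10 * 13
print(f"{quarterly_fte(hours):.2f} FTEs")  # 1,170 / 520 = 2.25
```

Errors in the denominator alone can swing the figure substantially; dividing the same hours by an annual full-time schedule, for instance, would yield a value one-fourth this size.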
Our caution in interpreting these figures is also based on some irregularities and inconsistencies we observed in our analyses of the FTE data across quarters and in the relationship of the hours worked, as reported to DOE by recipients, to the FTE values the recipients directly reported to FederalReporting.gov. DOE officials indicated that they continue to assess compliance with, and encourage recipients to follow, the DOE and OMB guidance on how to correctly report FTEs. Moving forward, as these issues in reporting methods are addressed, the comparability and reliability of the figures may improve.

Each quarter, DOE performs quality assurance steps on the data that recipients provide to FederalReporting.gov, including checks that are performed centrally across all of its Recovery Act programs and reviews done by EECBG project officers at the program level. Based on these reviews, DOE officials reported that most recipients of Recovery Act funds have reported to FederalReporting.gov in previous rounds and now understand the reporting process, so reporting has proceeded more smoothly. As in previous rounds, DOE performed several checks of the data centrally as information became available. For example, officials compared the amounts recipients reported as funds awarded with agency internal records. They also compared jobs data from DOE’s PAGE reporting system with FTEs reported to FederalReporting.gov. When discrepancies were found, project officers were instructed to contact recipients to make the necessary corrections. DOE followed up with grant recipients who did not report to FederalReporting.gov. For the sixth round, DOE reported 36 recipients to OMB as not in compliance. Of these, 34 are EECBG grant recipients. Several are tribal recipients that are in remote locations where reporting online is particularly challenging.

EECBG project officers’ efforts also helped ensure the quality of information recipients reported to FederalReporting.gov. For example, one group of project officers we interviewed reported spending a large portion of their time helping recipients complete reporting requirements and ensuring the quality of reports. Project officers cited helping recipients understand terminology, such as distinguishing between vendors and recipients of subawards. They reported taking steps that included following up when large increases in job numbers were reported, when reports were missing, when a recipient in a remote location had difficulty submitting reports, or when recipients had questions about definitions.

DOE officials notified both recipients and reviewers, such as project officers, of the need to ensure that narrative descriptions met the requirements laid out in OMB’s September 2010 guidance. On September 29, 2010—a few days after OMB’s guidance was released but before recipients started reporting for the quarter—DOE e-mailed both recipients and project officers instructions related to the guidance. The e-mail to recipients informed them of the need to provide sufficiently clear descriptions to facilitate the public’s understanding and stated that overly general or unclear award descriptions could be considered material omissions. Similarly, the e-mail to reviewers restated the guidance. It instructed reviewers to make sure they read the descriptions in the narrative data fields and to provide a comment to the recipient if they believed clarification was required.
DOE also included this information in its webinars on recipient reporting designed for grant recipients and contractors. Further, it included a step in the reviewers’ checklist to determine if the narrative descriptions provided clear and complete information on the award’s purpose, scope, and activities. DOE officials also reported that during the last three quarters’ reviews they have focused on ensuring that reports marked “final” correctly reflect that status. They have reached out to educate recipients on what that designation means and to ensure that reports marked “final” are correctly identified. This includes looking at the amount reported as spent. DOE’s quality assurance process flags reports in which it appears the designation may not be correct based on financial analyses, and DOE encourages recipients to make needed corrections during the continuous corrections process.

The Recovery Act pledges unprecedented transparency and accountability in the use of its funds. In light of this pledge, the ability of the EECBG program to ensure compliance with applicable laws, including the Recovery Act and program requirements, is critical and will help determine the extent to which the program is meeting Recovery Act and program goals. DOE and recipients are taking steps to monitor the use of funds to help ensure that Recovery Act and program requirements are met, but DOE assesses recipients’ monitoring practices only in a limited number of cases. Because of this limited assessment, DOE is not always able to identify whether recipients’ monitoring practices are sufficient to ensure compliance with applicable federal requirements. If DOE is not aware of recipients’ monitoring practices, it cannot ensure that they have effective monitoring practices in place.

In addition to ensuring that Recovery Act and program requirements are met, DOE must also be able to determine the extent to which the EECBG program is meeting Recovery Act and program goals for energy-related outcomes, such as energy savings. Because actual energy-savings data are often unavailable, DOE must rely on estimates. DOE takes some steps to assess the reasonableness of energy-related estimates, but without knowing which methodology or tool recipients used, it is difficult to do such an assessment. For example, without knowing whether recipients who used DOE’s estimating tool—which has been revised in the past and may be revised again in the future—used the most recent version, project officers cannot be sure that recipients’ estimates reflect the best available information for calculating metrics. Without more information regarding recipients’ estimating methods, DOE’s assessment of the reasonableness of these estimates may not be sufficient to support the defensible development of programwide estimates of energy-related impacts and, therefore, the assessment of progress toward program goals.

To better ensure that EECBG funds are used to meet Recovery Act and program goals, we are recommending that the Secretary of Energy take the following two actions:

Explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices.

Solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics, and verify that recipients who use DOE’s estimation tool use the most recent version when calculating these metrics.
We provided a draft of this report to the Department of Energy for review and comment. DOE's comments are reproduced in appendix II. DOE agreed with GAO's recommendations, stating that "implementing the report's recommendations will help ensure that the Program continues to be well managed and executed." DOE also provided additional information on steps it has initiated or planned to implement. In particular, with respect to our first recommendation, DOE elaborated on additional monitoring practices it performs over high-dollar-value grant recipients, such as its reliance on audit results obtained in accordance with the Single Audit Act and its update to the EECBG program requirements in the Compliance Supplement to OMB Circular No. A-133. However, these monitoring practices focus only on larger grant recipients, and we believe that the program could be more effectively monitored if DOE captured information on the monitoring practices of all recipients. We are sending copies of this report to appropriate congressional committees, the Secretary of Energy, the Director of the Office of Management and Budget, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Mark E. Gaffigan at (202) 512-3841 or [email protected], or Yvonne D. Jones at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We took a number of steps to address our objectives, which were to determine (1) how Energy Efficiency and Conservation Block Grant (EECBG) funds are being used and what challenges, if any, EECBG recipients face in obligating and spending their funds; (2) what actions Department of Energy (DOE) officials and EECBG recipients are taking to provide oversight of EECBG funds and what challenges, if any, they face in meeting Recovery Act and other requirements; (3) the extent to which EECBG program recipients and the EECBG program are meeting Recovery Act and EECBG program goals for energy savings and what challenges, if any, recipients have encountered in measuring and reporting energy savings; and (4) how the quality of estimates of jobs created and retained reported by Recovery Act recipients, particularly EECBG recipients, has changed over time. To address these objectives, we reviewed and analyzed relevant federal laws and regulations, as well as federal agency guidance related to program goals, use of funds, monitoring, and reporting outcomes. We interviewed agency officials and about 30 staff members in DOE field offices that have some role in managing and monitoring awards, including project officers, technical monitors, contract specialists and contractors, and compliance officers, as well as those that provide direct support, including attorneys and accountants, to discuss the roles and responsibilities for managing awards; project management guidance and communication; and activities undertaken and challenges faced in providing support to recipients with obligating and spending funds, monitoring, and reporting outcomes. We also met with officials from the DOE Office of Inspector General to better understand their role in the expanded grant-awarding process and in monitoring recipients.
We also interviewed representatives from associations and organizations, including the National Association of Counties (NACo), the National Association of State Energy Officials (NASEO), and the U.S. Conference of Mayors (USCM), to better understand best practices identified and challenges faced by recipients in using funds, monitoring projects, and reporting outcomes. We also reviewed reports prepared by DOE and analyzed data from DOE's iPortal database on the number of grant awards, amount of obligations and expenditures, and number of users. In addition, we reviewed reports prepared by DOE and analyzed data from DOE's PAGE database on the number and type of program activities. We assessed the reliability of the program data we used by reviewing DOE documentation and Inspector General reports on the Performance and Accountability for Grants in Energy (PAGE) system; interviewing knowledgeable DOE officials about the quality and potential limitations of the data and what checks and controls were in place to ensure data accuracy; and performing edit checks on iPortal and PAGE data. We determined the data were sufficiently reliable for our purposes. In addition, we developed a set of questions to be administered to a nonprobability sample of 50 grant recipients eligible to receive formula funding. These questions addressed various aspects of our first three objectives, such as obligating and spending funds, guidance, best practices, internal controls, monitoring, working with DOE officials, and challenges faced in implementing projects. We pretested and revised these questions and sent them in an e-mail to 50 EECBG grant recipients in cities and counties in the United States. Using October 2010 data from DOE's iPortal and PAGE information systems and data gathered from Recovery.gov, we purposefully identified and selected a sample of city, county, and tribal recipients that included a range of grants by project activity type (e.g., building retrofit, incentive program); award size; state; different DOE project officers and monitors; and different stages of completion. We received responses from 25 of the 50 city and county grant recipients. No tribal communities responded. The responses from recipients in this sample are not generalizable to the 2,185 state, city, county, and tribal recipients receiving formula EECBG funds nationwide. To obtain additional information regarding our objective addressing program goals for saving energy and reporting those savings, we selected another nonprobability sample of 41 EECBG grant recipients and sent them a similar e-mail questionnaire. We selected a range of recipients for this sample similar to the sample previously described. In addition, however, these recipients were selected because they had completed many, if not all, projects in their award as of October 2010. We selected these recipients in order to obtain information on best practices; strengths and weaknesses in reporting outcomes; and challenges in measuring jobs, cost savings, and energy savings. The questionnaire sent to this sample of recipients included questions addressing these topics along with a few of the same questions asked of the other recipient sample. We received responses from 24 of the 41 grant recipients in the sample. One tribal grant recipient in our sample responded. The responses from this sample are also not generalizable to the population of EECBG recipients nationwide.
In making our selection of grant recipients for both samples, we did not include grant recipients whose grant applications or awards were the subject of data collection efforts for previous GAO or DOE Inspector General reports. For both samples, we sent follow-up e-mails to recipients who did not respond to our initial e-mail after several weeks, encouraging them to complete the questionnaire. We were not able to conduct further follow-up activities to improve the response rate in the limited time remaining for us to complete our data collection field work. We reviewed responses to questions on guidance and experiences in obligating and spending funds, oversight and monitoring efforts, and reporting outcomes, as well as best practices and challenges faced in managing and monitoring projects. The recipient reporting section of this report responds to the Recovery Act's mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our review of the sixth submission of recipient reports, covering the period from October 1, 2010, through December 31, 2010, we built on findings from our five prior reviews of the reports, covering the period from February 2009 through September 30, 2010. To understand how the quality of jobs data reported by EECBG grant recipients has changed over time, we compared the six quarters of recipient reporting data that were publicly available at Recovery.gov on February 2, 2011. We performed edit checks and other analyses on EECBG grant recipient reports, which included matching DOE-provided data from the iPortal and PAGE information systems on EECBG recipients. As part of that matching process, we also examined the reliability of recipient data contained in these DOE information systems. Our assessment activities included reviewing documentation of system processes and Inspector General reviews of the systems, and conducting logic tests for key variables. Our matches showed a high degree of agreement between DOE recipient information and the information reported by recipients directly to FederalReporting.gov. However, the differences, or lack of agreement, with regard to full-time equivalents (FTEs) were not insignificant. In general, we consider the data used to be sufficiently reliable, with attribution to official sources, for the purposes of providing background information and a general sense of the status of EECBG recipient reporting. To update the status of open recommendations from previous bimonthly and recipient reporting reviews, we obtained information from agency officials on actions taken in response to the recommendations. We conducted this performance audit from September 2010 to April 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In this appendix, we update the status of agencies' efforts to implement the 26 open recommendations and 5 newly implemented recommendations from our previous bimonthly and recipient reporting reviews. New recommendations and agency responses to those recommendations are included in the program section of this report. Recommendations that were listed as implemented or closed in a prior report are not repeated here.
Lastly, we address the status of our Matters for Congressional Consideration. Given the concerns we have raised about whether program requirements were being met, we recommended in May 2010 that the Department of Energy (DOE), in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that

- clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended;
- accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, an effort that is currently expected to take 2 years to complete;
- develops a best practice guide for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements;
- sets time frames for development and implementation of state monitoring programs; and
- revisits the various methodologies used in determining the weatherization work that should be performed based on the consideration of cost-effectiveness and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work. To validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from weatherization work conducted.

In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommended that DOE clarify its production targets, funding deadlines, and associated consequences while providing a balanced emphasis on the importance of meeting program requirements. DOE generally concurred with all of our recommendations and has begun to take some actions to implement them. With regard to clarifying the methodology to calculate the average cost per home weatherized, DOE has taken some action but has not yet provided specific guidance to clarify this methodology. In response to our recommendation to develop and clarify guidance on developing national standards for weatherization training, certification, and accreditation, DOE officials report that the agency is making progress toward advancing such standards. For example, DOE and the Department of Labor released the draft "Workforce Guidelines for Home Energy Upgrades" for single-family homes in November 2010. DOE officials expect to finalize the guidelines by early spring 2011. DOE has taken some steps to address our recommendation that it develop and clarify guidance to generate a best practice guide for key internal controls. According to officials, the Weatherization Assistance Program Technical Assistance Center Web site provides a variety of best practices on program management, administrative procedures, and technical standards. However, while the Web site is a central repository for all relevant resource documents, DOE has not created a dedicated guide on best practices for key internal controls. In response to our recommendation to develop and clarify guidance to set time frames for development and implementation of state monitoring programs, DOE has taken limited action.
DOE officials provided current guidance available on state monitoring efforts but did not identify any time frames for development or implementation of state monitoring programs. With regard to our recommendation on developing and clarifying guidance for prioritizing cost-effective weatherization work, DOE has taken some actions. For example, DOE contracted with the Oak Ridge National Laboratory in 2010 to conduct an assessment of aspects of program performance, such as costs and benefits, for program years 2008 to 2010. The assessment will cover both Recovery Act funds and annual appropriation funds. Preliminary results may be available in late spring 2011. In response to our recommendation that DOE clarify its production targets, funding deadlines, and associated consequences, DOE has taken steps to address this recommendation. According to officials, DOE has communicated directly with recipients about funding, production, and other priorities. For example, the Green Light program is in its fourth round of communications between two DOE offices and recipients. DOE officials cited these calls as assisting in the identification of barriers preventing grantees from increasing production and expenditures. In addition, DOE officials stated that grantees were notified of the requirement to spend all Recovery Act funds by March 31, 2012. However, DOE provided no evidence that it has clarified the consequences. We recommended that DOE, in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that

- establishes best practices for how income eligibility should be determined and documented and issues specific guidance that does not allow the self-certification of income by applicants to be the sole method of documenting income eligibility; and
- considers and addresses how the weatherization program guidance is affected by the introduction of increased numbers of multifamily units.

DOE agreed with both of our recommendations and has taken action to implement them. In response to our recommendation on issuing guidance and establishing best practices to determine income eligibility, DOE issued guidance—Weatherization Program Notice 10-18, 2010 Poverty Income Guidelines and Definition of Income—on September 20, 2010. In this guidance, DOE clarified the definition of income and strengthened income eligibility requirements. For example, the guidance clarified that self-certification of income would be allowed only after all other avenues of documenting income eligibility are exhausted. Additionally, for individuals to self-certify income, a notarized statement indicating the lack of other proof of income is required. Regarding our recommendation on weatherization program guidance for multifamily units, DOE officials identified several issues that affect the increased number of multifamily buildings to be weatherized and issued several guidance documents addressing multifamily buildings. For example, DOE issued Weatherization Program Notice 11-1, Program Year 2011 Weatherization Grant Guidance, in December 2010, which contained two sections related to multifamily units. One section covered the eligibility of multifamily units for the weatherization program, and the other provided guidance on conducting energy audits of multifamily units. In reviewing the recently issued program guidance for both recommendations, we have concluded that DOE addressed the intent of these recommendations.
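To make the income-documentation rule concrete, here is a minimal sketch of the decision logic as we read Weatherization Program Notice 10-18. The function and its inputs are our own illustration, not DOE's.

```python
# Sketch of the income-documentation rule described above: self-certification
# is a last resort, allowed only after all other avenues of documenting income
# eligibility are exhausted and only with a notarized statement indicating the
# lack of other proof of income. Names are illustrative, not from DOE guidance.
def income_documentation_acceptable(has_standard_proof: bool,
                                    other_avenues_exhausted: bool,
                                    has_notarized_statement: bool) -> bool:
    if has_standard_proof:  # e.g., pay stubs, tax returns, benefit letters
        return True
    return other_avenues_exhausted and has_notarized_statement

print(income_documentation_acceptable(False, True, False))  # False: no notarized statement
print(income_documentation_acceptable(False, True, True))   # True: valid self-certification
```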
We recommended that the Environmental Protection Agency (EPA) Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water State Revolving Fund (SRF) program. In response to our recommendation, EPA provided additional guidance to the states regarding their oversight responsibilities, with an emphasis on enhancing site-specific monitoring and inspections. Specifically, in June 2010, the agency developed and issued an oversight plan outline for Recovery Act projects that provides guidance on the frequency, content, and documentation related to regional reviews of state Recovery Act programs and regional and state reviews of specific Recovery Act projects. For example, EPA's guidance states that regions and states should be reviewing the items included on the EPA "State ARRA Inspection Checklist" or use a state equivalent that covers the same topics. The plan also describes EPA headquarters' role in ongoing Recovery Act oversight and plans for additional webcasts. EPA also reiterated that contractors are available to provide training and to assist with file reviews and site inspections. We are undertaking further review of the states' use of Recovery Act funds for the Clean Water and Drinking Water programs. As part of that work, we will consider EPA's and the states' oversight of Recovery Act funds and, more specifically, progress in implementing EPA's guidance. To facilitate understanding of whether regional decisions regarding waivers of the program's matching requirement are consistent with Recovery Act grantees' needs across regions, we recommended that the Director of the Office of Head Start (OHS) regularly review waivers of the nonfederal matching requirement and associated justifications. OHS has not conducted a review of waivers of the nonfederal matching requirement, but OHS officials stated that the variation is largely due to differences in the timing of regions' waiver decisions: some regional offices grant waivers at the same time that the grant is made official, whereas other regions grant waivers later. OHS officials stated that although the OHS central office has not regularly reviewed grantees' justifications for waiver applications for regional variability in the past, they are looking into tracking these data in their Web-based system consistently across regions. The process of tracking waivers is not yet complete. To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, we recommended that the Director of OHS collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. The Department of Health and Human Services (HHS) disagreed with our recommendation. OHS officials stated that attendance data are adequately examined in triennial or yearly on-site reviews and in periodic risk management meetings. Because these reviews and meetings do not collect or report data on service provision, we continue to believe that tracking services to children and families is an important measure of the work undertaken by Head Start and Early Head Start service providers.
To help ensure that grantees report consistent enrollment figures, we recommended that the Director of OHS better communicate a consistent definition of "enrollment" to grantees for monthly and yearly reporting and begin verifying grantees' definition of "enrollment" during triennial reviews. OHS issued informal guidance on its Web site clarifying monthly reporting requirements but has not clarified yearly reporting requirements. To provide grantees consistent information on how and when they will be expected to obligate and expend federal funds, we recommended that the Director of OHS clearly communicate its policy to grantees for carrying over or extending the use of Recovery Act funds from one fiscal year into the next. HHS indicated that OHS will issue guidance to grantees on obligation and expenditure requirements, as well as improve efforts to effectively communicate the mechanisms in place for grantees to meet the requirements for obligation and expenditure of funds. To better consider known risks in scoping and staffing required reviews of Recovery Act grantees, we recommended that the Director of OHS direct OHS regional offices to consistently perform and document Risk Management Meetings and incorporate known risks, including financial management risks, into the process for staffing and conducting reviews. HHS reported that OHS is reviewing the risk management process to ensure it is consistently performed and documented in its centralized data system and that it has taken related steps, such as requiring the grant officer to identify known or suspected risks prior to an on-site review. Because the absence of third-party investors reduces the amount of overall scrutiny Tax Credit Assistance Program (TCAP) projects would receive and the Department of Housing and Urban Development (HUD) is currently not aware of how many projects lack third-party investors, we recommended that HUD develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. HUD responded to our recommendation by saying it will identify projects that are not funded by HOME Investment Partnerships Program funds and projects that have a nominal tax credit award. However, HUD said it will not be able to identify these projects until it can access the data needed to perform the analysis, and it does not receive access to those data until after projects have been completed. HUD has not yet taken any action on this recommendation because it has data on only the small percentage of projects completed to date, and it is too early in the process to identify projects that lack third-party investors. The agency will take action once it is able to collect the necessary information from the project owners and the state housing finance agencies. To enhance the Department of Labor's (Labor) ability to manage its Recovery Act and regular Workforce Investment Act (WIA) formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommended that the Secretary of Labor take the following actions: To determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state's reported data on obligations meet Labor's requirements.
To enhance state accountability and to facilitate states' progress in making reporting improvements, routinely review states' reporting on obligations during regular state comprehensive reviews. Labor agreed with both of our recommendations and has begun to take some actions to implement them. To determine the extent of reporting inconsistencies, Labor awarded a contract in September 2010 to perform an assessment of state financial reports to determine if the data reported are accurate and reflect Labor's guidance on reporting of obligations and expenditures. Labor plans to begin interviewing states in February 2011 and will issue a report after the interviews are completed and analyzed. To enhance states' accountability and facilitate their progress in making improvements in reporting, Labor has drafted guidance on the definitions of key financial terms, such as obligations, which is currently in final clearance. After the guidance is issued, Labor plans to conduct a systemwide webinar on this topic. Our September 2009 bimonthly report identified a need for additional federal guidance in defining green jobs, and we made the following recommendation to the Secretary of Labor: To better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with our recommendation and has begun to take several actions to implement it. Labor's Bureau of Labor Statistics has developed a definition of green jobs, which was finalized and published in the Federal Register on September 21, 2010. In addition, Labor continues to host a Green Jobs Community of Practice, an online virtual community available to all interested parties. As part of this effort, in December 2010, Labor hosted its first Recovery Act Grantee Technical Assistance Institute, which focused on critical success factors for achieving the goals of the grants and sustaining the impact into the future. The department also plans to host a symposium in late spring 2011 with the green jobs state Labor Market Information Improvement grantees. The symposium will share recent research and other promising practices to inform workforce development and training strategies. In addition, the department anticipates releasing its Internet-based Occupational Information Network (O*NET) Career Profiler tool in the winter of 2011 for those new to the workforce. This tool includes the O*NET green leaf symbol to highlight green occupations. Furthermore, the department's implementation study of the Recovery Act-funded green jobs training grants is still ongoing. The interim report is expected in late 2011. Our September 2009 bimonthly report identified a need for additional federal guidance in measuring the work readiness of youth, and we made the following recommendation to the Secretary of Labor: To enhance the usefulness of data on work readiness outcomes, provide additional guidance on how to measure the work readiness of youth, with a goal of improving the comparability and rigor of the measure. Labor agreed with our recommendation and has taken steps to implement it. Labor issued guidance in May and August 2010 to identify requirements for measuring the work readiness of youth and the methodology for implementing the work readiness indicators for the WIA Youth Program.
The guidance clarified the changes to the definition of work readiness by requiring a worksite evaluation conducted by the employer. To leverage Single Audits as an effective oversight tool for Recovery Act programs, we recommended that the Director of the Office of Management and Budget (OMB)

1. provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance;
2. take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond;
3. evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act;
4. issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work;
5. issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year;
6. explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner; and
7. shorten the time frames required for issuing management decisions by federal agencies to grant recipients.

(1) To provide more direct focus on Recovery Act programs to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance through the Single Audit, OMB updated its single audit guidance in the OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations Compliance Supplement in July 2010. This compliance supplement requires auditors to treat all federal programs with expenditures of Recovery Act awards as higher-risk programs when performing standard risk-based tests to select programs to be audited. The compliance supplement also clarified information to assist auditors in determining the appropriate risk levels for programs with Recovery Act expenditures. OMB officials have stated that they are in the process of completing the 2011 Compliance Supplement, which they expected to issue by March 31, 2011. As of April 4, 2011, the 2011 Compliance Supplement had not yet been issued. They also stated that this compliance supplement will continue to provide guidance that addresses some of the higher risks inherent in Recovery Act programs. The most significant of these risks are associated with newer programs that may not yet have the internal controls and accounting systems in place to help ensure that funds are distributed and used in accordance with program regulations and objectives. Since Recovery Act spending is projected to continue through 2016, we believe that it is essential that OMB provide direction in Single Audit guidance so that some smaller programs with higher risk are not automatically excluded from receiving audit coverage based upon the requirements in the Single Audit Act. In recent discussions with OMB officials, we communicated our concern that future Single Audit guidance provide instruction that helps to ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. OMB officials agreed and stated that they plan to continue including similar language in the Compliance Supplement and performing outreach training throughout the duration of the Recovery Act.
(2) To address the recommendation for taking additional efforts to encourage more timely reporting on internal controls for Recovery Act programs for 2010 and beyond, OMB commenced a second voluntary Single Audit Internal Control Project (project) in August 2010 for states that received Recovery Act funds in fiscal year 2010. Similar to the prior project (which did not get started until October 2009), one of the project's goals is to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action can be taken more quickly. Specifically, the project encourages participating auditors of states that received Recovery Act funds to identify and communicate deficiencies in internal control to management 3 months sooner than the 9-month time frame currently required under OMB Circular No. A-133. The project also requires that management provide, 2 months earlier than required under statute, plans for correcting internal control deficiencies to the cognizant agency for audit for immediate distribution to the appropriate federal agencies. The federal agency is then to provide its concerns relating to management's plan of corrective actions in a written decision as promptly as possible and no later than 90 days after the corrective action plan is received by the cognizant agency for audit. According to OMB officials, 14 states volunteered to participate in the second project. Each participating state was to select a minimum of four Recovery Act programs for inclusion in the project. We assessed the results of the first OMB Single Audit Internal Control Project for fiscal year 2009 and found that it was helpful in communicating internal control deficiencies earlier than required under statute. We reported that 16 states participated in the first project and that the states selected at least two Recovery Act programs for the project. We also reported that the project's dependence on voluntary participation limited its scope and coverage and that voluntary participation may also bias the project's results by excluding from analysis states or auditors with practices that cannot accommodate the project's requirement for early reporting of control deficiencies. Overall, we concluded that although the project's coverage could have been more comprehensive, the analysis of the project's results provided meaningful information to OMB for better oversight of the Recovery Act programs selected and information for making future improvements to the Single Audit guidance. OMB's second Single Audit Internal Control Project is in progress, and its planned completion date is June 2011. OMB plans to assess the project's results after its completion date. As of February 9, 2011, OMB officials stated that the 14 participating states had met the milestones for submitting interim internal control reports by December 31, 2010, and their corrective action plans by January 31, 2011. We believe that OMB needs to continue taking steps to encourage more timely reporting on internal controls through Single Audits for Recovery Act programs. (3) OMB officials have stated that they are aware of the increase in workload for state auditors who perform Single Audits due to the additional funding for Recovery Act programs and corresponding increases in programs being subject to audit requirements. OMB officials stated that they solicited suggestions from state auditors to gain further insights to develop measures for providing audit relief.
However, OMB has not yet identified viable alternatives that would provide relief to all state auditors that conduct Single Audits. For state auditors participating in the second OMB Single Audit Internal Control Project, OMB has provided some audit relief by modifying the requirements under Circular No. A-133 to reduce the number of low-risk programs that are to be included in some project participants' risk assessment requirements. As expenditures of Recovery Act funds are expected to continue through 2016, it is important that OMB look for opportunities and implement various options for providing audit relief in future years. (4) (5) With regard to issuing Single Audit guidance in a timely manner, and specifically the OMB Circular No. A-133 Compliance Supplement, we previously reported in December 2010 that OMB officials stated that they intended to issue the 2011 Compliance Supplement by March 31, 2011. In January 2011, OMB officials reported that the production of the 2011 Compliance Supplement was on schedule for issuance by March 31, 2011. As of April 4, 2011, the 2011 Compliance Supplement had not yet been issued, and we will continue to monitor OMB's progress toward achieving this objective. (6) (7) In October 2010, OMB officials stated that, based on their assessment of the results of the project, they have discussed alternatives for helping to ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner, including possibly shortening the time frames required for federal agencies to provide their management decisions to grant recipients. However, OMB officials have yet to decide on the course of action that they will pursue to implement our related recommendations. OMB officials acknowledged that the results of the 2009 OMB Single Audit Internal Control Project confirmed that this issue continues to be a challenge. They stated that they have met individually with several federal awarding agencies that were late in providing their management decisions in the 2009 project to discuss the measures that the agencies will take to improve the timeliness of their management decisions. To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommended that the Secretary of Transportation direct the Federal Highway Administration (FHWA) to take the following two actions: 1. Develop additional rules and data checks in the Recovery Act Data System, so that these data will accurately identify contract milestones such as award dates and amounts, and provide guidance to states to revise existing contract data. 2. Make publicly available—within 60 days after the September 30, 2010, obligation deadline—an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. As of the time of this report, the Department of Transportation (DOT) was in the process of developing its plans in response to these recommendations. To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and a determination made about whether these investments produced long-term benefits.
Specifically, in the near term, we recommended that the Secretary direct FHWA and the Federal Transit Administration (FTA) to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time; nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and, as necessary, would seek additional data collection authority from Congress if it became apparent that such authority were needed. DOT plans to take some steps to assess its data needs, but it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open. We recommended that the Secretary of Transportation gather timely information on the progress states are making in meeting the maintenance-of-effort requirement and report preliminary information to Congress within 60 days of the end of the certified period (September 30, 2010) on (1) whether states met required program expenditures as outlined in their maintenance-of-effort certifications, (2) the reasons that states did not meet these certified levels, if applicable, and (3) lessons learned from the process. On January 27, 2011, the Secretary of Transportation sent a report to Congress that addressed each reporting element we recommended. DOT reported that 29 states and the District of Columbia met their planned level of expenditure and 21 states did not. It also summarized reasons states did not meet the certified levels, such as a reduction in dedicated revenues for transportation or a state legislature approving a lower-than-expected level of transportation funding in the state budget. Finally, DOT's report provided its perspectives on lessons learned from the process, including identifying barriers to effectively implementing the maintenance-of-effort requirement. For example, it noted that the lack of clarity around statutory definitions regarding what constituted "state funding" and the substantial decreases in states' dedicated transportation revenues were barriers to states producing an accurate certification and meeting the certified level. We recommended that the Department of the Treasury (Treasury) expeditiously provide Housing Finance Agencies (HFA) with guidance on monitoring project spending and develop plans for dealing with the possibility that projects could miss the spending deadline and face further project interruptions. Treasury officials told us that after they provided additional guidance, every state HFA and the respective property owners complied with the 30 percent spending rule by the end of calendar year 2010. We concluded that Treasury and the state HFAs have addressed the intent of this recommendation.
To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. We continue to believe that Congress should consider changes related to the Single Audit process. To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. We continue to believe that Congress should consider changes related to the Single Audit process. To provide HFAs with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. We continue to believe that Congress should consider directing Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. In addition to the contacts above, Joshua Akery, Thomas Beall, Andrew Ching, Holly Dye, Kim Gianopoulos, Sharon Hogan, Thomas James, Jonathan Kucskar, Kristen Massey, Karine McClosky, Alison O'Neill, Carol Patey, Brenda Rabinowitz, Beverly Ross, Kelly Rubin, Ben Shouse, Jonathan Stehle, Kiki Theodoropoulos, Nick Weeks, and Ethan Wozniak made key contributions to this report. The following is a list of 10 related products published since the last mandated GAO report on the Recovery Act (GAO-11-166, December 15, 2010). For a full list of products related to the Recovery Act, see http://gao.gov/recovery/related-products/.

Medicaid: Improving Responsiveness of Federal Assistance to States during Economic Downturns. GAO-11-395. Washington, D.C.: March 31, 2011.
State and Local Governments: Knowledge of Past Recessions Can Inform Future Federal Fiscal Assistance. GAO-11-401. Washington, D.C.: March 31, 2011.
Recovery Act: Status of Department of Energy's Obligations and Spending. GAO-11-483T. Washington, D.C.: March 17, 2011.
Recovery Act: Broadband Programs Awards and Risks to Oversight. GAO-11-371T. Washington, D.C.: February 10, 2011.
Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011.
Rail Transit: FTA Programs Are Helping Address Transit Agencies' Safety Challenges, but Improved Performance Goals and Measures Could Better Focus Efforts. GAO-11-199. Washington, D.C.: January 31, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Summary of GAO's Performance and Financial Information Fiscal Year 2010. GAO-11-3SP. Washington, D.C.: January 24, 2011.
Child Support Enforcement: Departures from Long-term Trends in Sources of Collections and Caseloads Reflect Recent Economic Conditions. GAO-11-196. Washington, D.C.: January 14, 2011.
Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $3.2 billion for the Department of Energy's (DOE) Energy Efficiency and Conservation Block Grant Program (EECBG) to develop and manage projects to improve energy efficiency and reduce energy use and fossil fuel emissions. The Recovery Act requires GAO to review funds made available under the act and to comment on recipients' estimates of jobs created or retained. GAO examined (1) how EECBG recipients used EECBG funds and challenges they faced, if any; (2) DOE and recipients' oversight and monitoring activities and challenges, if any; (3) the extent to which the EECBG program is meeting Recovery Act and program goals for energy savings; and (4) the quality of jobs data reported by Recovery Act recipients, particularly EECBG recipients. GAO also updates the status of open recommendations from previous bimonthly and recipient reporting reviews. GAO analyzed DOE recipient data and interviewed DOE officials and a nonprobability sample of EECBG recipients, among other things. According to DOE data, EECBG recipients primarily used funds for 3 of the 14 activities eligible for EECBG funding. These activities are energy-efficiency retrofits, financial incentive programs, and buildings and facilities projects. Some DOE officials, recipients, and others identified challenges in obligating and spending funds due to local jurisdictional requirements and staff and resource limitations. In addition, in April 2010 DOE determined that many recipients were not on a trajectory to obligate and spend funds within specified time frames, so DOE issued new milestones for obligating and spending funds. Many recipients reported having had difficulty meeting the new milestones. DOE is taking steps to address these difficulties. According to DOE officials and documentation, DOE follows a programwide monitoring plan to oversee the use of Recovery Act funds and uses a variety of techniques to monitor recipients. Overall, recipients also use various methods to monitor contractors and subrecipients, but DOE does not always collect information on recipients' monitoring activities. As a result, DOE does not always know whether the monitoring activities of recipients are sufficiently rigorous to ensure compliance with federal requirements. Some DOE officials, recipients, and others have reported to GAO that some DOE staff and recipients faced challenges with overseeing the use of funds, including (1) technical challenges with a Web-based reporting application DOE uses as a primary oversight tool and (2) staffing and expertise limitations, such as some recipients' unfamiliarity with federal grant procedures. Recipients contacted and some DOE officials reported to GAO that recipients are using EECBG funds to develop projects designed to reduce energy use and increase energy savings in line with Recovery Act and program goals. However, DOE officials have experienced challenges in assessing the extent to which the EECBG program is meeting those goals. Because actual energy savings data are generally available only after a project is completed, DOE officials said that most recipients report estimates to comply with program reporting requirements. DOE takes steps to assess the reasonableness of these estimates but does not require recipients to report the methods or tools used to develop estimates. 
In addition, while DOE provides recipients with a tool to estimate energy savings, DOE does not require that recipients use the most recent, updated version of its estimating tool. GAO's analysis of the Recovery.gov data that recipients reported, including jobs funded, shows that data quality this quarter reflects only minor inconsistencies or illogical data. The portion of EECBG recipients reporting some jobs funded has continued to increase. DOE headquarters and field officials continue to address data quality concerns, including ensuring that recipients and reviewers had the updated Office of Management and Budget guidance on narrative descriptions. However, data across reporting periods may not be comparable because, in earlier periods, some confusion existed about methods for calculating jobs funded. GAO recommends that DOE (1) explore a means to capture information on recipients' monitoring activities and (2) solicit information on recipients' methods for estimating energy-related impact metrics and verify that recipients use the most recent version of DOE's estimating tool. DOE generally agreed with GAO's recommendations.
In total, 103 schools are currently designated HBCUs. They range in size and scope from 2-year colleges with relatively few programs to 4-year universities offering graduate degrees in several fields and enrolling more than 10,000 students. Although most students who attend are black, about one student in every six is not. Collectively, HBCUs enroll about 16 percent of all black students attending 2-year and 4-year colleges and universities in the United States. Title IV authorized the Department of Education to bar postsecondary schools with high fiscal year "cohort default rates" from continuing to participate in federal student loan programs. Each year, the Department assesses a school's eligibility on the basis of its three most recent available cohort default rates. In fiscal year 1998, eligibility is based on default rates for fiscal years 1993, 1994, and 1995. A school remains eligible if its cohort default rate is below the statutory threshold, currently 25 percent, in at least 1 of the latest 3 consecutive fiscal years. A school becomes ineligible if its default rate equals or exceeds the threshold in all 3 fiscal years. The Higher Education Act exempts HBCUs from this threshold requirement through June 1998. In addition to the cohort default rate threshold specified in the Higher Education Act, the Department has established—through regulation—a provision that allows it to start procedures to limit, suspend, or terminate a school's participation in all title IV federal student aid programs if the school's cohort default rate for a single year exceeds 40 percent. The exemption from the statutory threshold for HBCUs does not extend to this provision. Students get federal loans from two major programs: the Federal Family Education Loan Program (FFELP) and the William D. Ford Federal Direct Loan Program (FDLP). Loans made under FFELP are provided by private lenders and are ultimately guaranteed against default by the federal government. Loans made under FDLP are provided through schools, and the Department services and collects the loans through contractors. FDLP was originally authorized by the Higher Education Amendments of 1992. Since the first loans under FDLP were made in the fourth quarter of fiscal year 1994, the fiscal year 1995 cohort is the first cohort affected by FDLP defaults. Students at HBCUs make extensive use of these loan programs. Although HBCU students accounted for 1.9 percent of fall 1995 enrollments at all 2-year and 4-year public and private schools, they were awarded 3.5 percent of the total dollar volume of student loans under FFELP and FDLP in fiscal year 1996 (see table 1). On average, HBCUs have a higher student loan default rate than non-HBCUs, but the difference has narrowed somewhat. Between fiscal years 1993 and 1995, the aggregate student loan cohort default rate for HBCUs declined from 20.6 percent to 18.5 percent, while the rate for non-HBCUs increased from 7.4 percent to 7.8 percent (see fig. 1). Before 1987, little research had been published that identified the factors that could predict high student loan default rates. In the past 10 years, however, several empirical research studies have analyzed student borrowers from proprietary (private, for-profit) schools as well as from public and private 2-year and 4-year colleges and universities.
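Before turning to that research, the eligibility rules just described can be summarized in a short sketch. The function below is our own illustration, using the 25 percent statutory threshold and the 40 percent single-year regulatory trigger; applying the trigger to the same three most recent years is a simplifying assumption.

```python
# Sketch of the cohort default rate rules described above: a school becomes
# ineligible only if its rate equals or exceeds 25 percent in all three of the
# latest fiscal years, and a single-year rate above 40 percent can trigger
# limitation, suspension, or termination (LS&T) proceedings.
STATUTORY_THRESHOLD = 25.0  # percent
LST_TRIGGER = 40.0          # percent

def assess(rates):
    """rates: cohort default rates for the three most recent fiscal years."""
    ineligible = all(r >= STATUTORY_THRESHOLD for r in rates)
    lst_possible = any(r > LST_TRIGGER for r in rates)
    return ineligible, lst_possible

print(assess([26.0, 24.9, 30.0]))  # (False, False): one year below the threshold
print(assess([22.0, 41.5, 30.0]))  # (False, True): still eligible, but LS&T possible
print(assess([27.0, 26.0, 30.0]))  # (True, False): ineligible
```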
A key theme derived from these studies is that student loan repayment and default behavior are primarily influenced by individual borrower characteristics rather than by the characteristics of the educational institutions the borrowers attend. For example, one study concluded that "student characteristics are of overwhelming importance in correctly predicting defaulters, in contrast to the institutions they attend, or the administrative practices those institutions use to try to curb student defaults." The studies indicate that the characteristics playing an important role in determining the level of student default rates are related to students' academic preparation and socioeconomic background. In general, the higher the students' academic preparation and the more advantaged their socioeconomic background, the lower the likelihood of defaulting on their student loans. Factors indicating good academic preparation, such as staying in college, earning good grades, and advancing to graduation, significantly decrease the probability of default. Two of the most important factors accounting for different college graduation rates among institutions are actually precollege factors: high school grades and college admissions test scores. The higher the grades and test scores, the greater the incidence of college graduation. Similarly, studies have shown that students who come from a more advantaged socioeconomic background—indicated by higher parental education and income levels and two-parent families—have better high school grades and lower student loan default rates. Students attending 4-year HBCUs are less academically prepared and come from more disadvantaged socioeconomic backgrounds than their counterparts attending all 4-year colleges and universities, according to our analysis of an annual survey of the fall 1996 freshman class. For example, relative to freshmen at all 4-year colleges and universities, HBCU students had a lower high school grade average and needed (or already had) more tutoring or remedial work. In addition, the rate at which students starting at the same school graduated within a 6-year period was lower for HBCUs than for non-HBCUs. Relative to parents of freshmen at all 4-year colleges and universities, parents of 4-year HBCU freshmen were more likely to be divorced or separated, less likely to have a college education, and more likely to earn less than $20,000 a year. These findings help explain why HBCU student loan default rates have generally been higher than the rates for non-HBCUs. Freshmen at HBCUs and 2-year schools reported similar high school grades: they were much less likely to earn A's and much more likely to earn B's and C's than freshmen at all 4-year colleges and universities (see fig. 2). For example, the portion of university freshmen with an A average exceeded the portion of HBCU and 2-year freshmen threefold. In contrast, over a quarter of HBCU and 2-year freshmen had a C average, compared with 5 percent of university freshmen. In six subject areas (English, reading, mathematics, social studies, science, and foreign languages), an average of 11 percent of HBCU freshmen had special tutoring or remedial work, compared with 7 percent of 2-year, 6 percent of 4-year, and 4 percent of university freshmen (see table 2). In addition, HBCU freshmen were nearly twice as likely to require additional tutoring or remedial work as freshmen at all colleges and universities.
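Borrower-level relationships of this kind are typically estimated with regression models along the following lines. This is a minimal, hypothetical sketch; the data file, variable names, and specification are ours, not those of the cited studies.

```python
# Illustrative borrower-level default model in the spirit of the research
# described above. The data file and variables are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.read_csv("borrowers.csv")  # hs_gpa, parent_income, completed_degree, defaulted
X = data[["hs_gpa", "parent_income", "completed_degree"]]
y = data["defaulted"]                # 1 = defaulted, 0 = in repayment or repaid

model = LogisticRegression(max_iter=1000).fit(X, y)

# Negative coefficients would indicate that better academic preparation and a
# more advantaged background reduce the estimated probability of default.
print(dict(zip(X.columns, model.coef_[0].round(3))))
```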
A comparison of the average 6-year graduation rate—another measure of academic preparedness—showed a 35-percent rate for 4-year HBCUs compared with 54 percent for 4-year non-HBCUs, a difference of 19 percentage points. The typical marital status of parents of HBCU freshmen differed from that of parents of freshmen at all colleges and universities. Nearly half of all HBCU freshmen (49 percent) reported the marital status of their parents as divorced or separated, while the majority of freshmen at all schools reported that their parents were living together (see fig. 3). Fewer parents of HBCU and 2-year college freshmen had attained a college degree than had parents of 4-year college and university freshmen (see fig. 4). About 20 percent of HBCU and 2-year college freshmen reported that their parents had a college degree, compared with 26 percent or more of 4-year college and university freshmen. Compared with parents of all freshmen, parents of HBCU freshmen were more likely to have lower incomes. The portion of freshmen with parents who had incomes of less than $20,000 ranged from 29 percent for HBCU freshmen to 9 percent for university freshmen (see fig. 5). In contrast, only 22 percent of parents of HBCU freshmen had incomes of $60,000 or more, compared with 52 percent for parents of university freshmen. HBCU freshmen were much more likely to receive Pell grant aid than their counterparts at all schools, another indicator of parental income (see fig. 6). Generally, the lower the parents' income, the greater the amount of financial aid received. Federal Pell grants, available under title IV only to undergraduate students, are designed to help students who have the greatest financial need. Grants need not be repaid, and the maximum Pell grant award for school year 1995-96 was $2,340. Although HBCU students as a group differ substantially from students at all colleges and universities in their academic preparation and socioeconomic backgrounds, our analysis indicates that the same kinds of differences exist among students at individual HBCUs. That is, when HBCUs are divided into groups according to their default rates, the characteristics of students at schools with the highest default rates reflect less academic preparation and more disadvantaged socioeconomic backgrounds. These links between student characteristics and the magnitude of default rates at HBCUs are consistent with the research that has shown certain student characteristics to be linked to student loan defaults. We based our analysis on undergraduate students at 83 4-year HBCUs that conferred bachelor's degrees during the 1993-94 school year and for which the Department had reported student loan default rate statistics for the 1993-95 cohorts. Collectively, the default rate for these HBCUs averaged 20 percent over the 3-year period. We grouped the 83 HBCUs into low, medium, and high default categories to facilitate comparisons of student characteristics among the three groups (see the sketch following this discussion). For the low group, the average default rate was 11.5 percent compared with 31 percent for the high group. Comparing the academic characteristics of students at HBCUs with low, medium, and high default rates for fiscal years 1993-95 showed that students at high-default-rate HBCUs consistently exhibited a lower level of academic preparation than students at HBCUs with low default rates.
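A sketch of that grouping approach follows. Splitting the schools into default-rate tertiles is our simplification of the low/medium/high categories, and the data file and columns are hypothetical.

```python
# Illustrative grouping of schools into low/medium/high default categories and
# comparison of average characteristics across the groups. Tertile splits and
# all data and column names are assumptions for illustration.
import pandas as pd

schools = pd.read_csv("hbcu_data.csv")  # school, default_rate, grad_rate, parent_income

schools["default_group"] = pd.qcut(schools["default_rate"], q=3,
                                   labels=["low", "medium", "high"])

summary = (schools
           .groupby("default_group", observed=True)
           [["default_rate", "grad_rate", "parent_income"]]
           .mean()
           .round(1))
print(summary)
```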
For example, the 6-year graduation rate for low-default-rate HBCUs, at 41 percent, was about 1.4 times the 29-percent rate for high-default-rate HBCUs. A similar trend was found for retention of freshmen at HBCUs. Conversely, freshman acceptance rates, an indicator of a school's selectiveness in admitting academically qualified applicants, were lower at HBCUs with low default rates (see fig. 7).

For this portion of our analysis, Department of Education data covered only students who had received federal student aid during the 1995-96 school year at the 83 4-year HBCUs. Our comparisons showed that students at high-default-rate HBCUs consistently had more disadvantaged socioeconomic characteristics than students at HBCUs with low default rates. For example, parents' average adjusted gross income for high-default-rate HBCUs was $22,489, about 26 percent lower than the $30,321 average for low-default-rate HBCUs (see fig. 8). Twenty-five percent of the students' parents at high-default-rate HBCUs had adjusted gross income of $30,000 or more, compared with 36 percent for parents of students at low-default-rate HBCUs. Similarly, the extent to which parents' highest education level was college or beyond, and the extent to which parents were married and not separated, were lower for high-default-rate HBCUs than for low-default-rate HBCUs (see fig. 9).

The Carnegie Foundation for the Advancement of Teaching developed a system to classify American colleges and universities based primarily on their academic missions, such as their highest degree offerings, the numbers of degrees conferred, and, in some cases, the selectivity of a school's admissions. Although it is an institutional measure, we consider the Carnegie classification to reflect the academic preparation of a school's students, based on the level of the degrees it offers and confers and on the selectivity of the students it admits. To facilitate comparisons among the 4-year HBCUs included in our review, we consolidated the HBCUs' various Carnegie classifications into two categories: "high" and "low" (see app. III). Our analysis showed that low-default-rate HBCUs generally had higher Carnegie classifications than high-default-rate HBCUs. HBCUs that have high Carnegie classifications confer more doctoral, master's, or liberal arts degrees or are more restrictive (requiring higher high school grades or college admission test scores) in their student admission criteria. There appears to be a strong link between a high Carnegie classification and low student loan default rates: schools with high Carnegie classifications made up 71 percent of low-default-rate HBCUs, compared with only 12 percent of HBCUs with high student loan default rates (see fig. 10).

According to Department officials, default reduction measures apply to all schools participating in federal student loan programs and are not specifically aimed at lowering high default rates at HBCUs. These measures were part of the default reduction initiative that the Department introduced in June 1989 in response to the then-rising default rates in federal student loan programs. These measures, found in statutes, regulations, and guidance, required schools to provide students with loan counseling, take steps to promote repayment among delinquent borrowers, and, for schools whose default rates exceeded certain thresholds, implement a default management plan.
Under default-reduction regulations and agency guidance, all schools must perform entrance counseling before releasing loan proceeds to a borrower and exit counseling shortly before a borrower ceases at least half-time study. Entrance counseling is to include exploring all sources of aid, stressing constraints on aid, reviewing requirements for satisfactory academic progress, reminding students to keep lenders informed, reviewing available repayment options, and reviewing the consequences of delinquency and default. Many of the points stressed in entrance counseling are to be reiterated during exit counseling, in addition to obtaining such data as the student's expected permanent address, employer's name and address, and the address of next of kin.

The Department has no prescribed format for financial aid counseling other than its requirement that a person knowledgeable about financial aid programs be available to answer borrowers' questions after the counseling sessions. Schools are encouraged to use such aids as charts, handouts, videotapes, and computer-assisted technology to increase the effectiveness of their counseling sessions. These materials, as well as training for financial aid administrators, are generally available from the Department and some guaranty agencies, lenders, and other postsecondary education organizations.

Another default-reduction measure requires guaranty agencies to notify, upon request and within 30 days, a FFELP borrower's school after the borrower has missed a loan payment due date (that is, the loan has gone into delinquency). This provision for alerting a school of a student's loan delinquency is intended to give the school an expeditious opportunity to work with the borrower to avert a default. In FDLP, the Department similarly notifies schools of delinquent borrowers. Schools are encouraged to urge borrowers to resume payments to cure the delinquency or to provide advice on applying for a deferment or forbearance.

To assist schools in tracking their student loan borrowers, the Department developed a software product, the Institutional Default Prevention System, in 1990. Three HBCUs were among the schools that tested the software before its formal release to general users. Using this system, schools can better maintain information about borrowers who have left the school and can print loan reminder letters to send to them. The Department has provided copies of this software to schools participating in both the FFELP and FDLP, including nearly all HBCUs.

For many years, if a school's default rate exceeded 20 percent, the Department required it to implement a default management plan designed to reduce its default rate. The plan typically identified the measures the school was taking, such as revising its admissions policy to enroll more-qualified students, expanding job placement efforts, and conducting additional loan counseling activities. At various times from 1990 through the first half of 1996, the Department required 92 HBCUs to implement a default management plan. Effective July 1996, the Department no longer required high-default-rate schools to submit or implement default management plans because it lacked the resources to effectively oversee schools' adherence to the requirement. However, the Department encourages schools to continue to implement default management plans to help prevent students from defaulting on their student loans.
In August 1993, we reported that 33 HBCUs had 1988-90 cohort default rates equal to or greater than the 25-percent statutory threshold. The schools could have become ineligible for continued participation in student loan programs if their default rates had persisted at the reported levels and the Congress had not extended the HBCUs' exemption from the statutory threshold. Based on 1993-95 cohort default rates, 8 of the 33 HBCUs still have default rates over the threshold (see table 3) and remain in the program because of the exemption. Nineteen of the 33 HBCUs have subsequently lowered their default rates and no longer exceed the threshold. Six of the 33 HBCUs no longer participate in federal student loan programs: two merged with other schools, one was annexed by another HBCU, and three lost accreditation and therefore were no longer eligible to participate.

Of the 19 HBCUs with lower default rates, 9 had rates below the 25-percent threshold in all three fiscal years, 1993-95. Five HBCUs had rates below the threshold for 2 years, and another five had rates below the threshold in 1 year. All 19 HBCUs had lower 3-year default rate averages for fiscal years 1993-95 than for 1988-90. Appendix IV shows the default rate history for these 19 HBCUs. Compared with the 33 HBCUs that exceeded the 25-percent threshold for the 1988-90 cohorts, only 14 HBCUs exceeded the threshold for the 1993-95 cohorts. Of these 14 HBCUs, 8 had also exceeded the threshold for the 1988-90 cohorts and remain at risk, and 6 had not been at risk in the 1988-90 cohorts.

To determine what default reduction measures HBCUs had taken that might have contributed to the decline in the number of HBCUs exceeding the statutory threshold since our 1993 report, we conducted a telephone survey of selected HBCU financial aid administrators. Our survey sample included 17 HBCUs that exceeded the threshold in 1988-90 but were below the threshold in 1993-95 and, for comparison, the 9 HBCUs that had the lowest average 1993-95 default rates. This sample allowed us to obtain the perspectives of administrators at HBCUs that had formerly had high default rates and at those that consistently had low default rates. We asked them what measures their schools had taken to reduce or minimize their student loan default rates.

Twenty-two administrators responded to our survey, 14 at HBCUs that formerly had high default rates and 8 at HBCUs that consistently had low default rates. They most often cited loan counseling or early intervention with delinquent borrowers as the measures they had taken to address defaults. In meeting the counseling requirement, the administrators described various practices that, in their opinion, made counseling more effective. These included requiring all incoming students, not just borrowers, to attend loan counseling; emphasizing personal finance and debt management; and bringing in credible outside experts, such as a lender or guaranty agency representative, to give presentations to students during counseling sessions. In addition, the policy at several of these HBCUs was to direct students, at the time of their enrollment, to other financial aid resources, such as grants, scholarships, and work-study programs, so that students could minimize or avoid indebtedness. While these administrators said that they contacted delinquent borrowers as part of their default prevention effort, a minor difference emerged in how they implemented this measure.
About half the administrators at HBCUs that previously had high default rates said that their schools had created a default rate manager position or retained a consultant to track and contact delinquent borrowers. Only one of the administrators at an HBCU that consistently had low default rates reported taking similar action.

The Department of Education reviewed a draft of this report and had no formal comments, although it provided several technical suggestions that we incorporated as appropriate. Copies of this report will be provided to appropriate congressional committees, the Secretary of Education, and others who are interested. If you have any questions or wish to discuss this material further, please call me or Joseph J. Eglin, Jr., Assistant Director, at (202) 512-7014. Major contributors include Deborah L. Edwards, Daniel C. Jacobsen, Robert B. Miller, Charles M. Novak, Meeta Sharma, and Edward H. Tuchman.

Over the past decade, a growing body of research has established that certain measures of students' academic preparation and socioeconomic status are predictors of how likely students are to default on student loans. In general, research has shown that default rates tend to be higher among students who are not as well prepared academically as others and whose families are not as well off economically. We were asked to address several issues regarding default rates at Historically Black Colleges and Universities (HBCU), including an analysis of these kinds of links.

To identify student characteristics that have been shown to predict student loan defaults, we searched the available literature. We selected studies that were published within the past 10 years and that used multivariate analysis to show a link between student characteristics and default rates. Although we did not find many relevant studies that met these criteria, we identified and relied on the following four key studies:

Wellford W. Wilms, Richard W. Moore, and Roger E. Bolus, "Whose Fault Is Default? A Study of the Impact of Student Characteristics and Institutional Practices on Guaranteed Student Loan Default Rates in California," Educational Evaluation and Policy Analysis, Vol. 9, No. 1 (spring 1987), pp. 41-54.

Mark Dynarski, Analysis of Factors Related to Default (Princeton, N.J.: Mathematica Policy Research, Inc., April 1991).

Laura Green Knapp and Terry G. Seaks, "An Analysis of the Probability of Default on Federally Guaranteed Student Loans," The Review of Economics and Statistics, August 1992.

J. Fredericks Volkwein and Bruce P. Szelest, "Individual and Campus Characteristics Associated with Student Loan Default," Research in Higher Education, Vol. 36, No. 1 (1995).

These empirical research studies collectively analyzed student borrowers from proprietary schools as well as from public and private 2-year and 4-year colleges and universities. The studies showed that default behavior was linked more closely to the characteristics of students than to those of schools. We used these and other related studies to identify academic preparation characteristics (graduation rates, high school grades, and freshman retention and acceptance rates) and socioeconomic characteristics (parental income, level of education, and marital status) of students that are associated with student loan defaults. Other studies we used include the following:
Alexander W. Astin, Lisa Tsui, and Juan Avalos, Degree Attainment Rates at American Colleges and Universities: Effects of Race, Gender and Institutional Type, Higher Education Research Institute, University of California, Los Angeles, September 1996.

Alexander W. Astin, The Black Undergraduate: Current Status and Trends in the Characteristics of Freshmen, Higher Education Research Institute, University of California, Los Angeles, July 1990.

Shirley L. Mow and Michael T. Nettles, "Minority Student Access to, and Persistence and Performance in, College: A Review of the Trends and Research Literature," Higher Education: Handbook of Theory and Research, Vol. 6 (New York: Agathon Press, 1990).

To determine how these academic preparation and socioeconomic characteristics of students enrolled at HBCUs compared with those of students at all colleges and universities, we reviewed the literature and identified one research study that reported academic and socioeconomic information by type of school. This information (except graduation rates) came from The American Freshman: National Norms for Fall 1996, a longitudinal study consisting of an annual survey published by the Higher Education Research Institute, University of California, Los Angeles. Statistics on remedial work were obtained from the prior year's survey, which contained the most recent data available at the time we performed our analysis. The fall 1996 freshman survey was based on a sample of 251,232 first-time, full-time students at 494 colleges and universities. The respondents were students enrolled full-time who either graduated from high school in the same year as entering college or had no previous college experience. These students were enrolled in the following 494 public and private colleges and universities: 14 4-year HBCUs; 50 2-year colleges that offered associate's degrees or were known as "terminal vocational" colleges; 363 4-year colleges that offered postbaccalaureate programs but did not award a sufficient number of earned doctoral degrees to be classified as universities; and 67 universities that granted a certain minimal number of earned doctoral degrees.

Information on 6-year graduation rates, defined as the percentage of first-time, full-time degree-seeking freshmen who enrolled in fall 1989 and completed their bachelor's degree by fall 1995 at the same school, came primarily from U.S. News and World Report's 1996 America's Best Colleges survey. The 1996 survey was based on the completion of an extensive questionnaire by more than 1,400 accredited 4-year colleges and universities and represented the tenth edition of America's Best Colleges. For the 83 4-year HBCUs examined, we obtained graduation rates for 82 HBCUs, 77 from U.S. News and 5 from the HBCUs directly (1 HBCU did not provide its graduation rate). Also, for 1,050 4-year non-HBCUs, we obtained U.S. News graduation rates in a computerized summary from the Postsecondary Education Opportunity newsletter. For comparison purposes, we calculated two average graduation rates, one for HBCUs and one for non-HBCUs.

To determine the extent to which HBCU undergraduate student characteristics associated with loan defaults differed, we classified 4-year HBCUs as those with low, medium, and high default rates.
To define these groupings for the 83 HBCUs, we used a two-part analysis: we (1) calculated a 3-year average default rate for each institution using its 1993-95 cohort rates and (2) used the 3-year average rate to classify HBCUs as low-default if their rates were less than 15 percent, medium-default if their rates were equal to or greater than 15 percent but less than 25 percent, and high-default if their rates were equal to or greater than 25 percent.

We obtained student characteristics data from (1) Department of Education records and reports, such as the National Student Loan Data System, the Free Application for Federal Student Aid database for academic year 1995-96, and the Integrated Postsecondary Education Data System surveys on 1995 fall enrollments and on school year 1995-96 institutional characteristics; (2) HBCUs that we contacted, as needed, to obtain student academic preparation characteristics missing from other data sources; and (3) U.S. News & World Report's 1996 America's Best Colleges survey.

We obtained information on measures the Department of Education has taken or planned to help HBCUs lower their default rates from federal regulations, Department publications, and interviews with Department officials. We reviewed the Department's Default Management Report for the 1993-95 cohorts to determine the current status of the 33 HBCUs identified in our earlier report as having the potential to lose their eligibility to participate in title IV loan programs because of high default rates.

To determine the measures HBCUs were taking to reduce their default rates, we conducted a telephone survey of financial aid administrators at 26 HBCUs. Seventeen of these had been among the 33 HBCUs identified in our earlier report as being at risk of losing their eligibility but had subsequently lowered their default rates below the statutory level for at least 1 of the 3 1993-95 cohort years. Two of these HBCUs did not respond to repeated requests for an interview. The financial aid director at another school was on extended leave, and the acting director was reluctant to comment because he had been on campus only a few weeks. We also surveyed administrators at nine HBCUs that had the lowest 3-year average default rates among HBCUs for the 1993-95 cohorts. One of these schools did not respond to repeated requests for an interview.

Although we did not validate the reliability of the data derived from the sources indicated, these data are readily available, and the education community relies on them. We conducted our review between June 1997 and January 1998 in accordance with generally accepted government auditing standards.

From our review of research, we identified and selected academic preparation and socioeconomic student characteristics that have been shown to affect student loan default rates and for which data were available. In cases in which data on a characteristic were unavailable, we judgmentally selected a related characteristic for which data were readily available as a substitute for the characteristic identified in the research. Thus, depending on data availability, the characteristics used to compare HBCUs and all colleges and universities differed from those used to compare high- and low-default-rate HBCUs (see table II.1).
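Because the default-rate grouping described above is purely arithmetic, it can be restated compactly. The following Python sketch illustrates the two-part analysis; the school names and cohort rates are hypothetical, for illustration only.

```python
def classify_hbcu(cohort_rates):
    """Average a school's 1993-95 cohort default rates (in percent)
    and assign the default-rate group used in this appendix:
    low < 15, 15 <= medium < 25, high >= 25."""
    average = sum(cohort_rates) / len(cohort_rates)
    if average < 15:
        group = "low"
    elif average < 25:
        group = "medium"
    else:
        group = "high"
    return average, group

# Hypothetical schools and 1993-95 cohort default rates.
schools = {
    "School A": [10.0, 12.0, 13.0],
    "School B": [18.0, 20.0, 22.0],
    "School C": [28.0, 31.0, 34.0],
}
for name, rates in schools.items():
    average, group = classify_hbcu(rates)
    print(f"{name}: {average:.1f} percent average, {group}-default group")
```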
Table II.1: Availability of Data on Characteristics That Research Has Shown to Be Useful Indicators of Student Loan Default

High school grades: Average grade earned in high school (A, B, or C). High grades linked to lower default rates.

Remedial work: A noncredit or reduced-credit course in higher education designed to increase the student's ability to pursue a course of study leading to a certificate or degree. Little to no remedial education linked to lower default rates.

Freshman retention rate: Over 4 years (beginning with fall 1991), the average percentage of first-time, full-time degree-seeking students who reenrolled in the fall of their sophomore year; a measure of the student's ability to stay in college. High retention rates linked to lower default rates.

Freshman acceptance rate: The percentage of first-time, first-year applicants who were accepted for admission in fall 1995; an indicator of a school's selectiveness in admitting academically qualified applicants. Low acceptance rates linked to lower default rates.

6-year graduation rate: The percentage of first-time, degree-seeking (freshman) students who completed a bachelor's degree from the same school within 6 years of fall 1989 initial enrollment; shown to be a culmination of good academic preparation. High graduation rates linked to lower default rates.

Parents' marital status: Married (living together) or not married (single, divorced, separated, or one or both deceased). Married status linked to lower default rates.

Parents' education: Highest level of education completed by either parent. Higher levels of education linked to lower default rates.

Parents' income: For comparing HBCUs to non-HBCUs, parents' income is the student's estimate of parents' 1995 total income, before taxes; for comparing high- to low-default HBCUs, parents' income is their adjusted gross income as reported on the 1995-96 school year Free Application for Federal Student Aid. Higher incomes linked to lower default rates.

Carnegie classification: Used as a substitute measure because neither high school grade nor admissions test score data were available to compare high- and low-default-rate HBCUs.

The Carnegie Foundation for the Advancement of Teaching has developed a system for classifying, largely based on academic mission, about 3,600 colleges and universities in the United States that are degree-granting institutions and accredited by an agency recognized by the Department of Education. Schools are classified according to their highest level of offering, the number of degrees conferred by discipline, and the amount of federal support for research received by the school; some categories also rely on the selectivity of the school's admissions. Since the classifications reflect levels and numbers of degrees conferred as well as admissions restrictions, we consider these classifications to be a substitute for student academic preparation. To facilitate making comparisons among the 4-year HBCUs included in our review, we consolidated the various Carnegie classifications for each HBCU into the following two categories:

1. High Carnegie School Classification. Schools we describe as having higher degree levels or being more admissions restrictive and that had one of the following Carnegie classifications:

Research Universities I: giving high priority to research, awarding 50 or more doctoral degrees each year, and receiving annually $40 million or more in federal support.

Doctoral Universities I: awarding at least 40 doctoral degrees annually in five or more disciplines.

Doctoral Universities II: awarding annually at least 10 doctoral degrees in three or more disciplines or 20 or more doctoral degrees in one or more disciplines.

Master's (Comprehensive) Universities and Colleges I: awarding 40 or more master's degrees annually in three or more disciplines.
Master’s (Comprehensive) Universities and Colleges II: awarding 20 or more master’s degrees annually in one or more disciplines. Baccalaureate (Liberal Arts) Colleges I: awarding 40 percent or more of their baccalaureate degrees in liberal arts fields and being restrictive in admissions. 2. Low Carnegie School Classification. Schools we describe as having fewer liberal arts degrees, being less admissions restrictive, or being specialized and that had one of the following Carnegie classifications: Baccalaureate Colleges II: awarding less than 40 percent of their baccalaureate degrees in liberal arts fields or being less restrictive in admissions. Theological seminaries, Bible colleges, and other institutions offering degrees in religion. Teachers colleges. The following table lists the default rate history for 19 HBCUs whose 1988-90 rates exceeded the statutory threshold of 25 percent and whose rates had fallen below the threshold for the most current (1993-95) cohort years. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed several issues regarding default rates at historically black colleges and universities (HBCU), focusing on: (1) a comparison of freshman students at HBCUs with those at all colleges and universities in terms of the academic and socioeconomic characteristics that have been linked to student loan defaults; (2) differences in academic and socioeconomic characteristics of undergraduate students at 4-year HBCUs with higher default rates compared with those at HBCUs with lower default rates; (3) measures the Department of Education has taken or planned to help HBCUs reduce their student loan default rates; (4) the number of HBCUs that are potentially at risk of losing title IV student loan eligibility because of high default rates in 1993-95, and how many of these were potentially at risk in 1988-90; and (5) measures HBCUs have taken to reduce or minimize their student loan default rates.

GAO noted that: (1) HBCUs have enrolled a higher percentage of freshmen who, compared with their peers at all institutions, are less prepared academically and come from more disadvantaged socioeconomic backgrounds; (2) the 1995 graduation rate for 4-year HBCUs (35 percent) was substantially below that of non-HBCU students (54 percent); (3) students at HBCUs were twice as likely to come from a home where parents were divorced or separated, and their parents generally had lower education and income levels than parents of students at all colleges and universities; (4) when the analysis is narrowed to only HBCUs, the same pattern is found; (5) in general, HBCUs with lower default rates enrolled students with more academic preparation and higher socioeconomic levels; (6) parents of students receiving federal financial aid at HBCUs with lower default rates generally had higher average adjusted gross incomes and more education and were more likely to be married; (7) the Department of Education employs a number of measures to help schools reduce student loan defaults; (8) these measures apply to all schools, as the Department has no separate or specific default reduction program for HBCUs; (9) the Department's primary efforts were introduced in 1989 as its default reduction initiative and include such activity as supporting schools' efforts to provide financial aid counseling to student borrowers and followup with delinquent borrowers; (10) according to the most recent computations available (for 1993-95), 14 HBCUs were potentially at risk of losing their student loan program eligibility because their default rates remained at or above 25 percent for 3 consecutive years; (11) this is fewer than the 33 HBCUs that GAO reported in August 1993 as potentially at risk on the basis of their 1988-90 default rates; (12) of these 33 HBCUs, 8 remained potentially at risk on the basis of their 1993-95 default rates (6 more subsequently became potentially at risk), 19 were no longer at risk and were eligible to participate in federal student loan programs, and 6 were no longer participating in the program; (13) financial aid administrators at 22 HBCUs GAO surveyed cited default reduction measures promoted by the Department--loan counseling and early intervention with delinquent borrowers--as the default reduction measures they most often used in managing their student loan default rates; and (14) this survey included administrators at 14 of the 33 HBCUs that GAO previously reported could be at risk of losing their student loan eligibility--if they were not subject to the exemption--based on their
1988-90 default rates.
Within USDA, FNS has overall responsibility for overseeing the school-meals programs, which includes promulgating regulations to implement authorizing legislation, setting nationwide eligibility criteria, and issuing guidance. School-meals programs are administered at the state level by a designated state agency that issues policy guidance and other instructions to the school districts providing the meals to ensure awareness of federal and state requirements. School districts are responsible for completing application, certification, and verification activities for the school-meals programs, and for providing children with nutritionally balanced meals each school day. The designated state agency conducts periodic reviews of the school districts to determine whether program requirements are being met. Schools and households that participate in free or reduced-price meal programs may be eligible for additional federal and state benefits.

Household income levels determine whether children qualify for free or reduced-price meals. Children from families with incomes at or below 130 percent of the federal poverty level are eligible for free meals; this income threshold for a family of four was $28,665 in the 2010–2011 school year. Those with incomes between 130 percent and 185 percent of the federal poverty level are eligible for reduced-price meals. Income is any money received on a recurring basis—including, but not limited to, gross earnings from work, welfare, child support, alimony, retirement, and disability benefits—unless specifically excluded by statute. In addition, students who are in households receiving benefits under certain public-assistance programs—specifically, SNAP, Temporary Assistance for Needy Families (TANF), or the Food Distribution Program on Indian Reservations (FDPIR)—or who meet certain approved designations (such as students who are designated as homeless, runaway, or migrant; or who are foster children) are eligible for free school meals regardless of income.

In May 2014, we reported that USDA had taken several steps to implement or enhance controls to identify and prevent ineligible beneficiaries from receiving school-meals benefits. USDA worked with Congress to develop legislation to automatically enroll students who receive SNAP benefits for free school meals; SNAP has a more-detailed certification process than the school-meals program. For our May 2014 report, USDA officials told us that they were emphasizing the use of direct certification because, in their opinion, it helps prevent certification errors without compromising access. Direct certification reduces the administrative burden on SNAP households, as they do not need to submit a separate school-meals application. It also reduces the number of applications school districts must review. The share of school districts directly certifying SNAP-participant children grew from 78 percent in the 2008–2009 school year to 91 percent in the 2012–2013 school year, bringing the estimated percentage of SNAP-participant children directly certified for free school meals to 89 percent. USDA was also conducting demonstration projects in selected states and school districts to explore the feasibility of directly certifying children who participate in the Medicaid program.
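At its core, direct certification is a data match between a state's SNAP enrollment records and school enrollment rosters. The following minimal Python sketch illustrates the idea; the record layouts and the simple name-and-birthdate matching key are hypothetical simplifications, not the actual matching rules any state uses.

```python
# Minimal sketch of direct certification as a data match. The record
# layouts and the name/date-of-birth matching key are hypothetical;
# real state systems use their own identifiers and matching rules.
snap_participants = [
    {"name": "Jane Doe", "dob": "2004-05-01"},
    {"name": "John Roe", "dob": "2003-11-15"},
]
student_roster = [
    {"name": "Jane Doe", "dob": "2004-05-01", "school": "Lincoln Elementary"},
    {"name": "Mary Poe", "dob": "2005-02-20", "school": "Lincoln Elementary"},
]

snap_keys = {(p["name"], p["dob"]) for p in snap_participants}
directly_certified = [
    s for s in student_roster if (s["name"], s["dob"]) in snap_keys
]
for student in directly_certified:
    # A matched student is certified for free meals with no separate application.
    print(f'{student["name"]} directly certified for free meals (SNAP match)')
```

A key design point, consistent with the discussion above, is that the match replaces a household-submitted application entirely, which is why it reduces both household burden and the districts' application-review workload.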
USDA requires state agencies that administer school-meals programs to conduct regular, on-site reviews—referred to as "administrative reviews"—to evaluate school districts that participate in the school-meals programs. Starting in the 2013–2014 school year, USDA increased the frequency with which state agencies complete administrative reviews from every 5 years to every 3 years. As part of this process, state agencies are to conduct on-site reviews of school districts to help ensure that applications are complete and that the correct eligibility determinations were made based on applicant information. School districts that have adverse findings in their administrative reviews are to submit a corrective-action plan to the state agency, and the state agency is to follow up to determine whether the issue has been resolved. In February 2012, USDA distributed guidance to state administrators clarifying that school districts have the authority to review approved applications for free or reduced-price meals for school-district employees when known or available information indicates that school-district employees may have misrepresented their incomes on their applications.

In our May 2014 report, we identified opportunities to strengthen oversight of the school-meals programs while ensuring legitimate access, including clarifying the use of for-cause verification, studying the feasibility of electronic data matching to verify income, and verifying a sample of households that are categorically eligible for assistance. As described in USDA's eligibility manual for school meals, school districts are obligated to verify applications they deem questionable, which is referred to as for-cause verification. We reported in May 2014 that officials from 11 of the 25 school districts we examined told us that they conduct for-cause verification. These officials provided examples of how they would identify suspicious applications, such as when a household submits a modified application—changing income or household members—after being denied, or when different households include identical public-assistance benefit numbers (e.g., if different households provide identical SNAP numbers). However, officials from 9 of the 25 school districts we examined told us that they did not conduct any for-cause verification. For example, one school-district official explained that the school district accepts applications at face value. Additionally, officials from 5 of the 25 school districts told us that they conduct for-cause verification only if someone (such as a member of the public or a state agency) alerts them to a questionable household. Although not generalizable, responses from these school districts provide insights about whether and under what conditions school districts conduct for-cause verifications.

In April 2013, USDA issued a memorandum stating that, effective for the 2013–2014 school year, all school districts must specifically report the total number of applications that were verified for cause. However, the outcomes of those verifications would be grouped with the outcomes of applications that have undergone standard verification. As a result, we reported in May 2014 that USDA would not have information on specific outcomes, which it may need to assess the effectiveness of for-cause verifications and to determine what actions, if any, are needed to improve program integrity.
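One of the indicators district officials described, identical public-assistance benefit numbers appearing on applications from different households, lends itself to a simple automated screen. The Python sketch below flags shared benefit numbers; the application records are hypothetical, and a real district system would draw on its own application database.

```python
from collections import defaultdict

# Hypothetical applications; in practice these would come from the
# district's application records.
applications = [
    {"household_id": "H1", "snap_number": "123456789"},
    {"household_id": "H2", "snap_number": "123456789"},  # same number as H1
    {"household_id": "H3", "snap_number": "987654321"},
]

households_by_number = defaultdict(set)
for app in applications:
    if app["snap_number"]:
        households_by_number[app["snap_number"]].add(app["household_id"])

for number, households in households_by_number.items():
    if len(households) > 1:
        print(f"Benefit number {number} appears on applications from "
              f"households {sorted(households)}: candidate for for-cause verification")
```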
While USDA had issued guidance specific to school-district employees and instructed school districts, in its school-meals eligibility manual, to verify questionable applications, we found that the guidance did not provide possible indicators or describe scenarios that could assist school districts in identifying questionable applications. Hence, in May 2014, we recommended that USDA evaluate the data collected on for-cause verifications for the 2013–2014 school year to determine whether for-cause verification outcomes should be reported separately and, if appropriate, develop and disseminate additional guidance for conducting for-cause verification that includes criteria for identifying possible indicators of questionable or ineligible applications. USDA concurred with this recommendation and in January 2015 told us that FNS would analyze the 2013–2014 school year data to determine whether capturing the results of for-cause verification separately from the results of standard verification would assist the agency's efforts to improve integrity and oversight. USDA also said that FNS would consider developing and disseminating additional guidance, as we recommended.

In addition to for-cause verification, school districts are required to annually verify a sample of household applications approved for free or reduced-price school-meals benefits to determine whether the household has been certified to receive the correct level of benefits—we refer to this process as "standard verification." The sample is generally limited to approved applications considered "error-prone." Error-prone is statutorily defined as approved applications in which stated income is within $100 of the monthly or $1,200 of the annual applicable income-eligibility guideline. Households whose reported incomes are not within $1,200 of either the free- or reduced-price-meal annual income threshold would generally not be subject to this verification process.

In a nongeneralizable review of 25 approved civilian federal-employee household applications for our May 2014 report, we found that 9 of the 19 households that self-reported household income and size information were not eligible for the free or reduced-price-meal benefits they were receiving because their incomes exceeded eligibility guidelines. Two of these 9 households stated annualized incomes on their applications that were within $1,200 of the eligibility guidelines and therefore could have been selected by the district for standard verification; however, we determined that they were not selected or verified. The remaining 7 households stated annualized incomes that were not within $1,200 of the eligibility guidelines and thus would not have been subject to standard verification.

For example, one household we reviewed submitted a school-meals application for the 2010–2011 school year seeking school-meals benefits for two children. The household stated an annual income of approximately $26,000, and the school district appropriately certified the household to receive reduced-price-meal benefits based on the information on the application. However, we reviewed payroll records and determined that the adult applicant's income at the time of the application was approximately $52,000—making the household ineligible for benefits. This household also applied for and received reduced-price-meal benefits for the 2011–2012 and 2012–2013 school years by understating its income; its 2012–2013 annualized income was understated by about $45,000.
Because the income stated on the applications during these school years was not within $1,200 per year of the income-eligibility requirements, the application was not deemed error-prone and was not subject to standard verification. Had this application been subjected to verification, a valid pay stub would have indicated that the household was ineligible.

One method to identify potentially ineligible applicants and effectively enforce program-eligibility requirements is to independently verify income information with an external source, such as state payroll data. Through data matching, states or school districts could identify households that have income greater than the eligibility limits and follow up. Such a risk-based approach would allow school districts to focus on potentially ineligible families while not interrupting program access for other participants. Electronic verification of a sample of applicants (beyond those that are statutorily defined as error-prone) through computer matching by school districts or state agencies with other sources of information—such as state income databases or public-assistance databases—could help effectively identify potentially ineligible applicants.

In May 2014, we recommended that USDA develop and assess a pilot program to explore the feasibility of computer matching school-meal participants with other sources of household income, such as state income databases, to identify potentially ineligible households—those with income exceeding program-eligibility thresholds—for verification. We also recommended that, if the pilot program shows promise in identifying ineligible households, the agency develop a legislative proposal to expand the statutorily defined verification process to include this independent electronic verification for a sample of all school-meals applications. USDA concurred with our recommendations and told us in January 2015 that direct-verification computer matching is technologically feasible with data from means-tested programs, and that data from SNAP and other programs are suitable for school-meals program verification in many states. USDA said that FNS would explore the feasibility of using other income-reporting systems for program verification without negatively affecting program access for eligible students or violating statutory requirements. Depending on the results of the pilot program, USDA said that FNS would consider submitting a legislative proposal to expand the statutorily defined verification process, as we recommended.
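The error-prone band and the proposed income match can both be expressed in a few lines. The Python sketch below is a hypothetical illustration, not the statutory procedure: it applies the $1,200 annual error-prone test and then flags applications whose externally matched income (for example, from state payroll data) exceeds the eligibility limit. The free-meal threshold is the $28,665 family-of-four figure for the 2010–2011 school year cited earlier; the reduced-price threshold of about $40,793 (185 percent of the poverty level for that year) is supplied here for illustration.

```python
FREE_THRESHOLD = 28_665      # 130 percent of poverty, family of four, 2010-2011
REDUCED_THRESHOLD = 40_793   # 185 percent of poverty, same year (illustrative)
ANNUAL_BAND = 1_200          # annual band from the statutory error-prone definition

def is_error_prone(stated_annual_income):
    """True if stated income falls within $1,200 of either annual
    income-eligibility guideline, the statutory error-prone test."""
    return any(
        abs(stated_annual_income - threshold) <= ANNUAL_BAND
        for threshold in (FREE_THRESHOLD, REDUCED_THRESHOLD)
    )

def flag_for_followup(stated_annual_income, matched_annual_income):
    """Flag an application when an external source shows income above
    the reduced-price limit even though the stated income qualifies."""
    return (stated_annual_income <= REDUCED_THRESHOLD
            and matched_annual_income > REDUCED_THRESHOLD)

# The household example above, in these terms: a stated income of about
# $26,000 is more than $1,200 below the free-meal threshold, so it is
# never sampled; a payroll match showing about $52,000 would flag it.
print(is_error_prone(26_000))             # False - escapes standard verification
print(flag_for_followup(26_000, 52_000))  # True - candidate for follow-up
```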
However, when we verified the information with the state, we learned that the number was for medical-assistance benefits—a program that is not included in categorical eligibility for the school-meals programs. On the basis of our review of payroll records, this household's annualized income of at least $59,000 during 2010 would not have qualified the household for free or reduced-price-meal benefits. This household applied for school-meals benefits during the 2011–2012 and 2012–2013 school years, again indicating the same public-assistance benefit number, and again was approved for free-meal benefits. Figure 1 shows the results of our review.

Because applications that indicate categorical eligibility are generally not subject to standard verification, these ineligible households would likely not be identified unless they were selected for for-cause verification or as part of the administrative review process, even though they contained inaccurate information. These cases underscore the potential benefits that could be realized by verifying beneficiaries with categorical eligibility. In May 2014, we recommended that USDA explore the feasibility of verifying the eligibility of a sample of applications that indicate categorical eligibility for program benefits and are therefore not subject to standard verification. USDA concurred with this recommendation and told us in January 2015 that FNS would explore technological solutions to assess state and local agency capacity to verify the eligibility of a sample of applications that indicate categorical eligibility for school-meals-program benefits. In addition, USDA said that FNS would clarify to states and local agencies the procedures for confirming and verifying an application's status as categorically eligible, including for applicants who reapply after being denied program benefits as a result of verification.

Chairman Roberts, Ranking Member Stabenow, and Members of the Committee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. For further information on this testimony, please contact Stephen Lord at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jessica Lucas-Judy, Assistant Director; Marcus Corbin; Ranya Elias; Gabrielle Fagan; Colin Fallon; Kathryn Larin; Olivia Lopez; Maria McMullen; and Daniel Silva.
In fiscal year 2014, 30.4 million children participated in the National School Lunch Program and 13.6 million children participated in the School Breakfast Program, partly funded by $15.1 billion from USDA. In May 2014, GAO issued a report on (1) steps taken to help identify and prevent ineligible beneficiaries from receiving benefits in school-meals programs and (2) opportunities to strengthen USDA's oversight of the programs. This testimony summarizes GAO's May 2014 report (GAO-14-262) and January 2015 updates from USDA.

For the May 2014 report, GAO reviewed federal school-meals program policies, interviewed program officials, and randomly selected a nongeneralizable sample that included 25 approved applications from civilian federal-employee households out of 7.7 million total approved applications in 25 of 1,520 school districts in the Dallas, Texas, and Washington, D.C., regions. GAO performed limited eligibility testing using civilian federal-employee payroll data from 2010 through 2013 due to the unavailability of other data sources containing nonfederal-employee income. GAO also conducted interviews with households. GAO referred potentially ineligible households to the USDA Inspector General. In its 2014 report, GAO recommended that USDA explore the feasibility of (1) using computer matching to identify households with income that exceeds program-eligibility thresholds for verification and (2) verifying a sample of categorically eligible households. USDA generally agreed with the recommendations and has actions under way to address them.

In May 2014, GAO reported that the U.S. Department of Agriculture (USDA) had taken several steps to implement or enhance controls to identify and prevent ineligible beneficiaries from receiving school-meals benefits. For example: USDA worked with Congress to develop legislation to automatically enroll students who receive Supplemental Nutrition Assistance Program benefits for free school meals; this program has a more-detailed certification process than the school-meals program. Starting in the 2013–2014 school year, USDA increased the frequency with which state agencies complete administrative reviews of school districts from every 5 years to every 3 years. As part of this process, state agencies review applications to determine whether eligibility determinations were correctly made.

In its May 2014 report, GAO identified opportunities to strengthen oversight of the school-meals programs while ensuring legitimate access, such as the following: If feasible, computer matching income data from external sources with participant information could help identify households whose income exceeds eligibility thresholds. As of May 2014, school districts verified a sample of approved applications deemed "error-prone"—statutorily defined as those with reported income within $1,200 of the annual eligibility guidelines—to determine whether the household is receiving the correct level of benefits (referred to as standard verification in this testimony). In a nongeneralizable review of 25 approved applications from civilian federal households, GAO found that 9 of 19 households that self-reported household income and size information were ineligible and only 2 could have been subject to standard verification. Verifying a sample of categorically eligible applications could help identify ineligible households.
GAO reported that school-meal applicants who indicate categorical eligibility (that is, participating in certain public-assistance programs or meeting an approved designation, such as foster children) were eligible for free meals and were generally not subject to standard verification. In a nongeneralizable review of 25 approved applications, 6 households indicated categorical eligibility, but GAO found 2 were ineligible.
SSA’s disability programs provide cash benefits to people with long-term disabilities. The DI program provides monthly cash benefits and Medicare eligibility to severely disabled workers; SSI is an income assistance program for blind and disabled people. The law defines disability for both programs as the inability to engage in substantial gainful activity because of a severe physical or mental impairment that is expected to last at least 1 year or result in death. Both DI and SSI are administered by SSA and state disability determination services (DDS). SSA field offices determine whether applicants meet the nonmedical criteria for eligibility and at the DDSs, a disability examiner and a medical consultant (physician or psychologist) make the initial determination of whether the applicant meets the definition of disability. Denied claimants may ask the DDS to reconsider its finding and, if denied again, may appeal to an ALJ within SSA’s Office of Hearings and Appeals (OHA). The ALJ usually conducts a hearing at which applicants and medical or vocational experts may testify and submit new evidence. Applicants whose appeals are denied may request review by SSA’s Appeals Council and may further appeal the Council’s decision in federal court. Between fiscal years 1986 and 1996, the increasing number of appealed cases has caused workload pressures and processing delays. During that time, appealed cases increased more than 120 percent. In the last 3 years alone, average processing time for appealed cases rose from 305 days in fiscal year 1994 to 378 days in fiscal year 1996 and remained essentially the same for the first quarter of fiscal year 1997. In addition, “aged” cases (those taking 270 days or more for a decision) increased from 32 percent to almost 43 percent of the backlog. In addition to the backlog, high ALJ allowances (in effect, “reversals” of DDS decisions to deny benefits) have been a subject of concern for many years. Although the current ALJ allowance rate has dropped from 75 percent in fiscal year 1994, ALJs still allow about two-thirds of all disability claims they decide. Because chances for award at the appeals level are so favorable, there is an incentive for claimants to appeal. For several years, about three-quarters of all claimants denied at the DDS reconsideration level have appealed their claims to the ALJ level. In 1994, SSA adopted a long-term plan to redesign the disability decision-making process to improve its efficiency and timeliness. As a key part of this plan, SSA developed initiatives to achieve similar decisions on similar cases regardless of whether the decisions are made at the DDS or the ALJ level. In July 1996, several of these initiatives, called “process unification,” were approved for implementation by SSA’s Commissioner. SSA expects that process unification will result in correct decisions being made at the earliest point possible, substantially reducing the proportion of appealed cases and ALJ allowance rates as well. Because SSA expects that implementation of its redesigned disability decision-making process will not be completed until after the year 2000, SSA developed a Short Term Disability Project Plan (STDP) to reduce the existing backlog by introducing new procedures and reallocating staff. STDP is designed to expedite processing of claims in a way that will support redesign and achieve some near-term results in reducing the backlog. 
SSA expects that STDP’s major effect will come primarily from two initiatives—regional screening unit and prehearing conferencing activities. In the screening units, DDS staff and OHA attorneys work together to identify claims that could be allowed earlier in the appeals process. Prehearing conferencing shortens processing time for appealed cases by assigning OHA attorneys to perform limited case development and review cases to identify those that could potentially be allowed without a formal hearing. The plan called for reducing the backlog to 375,000 appealed cases by December 31, 1996. Despite SSA attempts to reduce the backlog through its STDP initiatives, the agency did not reach its goal of reducing this backlog to 375,000 by December 1996. SSA attributes its difficulties in meeting its backlog target to start-up delays, overly optimistic projections of the number of appealed cases that would be processed, and an unexpected increase in the number of appealed cases. The actual backlog in December was about 486,000 cases and has risen in the last few months to 491,000 cases, still about 116,000 over the goal. Although SSA did not reach its backlog goal, about 98,000 more cases may have been added to the backlog if STDP steps had not been undertaken. The contribution made by STDP underscores the need for SSA to continue its short-term effort while moving ahead to address the disability determination process in a more fundamental way in the long term. In addition to the backlog problem, SSA’s decision-making process has produced a high degree of inconsistency between DDS and ALJ awards, as shown in table 1. Although award rates representing DDS decision-making vary by impairment, ALJ award rates are high regardless of the type of impairment. For example, sample data showed that DDS award rates ranged from 11 percent for back impairments to 54 percent for mental retardation. In contrast, ALJ award rates averaged 77 percent for all impairment types with only a smaller amount of variation among impairment types. SSA’s process requires adjudicators to use a five-step sequential evaluation process in making their disability decisions (see table 2). Although this process provides a standard approach to decision-making, determining disability often requires that a number of complex judgments be made by adjudicators at both the DDS and ALJ levels. Social Security Disability: SSA Actions to Reduce Backlogs and Achieve More Consistent Decisions Deserve High Priority Questions asked in the sequential process Is the claimant engaging in substantial gainful activity? Does the claimant have an impairment that has more than a minimal effect on the claimant’s ability to perform basic work tasks and is expected to last at least 12 months? Do the medical facts alone show that the claimant’s impairment meets or equals the medical criteria for an impairment in SSA’s Listing of Impairments? Comparing the claimant’s residual functional capacity with the physical and mental demands of the claimant’s past work, can the claimant perform his or her past work? Based on the claimant’s residual functional capacity and any limitations that may be imposed by the claimant’s age, education, and skill level, can the claimant do work other than his or her past work? As the application proceeds through the five-step process, claimants may be denied benefits at any step, ending the process. Steps 1 and 2 ask questions about the claimant’s work activity and the severity of the claimant’s impairment. 
If the reported impairment is judged to be severe, adjudicators move to step 3. At this step, they compare the claimant's condition with a listing of medical impairments developed by SSA. Claimants whose conditions meet or are medically equivalent to the listings are presumed by SSA to be unable to work and are awarded benefits. Claimants whose conditions do not meet or equal the listings are then assessed at steps 4 and 5, where decisions must be made about the claimant's ability to perform prior work and any other work that exists in the national economy. To do this, adjudicators assess the claimant's capacity to function in the workplace, basing their judgments on the evidence, including physician opinions and reported symptoms, such as pain. Mental impairment assessments include judgments about the claimant's ability to understand, remember, and respond appropriately to supervision and normal work pressures. For physical impairments, adjudicators judge the claimant's ability to walk, sit, stand, and lift. To facilitate this, SSA has defined five levels of physical exertion ranging from very heavy to sedentary. For claimants unable to perform even sedentary activities, adjudicators may determine that a claimant can perform "less than a full range of sedentary" activities, a classification that often results in a benefit award.

Our analysis found that differing functional assessments by DDSs and ALJs are the primary reason for most ALJ awards. Since most DDS decisions use all five steps of the sequential evaluation process before denying a claim, almost all DDS denial decisions appealed to ALJs included such a functional assessment. On appeal, the ALJ follows the same sequential evaluation process as the DDS and likewise assesses the claimant's functional abilities in most awards. Data from SSA's ongoing ALJ study indicate that ALJs are much more likely than DDSs to find that claimants have severe limitations in functioning in the workplace (see table 3); in contrast, reviewers using the DDS approach found that less than 6 percent of the cases merited the "less than sedentary" classification.

Functional assessment also played a key role in a 1982 SSA study, which controlled for differences in evidence. This study indicated that DDS and ALJ decisionmakers reached different results even when presented with the same evidence. As part of the study, selected cases were reviewed by two groups of reviewers: one group reviewing the cases as ALJs would and the other reviewing the cases as DDSs would. Reviewers using the ALJ approach concluded that 48 percent of the cases should have received awards, while reviewers using the DDS approach concluded that only 13 percent of those same cases should have received awards.

The use of medical expertise also appears to influence the decisional differences at the DDS and ALJ levels. At the DDS level, medical consultants are responsible for making functional assessments. In contrast, ALJs have the sole authority to determine functional capacity and often rely on claimant testimony and the opinions of treating physicians. Although ALJs may call on independent medical experts to testify, our analysis shows that they do so in only 8 percent of the cases resulting in awards.
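The decision logic that both DDS adjudicators and ALJs must apply can be made concrete in a short sketch. The Python below is illustrative only, not SSA's adjudication system: it compresses each step's complex judgment into a single boolean input, and the function and parameter names are invented for this example.

```python
from enum import Enum

class Outcome(Enum):
    DENY = "deny benefits"
    AWARD = "award benefits"

def sequential_evaluation(
    engaging_in_sga: bool,          # step 1: substantial gainful activity
    impairment_is_severe: bool,     # step 2: more than minimal effect, expected to last 12 months
    meets_or_equals_listing: bool,  # step 3: SSA's Listing of Impairments
    can_perform_past_work: bool,    # step 4: rests on the functional assessment
    can_perform_other_work: bool,   # step 5: rests on the functional assessment
) -> Outcome:
    """Walk the five-step sequence of table 2; denial at any step ends the process."""
    # Step 1: a claimant engaging in substantial gainful activity is denied.
    if engaging_in_sga:
        return Outcome.DENY
    # Step 2: an impairment with no more than a minimal effect on basic
    # work tasks, or one not expected to last 12 months, results in denial.
    if not impairment_is_severe:
        return Outcome.DENY
    # Step 3: a condition that meets or medically equals the listings is
    # presumed disabling; benefits are awarded on the medical facts alone.
    if meets_or_equals_listing:
        return Outcome.AWARD
    # Step 4: a claimant who can still perform his or her past work is denied.
    if can_perform_past_work:
        return Outcome.DENY
    # Step 5: a claimant who can do other work in the national economy is
    # denied; one who cannot is awarded benefits.
    return Outcome.DENY if can_perform_other_work else Outcome.AWARD
```

The sketch makes plain why steps 4 and 5 drive inconsistency: the last two inputs are not objective facts but the product of the functional capacity assessments on which DDS and ALJ adjudicators most often differ, so two adjudicators can agree through step 3 yet reach opposite outcomes.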
To help reduce inconsistency, SSA issued nine rulings on July 2, 1996, addressing pain and other subjective symptoms, treating source opinions, and the assessment of functional capacity. SSA also plans to issue a regulation to provide additional guidance on assessing functional capacity at both the DDS and ALJ levels, specifically clarifying when a "less than sedentary" classification is appropriate. In addition, based on the nine rulings, SSA completed nationwide process unification training of over 15,000 adjudicators and quality reviewers between July 10, 1996, and February 26, 1997. In the training, SSA emphasized that it expects the "less than sedentary" classification to be used rarely. In the longer term, SSA plans to develop a simplified decision-making process, which would expand the role of functional capacity assessments. Because differences in functional capacity assessments are the primary reason for inconsistent decisions, SSA should proceed cautiously with its plan to expand the use of such assessments.

Procedures at the DDS and ALJ levels limit the usefulness of the DDS decision as a foundation for the ALJ decision. Often, ALJs are unable to rely on DDS decisions because the decisions lack supporting evidence and explanations of the reasons for denial, laying a weak foundation for the ALJ decision if the case is appealed. Moreover, although SSA requires ALJs to consider the DDS medical consultant's assessment of functional capacity, procedures at the DDS level do not ensure that such assessments are clearly explained. In a 1994 study, SSA found that written explanations of critical issues at the DDS level were inadequate in about half of the appealed cases that turned on complex issues. Without a clear explanation of the DDS decision, the ALJ could neither effectively consider it nor give it much weight.

At the ALJ level, claimants are allowed to claim new impairments and submit new or additional evidence, which also affects consistency between the two levels. In about 10 percent of cases appealed to the ALJ level, claimants switch their primary impairment from a physical claim to a mental claim. In addition, data from a 1994 SSA study show that claimants submitted additional evidence to the ALJ in about three-quarters of the sampled cases and that this additional evidence was an important factor in 27 percent of ALJ allowances.

To address the documentation issues, SSA plans to take steps to ensure that DDS decisions are better explained and are based on a more complete record so that they are more useful if appealed. On the basis of feedback during the process unification training, SSA plans further instructions and training in May 1997 for the DDSs on how and where in the case files they should explain how they reached their decisions. SSA also plans to issue a regulation clarifying the weight to be given to the DDS medical consultants' opinions at the ALJ level. To deal with the potential effect of new evidence, SSA plans to return to the DDSs about 100,000 selected cases a year for further consideration when new evidence is introduced at the ALJ level; in cases where the DDS would award benefits, the need for a more time-consuming and costly ALJ decision would be avoided. SSA plans to implement this project in May 1997. However, SSA's decision to limit such returns to about 100,000 cases may need to be reassessed in light of the potential benefits that could accrue from this initiative.

Although SSA has several quality review systems to examine disability decisions, none is designed to identify and reconcile the factors that contribute to differences between DDS and ALJ decisions.
For example, although ALJs are required to consider the opinion of the DDS medical consultant when making their own assessment of a claimant's functional capacity, such written DDS opinions are often lacking in the case files. Quality reviews at the DDS level do not focus effectively on whether or how well these opinions are explained in the record, despite the potential importance of such medical opinion evidence at the ALJ level. Moreover, SSA reviews too few ALJ awards to ensure that ALJs give appropriate consideration to the medical consultants' opinions or to identify ways to make those opinions more useful to the ALJs. Feedback on these issues could help improve consistency by making the DDS decision a more useful part of the overall adjudication process.

To improve consistency, SSA is completing work on a notice of proposed rulemaking, with a target issue date of August 1997 for a final regulation, to establish the basis for reviewing ALJ awards; the regulation would require ALJs to take corrective action on remand orders from the Appeals Council before benefits are paid. SSA has just started conducting preliminary reviews of ALJ awards, beginning with 200 cases a month, and plans to increase that number after the regulation is issued. SSA has set a first-year target of 10,000 cases to be reviewed, but this reflects only about 3 percent of the approximately 350,000 award decisions made by ALJs in 1996. Ultimately, SSA plans to implement quality review measures that provide consistent feedback on the application of policy. By doing this, the agency hopes to ensure that the correct decision is made at the earliest point in the process.

SSA must pursue these initiatives while managing other legislatively mandated workloads. The agency is required to conduct continuing disability reviews for children under age 18 who are likely to improve and for all low-birthweight babies within the first year of life. In addition, SSA is required to redetermine, using adult criteria, the eligibility of all 18-year-olds on SSI beginning on their 18th birthdays and to readjudicate 332,000 childhood disability cases by August 1997. Finally, thousands of noncitizens and drug addicts and alcoholics could appeal their benefit terminations, further increasing workload pressures.

Despite SSA's Short Term Disability Project Plan, the appealed case backlog is still high. Nevertheless, because the backlog would have been even higher without STDP, SSA will need to continue its effort to reduce the backlog to a manageable level until the agency, as part of its long-term redesign effort, institutes a permanent process to ensure timely and expeditious disposition of appeals. In addition, SSA is beginning to move ahead with more systemwide changes in its redesign of the disability claims process. In particular, it is on the verge of implementing initiatives to redesign the process, including ones for improving decisional consistency and the timeliness of overall claims processing. However, competing workload demands could jeopardize SSA's ability to make progress in reducing inconsistent decisions. We urge the agency to follow through on its initiatives to address the long-standing problem of decisional inconsistency with the sustained attention this difficult task requires. To do so, SSA, in consultation with this Subcommittee and others, will need to sort through its many priorities and do a better job of holding itself accountable for meeting its deadlines. Otherwise, plans and target dates will remain elusive goals and may never yield the dual benefits of helping to restore public confidence in the decision-making process and contributing to permanent reductions in the backlog. Mr.
Chairman, this concludes my prepared statement. At this time, I will be happy to answer any questions you or the other Subcommittee members may have. For more information on this testimony, please call Cynthia Bascetta, Assistant Director, at (202) 512-7207. Other major contributors are William Hutchinson, Senior Evaluator; Carol Dawn Petersen, Senior Economist; and David Fiske, Ellen Habenicht, and Carlos Evora, Senior Evaluators.
GAO discussed the Social Security Administration's (SSA) actions to reduce the current backlog of cases appealed to the agency's administrative law judges, focusing on: (1) how functional assessments, differences in procedures, and quality review contribute to inconsistent results between different decisionmakers; and (2) SSA's strategy to obtain greater decisional consistency. GAO noted that: (1) GAO's work shows that while SSA has developed broad-based plans to improve the management of its disability programs, many initiatives are just beginning and their effectiveness can be assessed only after a period of full-scale implementation; (2) for example, in the short term, SSA has taken action to try to deal with the backlog crisis, but it is still about 116,000 cases over its December 1996 goal of 375,000 cases; (3) in the longer term, SSA needs to come to grips with the systemic factors causing inconsistent decisions, which underlie the current high level of appealed cases and, in turn, the backlog crisis; (4) for example, GAO found that differences in assessments of functional capacity, different procedures, and weaknesses in quality reviews contribute to inconsistent decisions; and (5) although SSA is on the verge of implementing initiatives to deal with these factors, GAO is concerned that other congressionally mandated workload pressures, such as significantly increasing the number of continuing disability reviews and readjudicating childhood cases, could jeopardize the agency's ability to move ahead with its initiatives to reduce inconsistent decisions.
Although many of the larger agencies that transferred to DHS have been able to obtain unqualified or "clean" audit opinions on their annual financial statements, most employed significant effort and manual work-arounds to do so in order to overcome a history of poor financial management systems and significant internal control weaknesses. Furthermore, some of the entities that transferred may have weaknesses that have not yet been identified or reported merely because the problems were considered small or immaterial in relation to their large parent departments, such as the Department of Defense or the U.S. Department of Agriculture. Such weaknesses may become evident now that these smaller agencies are proportionately larger parts of DHS and may therefore be subject to increased audit scrutiny, adding to the extensive known challenges. Cumulatively, these weaknesses, and the efforts needed to resolve them to achieve sound financial management and business processes, are an important reason for amending the CFO Act to include DHS and for measuring DHS's financial management systems and internal control against the same important financial reform legislation and performance expectations as other federal departments and agencies.

DHS, like other federal agencies, has a stewardship obligation to prevent fraud, waste, and abuse, to use tax dollars appropriately, and to ensure financial accountability to the President, the Congress, and the American people. For the most part, DHS's component entities rely on legacy financial management systems and processes that are disparate, nonintegrated, outdated, and inefficient. DHS will need to focus on building future systems as part of its enterprise architecture approach to ensure an overarching framework for the agency's integrated financial management processes. Plans and standard accounting policies and procedures must be developed and implemented to bridge the many financial environments in which the inherited agencies currently operate into an integrated DHS system. Another significant challenge for DHS is fixing the previously identified weaknesses that the agencies bring with them, a number of which I will now discuss.

Although it received unqualified audit opinions on its fiscal year 2001 and 2002 financial statements, the former INS, previously part of the Department of Justice (DOJ), faces numerous challenges in achieving a sound financial management environment. Although INS was abolished and split into multiple bureaus within DHS, its prior financial management weaknesses will still need to be addressed and could be further complicated by this realignment. For fiscal year 2002, INS's financial statement auditors reported three material internal control weaknesses and reported that its systems were not in substantial compliance with FFMIA. Specifically, auditors noted limitations in the design and operation of INS's financial accounting system that required INS to use stand-alone systems or to obtain required financial information through manual processes and nonroutine adjustments as part of the financial statement preparation process. Having systems that can routinely produce information for financial reporting on demand for day-to-day decision making is one of the expected results of the President's Management Agenda, as well as one of the goals of FFMIA.
In addition, for both fiscal years 2001 and 2002, auditors reported that INS did not have a reliable system for providing regular, timely data on the numbers of completed and pending immigration applications and the associated collections of fees, valued at nearly $1 billion for fiscal year 2002. Accordingly, INS was not able to accurately and regularly determine the fees it earned without relying on an extensive servicewide, year-end physical count of pending applications, which in fiscal year 2002 covered over 5.4 million applications. INS has been developing a new tracking system to facilitate its inventory process. However, until the new system is implemented, INS must rely on inefficient manual processes that significantly disrupt its operations. These and other inherent weaknesses in INS's financial management process limit its ability to produce useful, accurate, and timely financial information.

Despite the importance and prevalence of information technology (IT) systems in accomplishing its core missions, INS has not yet established and implemented effective controls for managing its IT resources. The root cause of INS's systems problems has been the absence of effective enterprise architecture management and of a disciplined IT investment management process. To address these weaknesses, INS has been developing an enterprise architecture, including a current and target architecture as well as a transition plan. Similarly, INS has taken steps to implement rigorous and disciplined investment management controls. However, with the transfer to DHS and the splitting of INS, these plans will have to be reanalyzed, further delaying implementation of effective systems and complicating DHS's ability to produce reliable, timely, and accurate financial statements and information.

FEMA, the only CFO Act agency to transfer in its entirety to DHS, faces several major financial management challenges, in spite of receiving an unqualified opinion on its fiscal year 2002 financial statements. In fiscal year 2002, FEMA's auditors reported six material internal control weaknesses and reported that FEMA's financial management systems were not in substantial compliance with the requirements of FFMIA. One major weakness was FEMA's inability to efficiently prepare accurate financial statements, as called for in the President's Management Agenda. For example, auditors reported that for fiscal year 2002, FEMA did not have an integrated financial reporting process that could generate financial statements as a byproduct of existing processes and that its financial statements were prepared late and required significant revisions. In addition, auditors reported in fiscal year 2001 and again in fiscal year 2002 that FEMA did not have adequate accounting systems and processes to ensure that all property, plant, and equipment were properly recorded, accurately depreciated, and tracked in accordance with its policies and applicable federal accounting standards. As a result, FEMA's property management system cannot track items to supporting documentation or to a current location. Furthermore, FEMA lacks procedures to ensure that (1) equipment is consistently recorded on either a system or a component basis, (2) property inventories are performed properly, and (3) all equipment is entered into its personal property management system. Consequently, there is an increased risk that equipment and other property could be lost, stolen, or improperly recorded in FEMA's accounting records.
Because FEMA was the only agency to transfer to DHS in its entirety, it, unlike the other transferred agencies, has no legacy department to prepare financial statements for its first 5 months of fiscal year 2003 activity and no legacy Office of Inspector General (OIG) to audit them, leaving FEMA's financial management information for that period vulnerable to omissions, errors, and ultimately material misstatements. Given the weaknesses in, among other things, FEMA's property controls, we are initiating a review of FEMA's disbursement activity and property management controls covering this 5-month period. We will keep this Subcommittee informed of our progress in this review. Until corrective actions are implemented to address these weaknesses, FEMA will not be able to achieve effective financial accountability or ensure that property is properly accounted for.

In fiscal year 2002, Customs, then under Treasury, received a qualified opinion on a limited scope review of its internal controls. This qualified opinion resulted from the identification of four material weaknesses in Customs' internal controls by its independent auditors. For example, auditors reported that Customs' financial systems did not capture all transactions as they occurred during the year, did not record all transactions properly, were not fully integrated, and did not always provide for essential controls with respect to override capabilities. As a result, extensive manual procedures and analysis were required to process certain routine transactions and prepare year-end financial statements. Customs, which typically collects and processes over $23 billion in fees annually, was also found to have poor collection procedures throughout the agency.

Ongoing weaknesses in the design and operation of Customs' controls over trade activities and financial management and information systems continue to inhibit the effective management of these activities and the protection of trade revenue. For example, auditors reported that Customs' Automated Commercial System could not provide summary information on the total unpaid assessments for duties, taxes, and fees by individual importer. The system also could not generate periodic management information on outstanding receivables, the age of receivables, or other data necessary for managers to effectively monitor collection procedures. Such a capability would give managers timely access to program revenue information and allow them to more effectively present performance measures, which is critical for implementation of the President's Management Agenda.

Despite Customs' progress in implementing recommendations GAO and others have made over the years, numerous weaknesses continue to hinder progress toward developing Customs' planned import system, the Automated Commercial Environment (ACE). ACE is intended to replace the current system used for collecting import-related data and ensuring, among other things, that trade-related revenue is properly collected and allocated. DHS's management must provide sustained commitment to ensure ACE's successful implementation; until the system is fully implemented, billions in trade-related revenue will continue to be tracked by systems with inadequate controls. In addition, like INS, Customs faces additional financial management challenges because it was split into various components.
TSA was created by the Aviation and Transportation Security Act under the Department of Transportation (DOT) in November 2001 to develop transportation security policies and programs that contribute to providing secure transportation for the American public. Despite its short history, TSA brings numerous financial management issues to DHS. In fiscal year 2002, auditors reported five material weaknesses and reported that TSA's systems were not in substantial compliance with FFMIA. Specifically, auditors found that TSA management had not established written accounting policies and procedures to properly perform TSA's financial management and budgeting functions during fiscal year 2002. This is an example of what can happen when a newly created entity does not thoroughly develop and implement standard accounting policies and procedures, and DHS should carefully review TSA's weaknesses to avoid experiencing them on a departmentwide basis.

Auditors also reported that TSA did not maintain complete and accurate records of its passenger and baggage screening equipment, most notably its Explosive Detection System (EDS) equipment. For example, a significant amount of fixed assets was found not to have been recorded in the financial statements, and an adjustment of approximately $149 million was made after year-end to properly record construction in progress for the manufacture of EDS equipment. Until such weaknesses are resolved, millions of dollars spent on new equipment and other fixed assets could go unaccounted for or be improperly recorded, leaving TSA and DHS vulnerable to fraud, waste, and abuse.

Another weakness reported by DOT's OIG was TSA's inadequate controls over security screener contracts. Policies and procedures were not established to provide an effective span of control to monitor contractor costs and performance. This lack of oversight enabled contractors to charge TSA up to 97 percent more than they had charged air carriers before the screener workforce was federalized. This weakness provides further evidence of the importance of carefully documenting policies and procedures early in the implementation of a new organization.

Established in 1998, the former Office of Domestic Preparedness (ODP), under DOJ's Office of Justice Programs, provides grant funds and direct support to, among other things, help address the equipment, training, and technical assistance needs of state and local jurisdictions in responding to terrorism and terrorist-related activities. Since ODP's inception, auditors have reported deficiencies in its ability to administer grant funds, and in fiscal year 2002, we reported grant management as one of DOJ's major performance and accountability challenges. DOJ's OIG found that while millions of dollars had been awarded, the funds were not awarded expeditiously, and grantees were very slow to spend the requested monies. According to the OIG, more than half of the monies requested and granted over the past few years remained unspent, and some of the equipment purchased by state and local jurisdictions was unavailable for use because grantees did not properly distribute the equipment, could not locate it, or were inadequately trained to use it. Since the DOJ OIG reported on this issue in fiscal year 2002, DHS has released more than $4.4 billion in grants to state and local governments and private sector organizations. This increased level of grants will only exacerbate these problems unless DHS works with grantees to improve accountability over these funds.
Unlike many of the larger agencies that transferred to DHS, the Coast Guard did not have a stand-alone financial statement audit but was audited as part of DOT's consolidated audit. Although DOT's auditors have not reported significant financial management weaknesses at the Coast Guard in recent years, the Coast Guard still uses DOT's Departmental Accounting and Financial Information System, which, among other things, has been unable to produce auditable financial statements from the information within the system. In addition, we have listed the Coast Guard as part of DHS's major management challenges because of its dual missions of maritime safety and homeland security. Concerns have also been reported regarding the Coast Guard's Deepwater Procurement Project, which currently has an estimated cost of $17 billion over 20 years and is intended to replace or modernize, by 2022, all assets used in missions that generally occur offshore. However, because of the events of September 11 and the Coast Guard's expanded role in homeland security, additional project requirements have been identified, including accelerating the project for completion in 10 years. These changes may increase the project's annual funding needs, thus increasing its vulnerability to ineffective and inefficient use of funds.

The Secret Service, formerly under the Department of the Treasury, also has not had a stand-alone financial statement audit but was audited as part of Treasury's consolidated audit. Although from an audit perspective the Secret Service was relatively small in relation to the Internal Revenue Service and the Bureau of the Public Debt at Treasury, its missions of protecting the President and investigating financial crimes are sensitive. Auditors may now identify internal control weaknesses at the Secret Service that were not previously known, because the Secret Service is proportionately a larger component of DHS than it was of Treasury and may therefore be subject to increased audit scrutiny.

Aside from the known weaknesses at the 7 larger component agencies comprising DHS, some of the 15 smaller entities that transferred to DHS may also have weaknesses not previously identified. As with the Secret Service, these entities may be proportionately more significant at DHS than they were at their legacy departments. In addition, once the entities are combined, certain areas may cumulatively be subject to more audit scrutiny than when they were dispersed throughout other departments. Any such weaknesses will only add to the extensive existing challenges.

DHS plans to prepare financial statements for the 7 months ending September 30, 2003. We support DHS's decision to do so but recognize that doing so will be very challenging given the problems DHS inherited, compounded by the additional complexity of merging a number of diverse entities in a department that has had to hit the ground running from day one. Obtaining a consolidated DHS financial statement audit for that same period will be equally challenging, but also worthwhile. Since DHS is a new entity, its auditors have already begun performing audit procedures on beginning balances (i.e., transferred balances) as of March 1, 2003, on the activity for the 7 months ending September 30, 2003, and on ending balances. The March 1 transfer date presents a unique challenge because it does not fall at the end of a typical accounting period, such as the end of a fiscal year or reporting quarter.
In addition, legacy departments' goals of reaching accelerated reporting dates for fiscal year 2003 may be impaired if DHS cannot provide, on time, the intragovernmental information those departments need. OMB and Treasury require agencies to reconcile selected intragovernmental activity and balances with their "trading partners" (i.e., other agencies) and to report on the extent and results of intragovernmental activity and balance reconciliation efforts. This information is necessary not only for the agencies' financial statements and reports but also for the U.S. Consolidated Financial Statements. These unique challenges must be addressed to ensure that accounts and amounts transferred to DHS are complete and accurate and that legacy departments' reporting is not negatively affected. Any significant problems encountered could also negatively affect the preparation and audit of the U.S. government's fiscal year 2003 financial statements.

In the longer term, DHS can overcome its many challenges only if it establishes systems, processes, and controls that help to ensure effective financial management and insists on adherence to strong financial practices. In addition to addressing the many ongoing challenges existing in the programs of incoming agencies, DHS will need to focus on building future systems as part of its enterprise architecture approach to ensure an overarching framework for the agency's integrated financial management processes. Plans and standard accounting policies and procedures must be developed and implemented to bridge these financial environments into an integrated DHS system.

Mr. Chairman, I would now like to discuss steps DHS should take to establish sound financial management and business processes. Successful financial management at DHS will depend on the department producing financial information that is useful for executive decision making. In April 2000, we issued an executive guide on creating value through financial management. After studying the financial management practices and improvement efforts of nine leading private and public sector finance organizations, we identified several success factors, practices, and outcomes associated with world-class financial management. The organizations we studied were The Boeing Company, Chase Manhattan Bank, General Electric Company, Hewlett-Packard, Owens Corning, Pfizer Inc., and the states of Massachusetts, Texas, and Virginia.

First and foremost, establishing the following goals is key to developing a world-class finance organization with sound financial management and business processes: (1) make financial management an entitywide priority, (2) redefine the role of the finance organization, (3) provide meaningful information to decision makers, and (4) build a team that delivers results. I will discuss each of these goals in more detail below, including several best practices that are critical to meeting them. These practices lead to finance organizations that provide timely, relevant information that is useful in decision making and adds value to the organization. As a newly created entity, DHS has a unique opportunity to implement these practices as it develops the financial policies and activities needed to establish sound financial management and business processes.
Our study of world-class finance organizations found that making financial management an entitywide priority is encouraged through the following best practices: (1) providing clear, strong executive leadership, (2) building a foundation of control and accountability, and (3) using training to change the culture and engage line managers. Top leadership involvement is essential for a successful realignment of this magnitude. Top leadership is responsible for allocating the resources needed to improve financial management and for building and maintaining the organization's commitment to doing business in a new way. The CFO Act established the position of CFO in 24 federal agencies (app. I lists the original 24 CFO Act agencies; FEMA has since transferred to DHS). These CFOs are given oversight authority regarding financial management matters and are responsible for ensuring that sound financial management is in place. As you know, DHS is not currently subject to the provisions of the CFO Act and thus has no legal requirement to comply with them. Although Secretary Ridge pledged in his May 1, 2003, testimony to make financial management a priority, passage of H.R. 2886, which would amend the CFO Act to include DHS, is important to ensure the department's long-term commitment to establishing sound financial management and business processes.

Further, as DHS continues to integrate its 22 entities, it must build a strong overall foundation of control and accountability. Management should begin by considering the significant control issues of the agencies being integrated to form DHS, many of which I have already highlighted. These issues must be addressed within the specific agencies, as well as departmentwide, to ensure they do not persist within the newly formed department. Additionally, accountability should be strengthened by producing financial and performance reports for major programs on a regular and frequent basis to support decision making and strategic planning. Ultimately, the foundation for regular and frequent reporting will be the development of an integrated financial management system, one capable of capturing data at an appropriate level of detail and producing relevant and reliable information for users based on their needs. In the case of DHS, the challenge of combining, integrating, modernizing, and in some cases replacing the systems of many disparate agencies will require careful planning if the conversion is to be successful. As discussed earlier, many of the larger agencies that transferred to DHS have a history of poor systems and inadequate financial management.

In order to establish sound financial management and business processes, we found that world-class finance organizations redefined the role of the finance organization and implemented an integrated financial management structure that (1) assessed the finance organization's role in meeting the department's mission, (2) maximized the efficiency of day-to-day accounting activities, and (3) organized the finance organization to add value. The ever-increasing competition for resources requires careful allocation of funds. Without the support of an effective finance organization, program managers may not be able to determine the costs associated with government activities, defend those costs, or identify the benefits derived from them.
The finance organization must understand the department's mission and be able to provide information in support of that mission. Of key importance is the finance organization's ability to efficiently complete routine accounting activities, thus freeing resources to focus on other finance-related priorities that support the department's mission. As I previously discussed, many of the larger agencies that transferred into DHS spend significant time preparing financial statements using manual work-arounds and have a history of poor financial management systems and significant internal control weaknesses. Such a time-consuming method of routine financial statement preparation does not allow for efficient use of finance staff. As DHS develops its financial management and business processes, it should focus on developing the abilities to (1) efficiently and effectively complete routine processing activities and (2) compile the data needed to measure performance so that information is available to management on a day-to-day basis.

The overarching goal of the President's Management Agenda is the improvement of government performance. The finance organization must play a pivotal role in providing decision makers with the information they need to measure performance. To efficiently and effectively provide reliable information to decision makers, we identified three best practices in our study of world-class finance organizations: (1) develop systems that support the partnership between finance and operations, (2) reengineer processes in conjunction with new technology, and (3) translate financial data into meaningful information. To help agencies set goals and measure performance, the Congress enacted the Government Performance and Results Act (GPRA) in 1993. Under GPRA, agencies, including DHS, are required to prepare a 5-year performance plan and annual performance reports. These required reports provide a strategic planning and management framework intended to improve federal performance and hold agencies accountable for achieving results. GPRA was intended, in part, to improve congressional decision making by giving the Congress comprehensive and reliable information on the extent to which federal programs are fulfilling their statutory intent. Additionally, the President's Management Agenda includes improved financial management performance as one of the five governmentwide management goals. This initiative is aimed at ensuring that federal financial systems produce accurate and timely information to support operating, budget, and policy decisions. The finance organization is a key component of a department's ability to meet its requirements under GPRA and the objectives of the President's Management Agenda.

Over the years, the federal government has had difficulty attracting and retaining talented financial management officials, and improving financial performance is difficult without experienced leadership and staff who are committed to success. Our study of world-class finance organizations indicated the following best practices for building a team that can deliver results: (1) develop a finance team with the right mix of skills and competencies and (2) attract and retain talent. Given the current demand on resources and the competition for qualified employees, developing and retaining a talented finance team that is capable of meeting the changing demands of the federal financial workplace is an important goal.
The lack of highly qualified financial management professionals can hamper effective federal financial management operations. The CFO Act requires OMB's Deputy Director for Management to develop and maintain qualification standards for agency CFOs and their deputies; provide advice to agencies on the qualification, recruitment, performance, and retention of financial management personnel; and assess the adequacy of financial management staffs throughout the government. Additionally, the CFO Act places responsibility with the CFO to recruit, select, and train finance personnel. To help department leaders manage their people and integrate human capital considerations into daily decision making and the program results they seek to achieve, we developed a strategic human capital model. This model is applicable to department leadership as a whole, but it also applies to finance organization leaders as they seek to attract, develop, and retain talent. The two critical success factors identified in our model to assist organizations in creating results-oriented cultures are (1) linking unit and individual performance to organizational goals and (2) involving employees in the decision-making process. Agency leaders have other opportunities for displaying their commitment to human capital. Continuous learning efforts, employee-friendly workplace policies, competency-based performance appraisal systems, and retention and reward programs are all ways in which agencies can value and invest in their human capital. The sustained provision of resources for such programs can show employees and potential employees how committed agency leaders are to strategic human capital management. DHS should adopt these success factors in building a financial management team that delivers results.

It is well recognized that mergers of the magnitude of DHS carry significant risks, including lost productivity and inefficiencies; successful transformations of large organizations generally take from 5 to 7 years to achieve. Necessary management capacity, communication and information systems, and sound financial management and business processes must be established. Though creating and maintaining these structures will be demanding and time-consuming, it is necessary to effectively implement the national homeland security strategy. Over the past several months, we have met with DHS's CFO, its Acting Inspector General and Assistant Inspector General for Audits, and the independent auditors performing its fiscal year 2003 financial statement audit. We are committed to working in a coordinated effort with the Congress, DHS, and its auditors to provide advice to DHS on developing a sound financial management structure that will facilitate, not hamper, its mission of securing the homeland. We believe that passage of H.R. 2886 will further assist DHS in meeting this goal.

Mr. Chairman, as you know, H.R. 2886, as introduced on July 24, 2003, would amend the CFO Act to (1) add DHS as a CFO Act agency and remove FEMA as a CFO Act agency, (2) require DHS to obtain an audit opinion on its internal controls, and (3) require DHS to include program performance information in its performance and accountability reports. In addition, H.R. 2886 as introduced would have provided a waiver allowing DHS to forgo a financial statement audit for fiscal year 2003. We understand an agreement has been reached to remove this waiver from the proposed legislation.
DHS’s 2003 audit is already underway and the department has stated it is committed to obtaining this audit. The waiver option is, therefore, no longer needed, and we support dropping the provision from H.R. 2886. We supported passage of the CFO Act in 1990 and continue to strongly support its objectives of (1) giving the Congress and agency decision makers reliable financial, cost, and performance information both annually and, most important, as needed throughout the year to assist in managing programs and making difficult spending decisions, (2) dramatically improving financial management systems, controls, and operations to eliminate fraud, waste, abuse, and mismanagement and properly safeguard and manage the government’s assets, and (3) establishing effective financial organizational structures to provide strong leadership. Achieving these goals is critical for establishing effective financial management, and we fully support amending the CFO Act to include DHS. In developing the CFO Act, the Congress viewed the CFO as being a critical player in the management of an agency. At the time, financial management was not a priority in most federal agencies and was all too often an afterthought. All too often, the top financial management official wore many hats, which left little time for financial management; did not necessarily have any background in financial management; and focused primarily on the budget. By establishing statutorily the position of CFO, requiring that the person in the position have strong qualifications and a proven track record in financial management, and giving this person status as a presidential appointee, the Congress sought to change the then existing paradigm. Of the 24 agencies named in the 1990 CFO Act, 16 were designated as Level IV, Presidential appointee Senate confirmation positions and eight were career positions. Today, CFOs have become influential across government and the quality of the appointees has borne out the wisdom of the Congress’s insistence that this position be elevated (meaning it reported to the top and had standing with other top officials). We have seen an evolution of the CFO position and a quantum change in the expertise and abilities of CFOs and the attractiveness of this position to someone having the type of proven track record in financial management that is needed in the federal government. In the end, the key attribute is the quality of the person in the position to affect change and carry out the role of CFO and whether the head of the agency supports the CFO and empowers that person to do the job needed. Appointment of the CFO by the President, subject to Senate confirmation, is one way to help ensure that the goals of the CFO Act are met and that has proven itself over time. The CFO Act, as expanded by the Government Management Reform Act of 1994, also requires agencies to prepare and have audited financial statements. The Congress added further emphasis to the importance of sound financial management when it enacted FFMIA. Under the Accountability of Tax Dollars Act of 2002, DHS, as an executive branch agency with budget authority greater than $25 million, would be required to obtain annual financial statement audits; however, its auditors would not have to report on compliance with FFMIA. Although DHS has appropriately contracted with independent auditors to report on its systems compliance with FFMIA for fiscal year 2003, it is not legally required to do so. 
FFMIA requires that agencies implement and maintain financial management systems that substantially comply with (1) federal financial management systems requirements, (2) federal accounting standards, and (3) the U.S. Government Standard General Ledger. The ability to produce the data needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers has been a long-standing challenge at most federal agencies. As we discussed earlier, auditors reported that many of the larger agencies were not in substantial compliance with FFMIA before their transfer to DHS. Given these preexisting compliance issues, in addition to issues that may arise from system integration initiatives, it is critical that DHS be legally required to comply with these important financial management reforms.

Current OMB guidance for audits of government agencies and programs requires auditor reporting on internal control, but not at the level of providing an opinion on internal control effectiveness. However, we have long believed, and the Comptroller General has stated on the record in congressional testimony, that auditors have an important role in providing an opinion on the effectiveness of internal control over financial reporting and compliance with laws and regulations at major federal departments and agencies. For a number of years, we have provided separate opinions on internal control effectiveness for the federal entities that we audit because of the importance of internal control to protecting the public's interest. Specifically, we provide separate opinions on internal controls and compliance with laws and regulations in our audits of the U.S. government's consolidated financial statements, the financial statements of the Internal Revenue Service and the Federal Deposit Insurance Corporation, the Schedules of Federal Debt managed by the Bureau of the Public Debt, and numerous small entities' operations and funds. Our reports and related efforts have engendered major improvements in internal control. As part of the annual audit of our own financial statements, we practice what we recommend to others and contract with an independent public accounting firm for both an opinion on our financial statements and an opinion on the effectiveness of our internal control over financial reporting and compliance with laws and regulations. Our goal is to lead the way in establishing the appropriate level of auditor reporting on internal control for federal agencies, programs, and entities receiving significant amounts of federal funding. Additionally, three agencies, the Social Security Administration (SSA), the General Services Administration (GSA), and the Nuclear Regulatory Commission (NRC), voluntarily obtain separate opinions on internal control effectiveness from their auditors, which is commendable. Another consideration as the Congress decides whether to enact new requirements is that the Congress has already prescribed an opinion on internal controls for publicly traded corporations.
A final rule issued by the Securities and Exchange Commission in June 2003, and effective in August 2003, provides guidance for implementing Section 404 of the Sarbanes-Oxley Act of 2002. Section 404 requires publicly traded companies to establish and maintain an adequate internal control structure and procedures for financial reporting and to include in their annual reports a statement of management's responsibility for, and management's assessment of the effectiveness of, those controls and procedures, in accordance with standards adopted by the Securities and Exchange Commission. The final rule defines this requirement and requires applicable companies to obtain a report in which a registered public accounting firm expresses an opinion, or states that an opinion cannot be expressed, concerning management's assessment of the effectiveness of internal controls over financial reporting.

Auditor reporting on internal control is a critical component of monitoring the effectiveness of an organization's accountability. GAO strongly believes that this is especially important for very large, complex, or challenged entities. By giving assurance about internal control, auditors can better serve their clients and other financial statement users and better protect the public interest, taking a greater role in deterring fraudulent financial reporting, protecting assets, and providing early warning of internal control weaknesses. We believe auditor reporting on internal control is appropriate and necessary for publicly traded companies and major public entities alike. We also believe that such reporting is appropriate in other cases where management assessment and auditor examination and reporting on the effectiveness of internal control add value and mitigate risk in a cost-beneficial manner.

We know that some will point to increased costs as a reason to remove this provision from the legislation. We believe that auditors who follow the Financial Audit Manual, which was jointly developed by GAO and the President's Council on Integrity and Efficiency (PCIE), should ordinarily incur little to no incremental cost associated with such reporting. We fully support having DHS, as well as all CFO Act agencies, obtain an opinion on internal control. If DHS is truly committed to becoming a model federal agency, it should begin obtaining opinions on internal control as soon as practical, setting an example for other agencies to follow, in keeping with the actions already taken by SSA, GSA, NRC, and GAO.

We also support agencies including program performance information in their performance and accountability reports, so that relevant performance and financial information is presented in a consolidated and useful manner. Agencies currently have the discretion to include this information in a consolidated format, and we strongly encourage DHS to consolidate this information in its accountability report beginning with fiscal year 2003.

In closing, the American people have increasingly demanded accountability from government and the private sector, and the Congress has recognized, through legislation such as the CFO Act, that the federal government must be held to the highest standards. We already know that many of the larger agencies transferred to DHS have a history of poor financial management systems and significant internal control weaknesses.
These known weaknesses provide further evidence that DHS's systems and financial controls should be subject to the provisions of the CFO Act and thus FFMIA. We also strongly encourage DHS to become a model agency and, as soon as practical, obtain an opinion on its internal controls and report performance information in its accountability reports.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have at this time. For information about this statement, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-6906, or Casey Keplinger, Assistant Director, at (202) 512-9323. You may also reach them by e-mail at [email protected] or [email protected]. Individuals who made key contributions to this testimony include Cary Chappell and Heather Dunahoo.

Fiscal Year 2002 U.S. Government Financial Statements: Sustained Leadership and Oversight Needed for Effective Implementation of Financial Management Reform. GAO-03-572T. Washington, D.C.: April 8, 2003.

Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 2003.

High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003.

Major Management Challenges and Program Risks: Federal Emergency Management Agency. GAO-03-113. Washington, D.C.: January 2003.

Major Management Challenges and Program Risks: Department of the Treasury. GAO-03-109. Washington, D.C.: January 2003.

Major Management Challenges and Program Risks: Department of Justice. GAO-03-105. Washington, D.C.: January 2003.

Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003.

Financial Management: FFMIA Implementation Necessary to Achieve Accountability. GAO-03-31. Washington, D.C.: October 1, 2002.

Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002.

A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 2002.

Executive Guide: Creating Value Through World-class Financial Management. GAO/AIMD-00-134. Washington, D.C.: April 2000.
Based on its budget, the Department of Homeland Security (DHS) is the largest entity in the federal government that is not subject to the Chief Financial Officers (CFO) Act of 1990. The department, with an estimated $39 billion in assets, an almost $40 billion fiscal year 2004 budget request, and more than 170,000 employees, does not have a presidentially appointed CFO subject to Senate confirmation and is not required to comply with the Federal Financial Management Improvement Act (FFMIA) of 1996. In addition, we designated the implementation and transformation of DHS as high risk based on three factors: (1) the implementation and transformation of DHS is an enormous undertaking that will take time to achieve in an effective and efficient manner, (2) the components to be merged into DHS already face a wide array of existing challenges, and (3) failure to effectively carry out its mission would expose the nation to potentially very serious consequences. In light of these conditions, Congress asked GAO to testify on the financial management challenges facing DHS, steps for establishing sound financial management and business processes at DHS, and GAO's comments on H.R. 2886, the Department of Homeland Security Financial Accountability Act. The Homeland Security Act of 2002 brought together 22 agencies to create a new cabinet-level department focused on reducing U.S. vulnerability to terrorist attacks and on minimizing damage and assisting in recovery from attacks that do occur. Meeting this mission will require a results-oriented environment with a strong financial management infrastructure. Creating strong financial management at DHS is particularly challenging because most of the entities brought together to form the department have their own financial management systems, processes, and, in some cases, deficiencies. Four of the seven major agencies that transferred to DHS reported 18 material weaknesses in internal control for fiscal year 2002, and five of the seven had financial management systems that were not in substantial compliance with FFMIA. For DHS to develop a strong financial management infrastructure, it will need to address these and many other financial management issues. Through the study of several leading private and public sector finance organizations (Creating Value Through World-class Financial Management, GAO/AIMD-00-134), GAO has identified success factors, practices, and outcomes associated with world-class financial management. Four steps DHS can take to begin developing sound financial management and business processes are to (1) make financial management an entity-wide priority, (2) redefine the role of the finance organization, (3) provide meaningful information to decision makers, and (4) build a team that delivers results. H.R. 2886 can help facilitate the creation of a first-rate financial management architecture at DHS by providing the necessary tools and setting high expectations. The bill would (1) make DHS a CFO Act agency, (2) require DHS to obtain an opinion on its internal controls, and (3) require DHS to include program performance information in its performance and accountability reports. GAO fully supports the objectives of the CFO Act to provide reliable financial information and improve financial management systems and controls and believes DHS should be included under the act and therefore also subject to FFMIA.
Further, GAO strongly believes that auditor reporting on internal control can be a critical component of monitoring the effectiveness and accountability of an organization and supports DHS, as well as other CFO Act agencies, obtaining such opinions. In addition, GAO supports agencies including program performance information in their performance and accountability reports and strongly encourages DHS to report this information voluntarily. Finally, as introduced, H.R. 2886 provided a waiver allowing DHS to forgo a financial statement audit for fiscal year 2003. We understand an agreement has been reached to remove this waiver from the proposed legislation. DHS has committed to a fiscal year 2003 financial statement audit, which is already underway. GAO supports dropping this provision from H.R. 2886.
For the 2020 Census, the Bureau intends to limit its per-household cost to not more than that of the 2010 Census, adjusted for inflation. To achieve this goal, the Bureau is significantly changing how it conducts the census, in part by re-engineering key census-taking methods and infrastructure. The Bureau's innovations include (1) using the Internet as a self-response option; (2) verifying most housing unit addresses using "in-office" procedures rather than costly field canvassing; (3) in certain instances, replacing enumerator-collected data with administrative records (information already provided to federal and state governments as they administer other programs); and (4) re-engineering field data collection methods. The Bureau's various initiatives have the potential to make major contributions toward limiting cost increases. In October 2015, the Bureau estimated that with its new approach it can conduct the 2020 Census for a life-cycle cost of $12.5 billion, $5.2 billion less than if it were to repeat the design and methods of the 2010 Census (both figures in constant 2020 dollars). Table 1 below shows the cost savings the Bureau estimates it can achieve in the four innovation areas. Sufficient testing, while important to the success of any census, is even more critical for the Bureau's preparations for 2020. To help control costs and maintain accuracy, the 2020 Census design includes new procedures and technology that have not been used extensively in earlier decennials, if at all. While these innovations show promise for a more cost-effective head count, they also introduce new risks. As we have noted in our prior work, it will be important to thoroughly test the operations planned for 2020 to ensure they will (1) produce needed cost savings, (2) function in concert with other census operations, and (3) work at the scale needed for the national head count. The Bureau's failure to fully test some key operations prior to the 2010 Census was a key factor that led us to designate that decennial as one of our high-risk areas. The 2016 test was the latest major test in the Bureau's testing program of non-response follow-up (NRFU), the operation in which enumerators personally visit households that do not self-respond to the census. In 2014, the Bureau tested new methods for conducting NRFU in the Maryland and Washington, D.C., area. In 2015, the Bureau assessed NRFU operations in Maricopa County, Arizona. In 2018, the Bureau plans to conduct a final "End-to-End" Test, which is essentially a dress rehearsal for the actual decennial. The Bureau needs to finalize the census design by the end of fiscal year 2017 so that key activities can be included in the End-to-End Test. The Bureau plans to conduct additional research and testing through 2018 in order to further refine the design of the 2020 Census but recently decided to alter its approach. On October 18, 2016, the Bureau announced plans to stop two field test operations planned for fiscal year 2017 to mitigate risks from funding uncertainty. The Bureau said it would stop all planned field activity, including local outreach and hiring, at its test sites in Puerto Rico, North and South Dakota, and Washington State. The Bureau will not carry out planned field tests of its mailout strategy and NRFU in Puerto Rico. The Bureau also canceled plans to update its address list in the Indian lands and surrounding areas in the three states. However, the Bureau said it will continue with other planned testing in fiscal year 2017, such as testing focused on systems readiness and internet response.
Further, the Bureau said it would consider incorporating the stopped field activity elements within the 2018 End-to-End Test. The Bureau maintains that stopping the 2017 Field Test will help prioritize readiness for the 2018 End-to-End Test and mitigate risk. Nevertheless, as we noted in our November 2016 testimony, it represents a lost opportunity to test, refine, and integrate operations and systems, and it puts more pressure on the 2018 End-to-End Test to demonstrate that enumeration activities will function as needed for 2020. NRFU generally proceeded according to the Bureau's operational plans. For example, the Bureau demonstrated procedures for quality assurance and training. On the other hand, according to preliminary 2016 Census Test data, there were 19,721 NRFU cases coded as non-interviews in Harris County, Texas, and 14,026 in L.A. County, California, or about 30 and 20 percent of the test workload, respectively. According to the Bureau, non-interviews are cases where no data or insufficient data were collected, either because enumerators made six attempted visits without success (the maximum number the Bureau allowed) or because visits could not be completed due to, for example, language barriers or dangerous situations. In such cases for the 2020 Census, the Bureau may have to impute attributes of the household based on the demographic characteristics of surrounding housing units as well as administrative records. Bureau officials told us they are not certain why there were so many non-interviews for the 2016 Census Test and are researching potential causes. Officials said that they expect higher numbers of non-interviews during tests in part because, compared to the actual enumeration, the Bureau conducts less outreach and promotion. While the 2016 Census Test non-interview rate is not necessarily a precursor to the 2020 non-interview rate, it will be important for the Bureau to better understand the factors contributing to it because of the rate's relationship to the cost and quality of the census. Bureau officials hypothesized that another contributing factor could be related to the NRFU methods used in the 2016 Census Test compared to earlier decennials. For the 2010 and prior censuses, enumerators collected information during NRFU using pencil and paper. Enumerators may have visited a housing unit more than the maximum allowable six visits to obtain an interview but did not record all of their attempts, thus enabling them to achieve a higher completion rate. For the 2020 Census, and as tested in 2016, the Bureau plans to collect data using mobile devices leased from a contractor and an automated case management system to manage each household visit (see figure 1). The Bureau believes that this approach will provide a faster, more accurate, and more secure means of data collection. Unlike in previous censuses and one prior test, enumerators in the 2016 Census Test did not have an assigned set of cases that they alone would work until completion. Instead, the Bureau relied on an enhanced operational control system that was designed to provide daily assignments and street routing of NRFU cases to enumerators in the most optimal and efficient way. At the same time, the mobile device and automated case management system did not allow an enumerator to attempt to visit a housing unit more than once per day, reopen a closed case, or exceed the maximum allowable six attempts.
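The attempt rules just described lend themselves to a compact illustration. The following is a minimal sketch, entirely our own construction rather than the Bureau's actual software, of how a case record might enforce the one-attempt-per-day limit, the six-attempt maximum, and the no-reopening rule; all class, field, and function names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

MAX_ATTEMPTS = 6  # maximum allowable visits in the 2016 Census Test

@dataclass
class NrfuCase:
    """Hypothetical NRFU case enforcing the attempt rules described above."""
    case_id: str
    attempts: list = field(default_factory=list)  # dates of attempted visits
    closed: bool = False  # set once an interview succeeds or attempts max out

    def may_attempt(self, today: date) -> bool:
        # A visit is allowed only if the case is open, has not already been
        # attempted today, and has fewer than six recorded attempts.
        return (not self.closed
                and today not in self.attempts
                and len(self.attempts) < MAX_ATTEMPTS)

    def record_attempt(self, today: date, interview_completed: bool) -> None:
        if not self.may_attempt(today):
            raise ValueError("attempt not permitted by case rules")
        self.attempts.append(today)
        if interview_completed or len(self.attempts) == MAX_ATTEMPTS:
            # A sixth unsuccessful visit closes the case as a non-interview,
            # whose household attributes may then have to be imputed.
            self.closed = True

case = NrfuCase("H-123")
case.record_attempt(date(2016, 5, 10), interview_completed=False)
print(case.may_attempt(date(2016, 5, 10)))  # False: one attempt per day
print(case.may_attempt(date(2016, 5, 11)))  # True: attempts remain
```

Under rules of this kind, an enumerator who sees a resident arrive home minutes after an unsuccessful attempt cannot reopen the case the same day, an inflexibility we discuss further below.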
One factor we observed that may have contributed to the non-interview rate was that enumerators did not seem to uniformly understand or follow procedures for completing interviews with proxy respondents (a proxy is a non-household member, at least 15 years old, who is knowledgeable about the NRFU address). According to the 2016 Census Test enumerator training manual, when an eligible respondent at the address cannot be located, the automated case management system on the mobile device will prompt the enumerator when to find a proxy to interview, such as when no one is home or the housing unit appears vacant. In such circumstances, enumerators are to find a neighbor or landlord to interview. However, in the course of our site visits, we observed that enumerators did not always follow these procedures. For example, we observed one enumerator who, when prompted to find a proxy, looked to the left and then the right and, finding no one, closed the case. Similarly, another enumerator ignored the prompt to find a proxy, explaining that neighbors are usually not responsive or willing to provide information about their neighbors. Enumerators we interviewed did not seem to understand the importance of obtaining a successful proxy interview, and many appeared to have received little encouragement during training to put in effort to find a proxy. Proxy data for occupied households are important to the success of the census because the alternative is a non-interview. In 2010, about one-fourth of the NRFU interviews for occupied housing units were conducted using proxy data. We shared our observations with Bureau officials, who told us that they are aware that enumerator training for proxies needs to be revised to convey the importance of collecting proxy data when necessary. Converting non-interviews by collecting respondent or proxy data can improve interview completion rates and, ultimately, the quality of census data. The Bureau told us it will continue to refine procedures for 2020. According to the Bureau, its plans to automate the assignment of NRFU cases have the potential to deliver significant efficiency gains. At the same time, improving certain enumeration procedures and communicating better could produce additional efficiencies by enabling the Bureau to be more responsive to situations enumerators encounter in the course of their follow-up work. Enumerators were unable to access recently closed incomplete cases. Under current procedures, if an enumerator is unable to make contact with a household member, the case management system closes that case to be reattempted at a later date, perhaps by a different enumerator, assuming fewer than six attempts have been made. Decisions on when re-attempts will be made—and by whom—are automated and not designed to be responsive to the immediate circumstances on the ground. This is in contrast to earlier decennials, when enumerators, using paper-based data collection procedures, had discretion and control over when to re-attempt cases in the area where they were working. According to the Bureau, leaving cases open for re-attempts can undermine the efficiency gains of automation when enumerators depart significantly from their optimized route, circling back needlessly to previously attempted cases rather than progressing through their scheduled workload. During our test site observations, however, we observed how this approach could lead to inefficiencies in certain circumstances.
For example, we observed enumerators start their NRFU visits in the early afternoon as scheduled, when many people are out working or are otherwise away. If no one answered the door, those cases were closed for the day and reassigned later. However, if a household member returned while the enumerator was still around, the enumerator could not reopen the case and attempt an interview. We saw this happen at both test site locations, typically in apartment buildings or at apartment-style gated communities, where enumerators had clear visibility of a large number of housing units and could easily see people arriving home. Bureau officials acknowledged that closing cases in this fashion represented a missed opportunity and plan to test greater flexibilities as part of the 2018 End-to-End Test. Programming some flexibility into the mobile device—if accompanied by adequate training on how and when to use it—should permit some interviews to be completed without having to deploy staff to the same case on subsequent days. This in turn could reduce the cost of follow-up attempts and improve interview completion rates. Enumerators did not understand procedures for visits to property managers. Property managers are a key source of information on non-respondents when enumerators cannot find people at home. They can also facilitate access to locked buildings. Further, developing a rapport with property managers has helped the NRFU process, such as when repeated access to a secured building or residential complex is needed on subsequent days by different enumerators. In response to problems observed during the Bureau's 2014 and 2015 Census Tests and to complaints from property managers about multiple uncoordinated visits by enumerators, the Bureau's 2016 Census Test introduced specific procedures for conducting initial visits to property managers in large multi-unit apartment buildings. The procedures sought to identify up front which units needing follow-up, if any, were vacant, eliminating the need for enumerators to collect this information from property managers through subsequent visits on a case-by-case basis. According to Bureau officials, the automated case management system was designed to allow an enumerator to make up to three visits to property managers to remove vacant units. According to the Bureau, the 2016 Census Test demonstrated that vacant units could quickly be removed from the NRFU workload using these procedures in cases where a property manager was readily available; however, in other cases the procedures caused confusion. For example, whenever an initial visit was unsuccessful, all of the cases at that location—until then collated into a single summary row of the enumerator's on-screen case list—would suddenly expand and appear as individual cases to be worked, sometimes adding several screens and dozens of cases to the length of the list, which the enumerators we spoke with found confusing (this behavior is illustrated in the sketch below). Furthermore, without knowledge of which units were vacant, enumerators may have unnecessarily visited those units and increased the cost and the time required to complete NRFU. During debriefing sessions the Bureau held, enumerators and their supervisors identified training in these procedures as an area they felt needed greater attention in the future.
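To make the collation behavior concrete, the sketch below is our own simplified rendering, not the Bureau's case management code, of how a case list might collapse a building's units into one summary row and expand them only after a failed initial property manager visit; all names and values are hypothetical.

```python
def build_case_list(cases, pm_visit_failed_buildings):
    """cases: (building_id, unit_id) pairs needing follow-up.
    pm_visit_failed_buildings: buildings where the initial property
    manager visit was unsuccessful."""
    by_building = {}
    for building, unit in cases:
        by_building.setdefault(building, []).append(unit)
    rows = []
    for building, units in sorted(by_building.items()):
        if building in pm_visit_failed_buildings:
            # Expansion: dozens of unit-level rows may suddenly appear.
            rows.extend(f"{building} unit {u}" for u in units)
        else:
            # Collation: one summary row per building.
            rows.append(f"{building} ({len(units)} unit(s); contact manager)")
    return rows

cases = [("Bldg A", u) for u in range(101, 104)] + [("Bldg B", 201)]
print(build_case_list(cases, pm_visit_failed_buildings={"Bldg A"}))
# ['Bldg A unit 101', 'Bldg A unit 102', 'Bldg A unit 103',
#  'Bldg B (1 unit(s); contact manager)']
```

For a 50-unit building, a single unsuccessful manager contact would turn one row into 50, consistent with the several added screens of cases that enumerators reported finding confusing.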
Bureau officials said that they are pleased that the test demonstrated their progress in automating case management at multi-unit locations, but at the same time, they recognize the need to further refine the initial property manager contact procedures and integrate multi-unit procedures into the training. During our field visits, we encountered several instances where enumerators had been told by a respondent, or otherwise learned, that returning at a specific time on a later date would improve their chance of obtaining an interview from either a household respondent or a property manager. According to the Bureau, while there was a mechanism for capturing and using this information, it was not uniformly available to the enumerators, nor did the enumerators always use the mechanism when appropriate. As a result, the Bureau's 2016 Census Test and automated case management system did not have an efficient way to leverage that information. Attempting to contact non-responding households at times respondents are expected to be available can increase the completion rate and reduce the need to return at a later date or rely on proxy interviews as a source of information. The Bureau's automated case management system used estimated hour-by-hour probabilities for the best time to contact people when making enumerator assignments. The estimation relied on various administrative records, information from other Bureau surveys that had successful contacts in the past, and area characteristics. The 2016 Census Test did not have a way to change or update these estimates when cases were subsequently reassigned. The assigned time windows were intended to result in more productive visits and reduce costs. When enumerators identified potentially better times to attempt a contact, they were instructed to key this information into their mobile devices. For example, one enumerator keyed in a mother's request to come back Thursday afternoon when her kids were in camp, while others keyed in information such as office hours and telephone contact numbers obtained from signs posted for property managers. However, according to the Bureau, this updated information went unused, and we met enumerators who had been assigned to enumerate addresses at the same unproductive time after they had written notes documenting better times to visit. Another enumerator reported visiting a property manager who complained that the enumerator was not honoring the manager's earlier request, made during a prior enumeration attempt, that an enumerator return during a specified time window. Such repeat visits can waste enumerator time (and miles driven) and contribute to respondent burden or reduced data quality when respondents become annoyed and less cooperative. We discussed our preliminary observation with Bureau managers at the test sites, who expressed frustration that the automated case management system did not allow them to use the locally obtained contact-time information they found in enumerator notes to affect future case assignments. Headquarters staff told us that while they have not fully evaluated this yet, they are concerned that providing local managers with too much flexibility to override the results of optimized case and time assignments would undermine the efficiency gains achievable through automation. They also explained that enumerators were provided the capability to record the best day or time of day for follow-up.
This information could have been used by the automated case management system to better target the timing of future assignments. However, they acknowledged that this procedure may not have been explained during enumerator training. Reviewing the enumerator training manual, we confirmed that there were no procedures to allow enumerators to systematically record what day or what time of day to follow up at a housing unit. Bureau officials have said that this is another area they are looking into and plan to address. The key innovations the Bureau plans for 2020 show promise for controlling costs and maintaining accuracy, although there are significant risks involved. The Bureau is aware of these risks, and robust testing can help manage them by assessing the feasibility of key activities, their capacity to deliver desired outcomes, and their ability to work in concert with one another under operational conditions. Going forward, to help ensure a cost-effective enumeration, it will be important for the Bureau to improve its NRFU procedures by addressing the challenges identified during the 2016 Census Test, updating related training materials as needed, and completing these efforts in time for them to be included in the Bureau's End-to-End Test scheduled for 2018. The challenges we observed include (1) high non-interview rates, (2) difficulty accessing recently closed, incomplete cases, (3) the need for improved coordination with managers of multi-unit properties, and (4) the need to better leverage operational information collected by enumerators. Resolving these issues should help the Bureau improve its ability to collect quality data and reduce the cost of unnecessary follow-up visits during NRFU. We recommend that the Secretary of Commerce and Under Secretary for Economic Affairs direct the Census Bureau to take the following actions: 1. Determine the cause(s) for non-interviews experienced during the non-response follow-up operation and revise and test what, if any, changes need to be made to operational procedures, training, or both, including procedures for making contact with proxy respondents. 2. Revise and test operational procedures for accessing incomplete closed cases, and revise and test training material to reflect when this flexibility to access incomplete closed cases should be used by the enumerator. 3. Revise and test operational procedures and relevant training materials for initial property manager visits to ensure procedures and training material are communicated to and understood by enumerators and their supervisors. 4. Revise and test procedures on how to better leverage enumerator-collected information on the best time or day to conduct interviews, and ensure enumerators are properly trained on these procedures. We provided a draft of this report to the Secretary of Commerce for comment. In its written comments, reproduced in appendix I, the Department of Commerce agreed with our findings and recommendations. The Census Bureau also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the Secretary of Commerce, the Counselor to the Secretary with Delegated Duties of the Under Secretary of Commerce for Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. The report also will be available at no charge on our website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff who made major contributions to this report are listed in appendix II. Robert Goldenkoff, (202) 512-2757 or [email protected]. In addition to the contact named above, Lisa Pearson, Assistant Director; and Mark Abraham, Shea Bader, Richard Hung, Donna Miller, Ty Mitchell, Cynthia Saunders, A.J. Stephens, and Timothy Wexler made significant contributions to this report.
With a life-cycle cost of about $12.3 billion, the 2010 Census was the most expensive enumeration in U.S. history. To help control costs and maintain accuracy, the 2020 Census design includes new procedures and technology that have not been used extensively in earlier decennials, if at all. While these innovations show promise for a more cost-effective head count, they also introduce risks. As a result, it will be important to thoroughly test the operations planned for 2020. The objective of this report is to assess key NRFU operations performed during the 2016 Census Test to identify any lessons learned that could have a potential impact on pending design decisions for the 2020 Census. To assess NRFU operations, GAO visited both test locations, observed enumerators conducting NRFU interviews, and reviewed relevant documents, including the test plan and enumerator training manuals. The Census Bureau (Bureau) recently completed its 2016 Census Test in Los Angeles County, California, and Harris County, Texas. One primary focus of the test was to assess the methodology for non-response follow-up (NRFU), in which enumerators personally visit households that do not self-respond to the census. GAO found that during the 2016 Census Test, NRFU generally proceeded according to the Bureau's operational plan. However, data at both test sites indicate that the Bureau experienced a large number of non-interviews. Non-interviews are cases where either no data or insufficient data are collected. Bureau officials are not certain why there were so many non-interviews for the 2016 Census Test and are researching potential causes. Going forward, it will be important for the Bureau to better understand the factors that contributed to the non-interview rate because of its relationship to the cost and quality of the census. GAO also found that refining certain enumeration procedures and training enumerators better could produce additional efficiencies by enabling the Bureau to be more responsive to situations enumerators encounter on the ground. For example, enumerators were unable, by design, to access recently closed, incomplete cases on the mobile device. Bureau officials acknowledged that closing cases in this fashion represented a missed opportunity and plan to test greater flexibilities as part of the 2018 End-to-End Test. Programming some flexibility into the mobile device—if accompanied by adequate training on how and when to use it—should permit enumerators to complete some interviews and reduce the cost of follow-up attempts. Further, enumerators did not always understand procedures for visiting property managers in multi-unit buildings. Specifically, the 2016 Census Test demonstrated that vacant units could quickly be removed from the NRFU workload where a property manager was readily available to provide that information; however, in other cases the procedures confused enumerators, and they did not understand how to proceed. Without knowledge of which units were vacant, enumerators may have unnecessarily visited some vacant units and thereby increased the cost of NRFU. During GAO's field visits, GAO encountered several instances where enumerators learned that returning at a specific time on a later date would improve their chance of obtaining an interview from either a household respondent or a property manager. However, the Bureau's 2016 Census Test and automated case management system did not have an efficient way to leverage that information.
Attempting to contact non-responding households at times respondents are expected to be available increases the completion rate and reduces the need to return. GAO recommends that the Secretary of Commerce direct the Bureau to (1) determine causes for non-interviews and revise and test what, if any, changes need to be made to operational procedures and training; (2) revise and test procedures and training on accessing closed cases; (3) revise and test procedures and training for initial property manager visits; and (4) revise and test procedures and training for how to use enumerator-collected data on the best time or day to conduct an interview. The Department of Commerce agreed with GAO's recommendations, and the Bureau provided technical comments that were incorporated, as appropriate.
NARA's mission is to safeguard and preserve the records of the U.S. government, ensuring that the people can discover, use, and learn from this documentary heritage. In this way, NARA is to ensure continuing access to the essential documentation of the rights of American citizens and the actions of their government. In carrying out this mission, NARA (among other things) is to provide guidance and assistance to federal officials on the management of records; determine the retention and disposition of records; store agency records in federal records centers from which agencies can retrieve them; receive, preserve, and make available permanently valuable federal and presidential records in archives; and centrally file and publish federal laws and administrative regulations, the President's official orders, and the structure, functions, and activities of federal agencies through the daily Federal Register. Table 1 summarizes NARA's organizations, their missions, and the levels of staff in each, expressed as full-time equivalents (FTE). NARA's Agency Services group includes the Federal Records Centers Program, with approximately 1,100 FTEs. The placement of this program within the larger NARA organization is depicted in figure 1. In carrying out its responsibilities to store and archive federal records under the Federal Records Act and its implementing regulations, the Federal Records Centers Program provides storage facilities for federal agencies. Specifically, chapters 21, 29, and 31 of title 44 of the United States Code and Parts 1232 and 1234 of title 36 of the Code of Federal Regulations authorize NARA to establish, maintain, and operate records centers for federal agencies. Further, 36 C.F.R. Part 1234, Subparts B, C, and D, describes facility standards related to quality, effectiveness, durability, and safety; the handling of deviations from NARA's facility standards; and facility approval and inspection requirements. These standards are applicable to all records storage facilities that federal agencies use to store, service, and dispose of records. To carry out these responsibilities, NARA developed an internal policy directive that outlines the procedures its officials should use to ensure the compliance of records storage facilities. 36 C.F.R. Part 1234 also includes provisions allowing NARA to grant waivers from the standards set forth in the regulations for records storage facilities. Waivers are allowed when the storage systems, methods, or devices in question are demonstrated to meet standards that are equivalent or superior to the 36 C.F.R. Part 1234 standards for quality, strength, fire resistance, effectiveness, durability, and safety, among other things. Underground facilities may obtain waivers from regulatory requirements that pertain to the roofs of aboveground facilities. Agencies can request a waiver by providing: a statement identifying the 36 C.F.R. Part 1234 provision for which the waiver is requested, a description of the proposed alternative, and an explanation of how it is equivalent or superior to the NARA requirement; and supporting documentation demonstrating that the alternative does not provide less protection for federal records than what is required by the 36 C.F.R. Part 1234 standard, which may include certifications from a licensed fire protection engineer or a structural or civil engineer, as appropriate; reports of independent testing; reports of computer modeling; and/or other relevant information. According to 36 C.F.R.
Part 1234, NARA is to review the waiver request and supporting documentation and, in some circumstances, consult with the appropriate industry body or qualified experts, such as a fire-suppression specialist, before making a determination. If NARA is in agreement with the proposed waiver and the supporting documentation, it is to grant the waiver and notify the requesting agency. However, if NARA evaluates the waiver request and the supporting documentation unfavorably, it is not to approve the waiver. The Federal Records Centers Program is financed through a revolving fund, which in fiscal year 2012 earned revenue totaling approximately $185 million. Revenues for the fund are generated from the fees that NARA charges federal agencies for storing, servicing, and ultimately disposing of temporary federal records on their behalf, based on a standard fee schedule. NARA develops the fees annually for the upcoming fiscal year. In November 2011, a presidential memorandum on managing government records was issued to the heads of executive departments and agencies. The purpose of the memorandum was to begin an executive branch-wide effort to reform records management policies and practices and to develop a 21st-century framework for the management of government records. Specifically, the memorandum stated, among other things, that all agencies were required to designate a Senior Agency Official to oversee a review of their records management program. The Senior Agency Official would be responsible for coordinating with the Agency Records Officer and appropriate agency officials to ensure the agency's compliance with records management statutes and regulations. In January 2012 and March 2012, NARA's Inspector General reported on one of NARA's federal records centers, the Washington National Records Center, and found that it had numerous weaknesses. For example, the Inspector General reported that formalized procedures were not in place to properly track and resolve problems with records received, stored, or removed from the center; documented procedures did not exist for many of the center's operations; and periodic inventories of the records held at the center were not conducted. In order to address the weaknesses cited above, the Inspector General made recommendations, which included developing a problem resolution process and a mechanism for tracking all problems at the center until they are resolved, ensuring a formal tracking mechanism is implemented for new records received, and ensuring a systematic and repeatable process is in place to perform periodic inventories of the records held at the Washington National Records Center. NARA concurred with these recommendations and began taking actions to address them. Specifically, the Archivist ordered all federal records centers operated by NARA to assess their operations during a 1-day stand down. In addition, NARA officials stated that they established a Washington National Records Center oversight group to ensure that the center's leadership participated in plans, actions, and results related to resolving record storage issues. However, as of May 2013, NARA was still in the process of addressing the recommendations. Federal agencies are to store records in three types of facilities: federal records centers that are managed by NARA, agency records centers, and commercial records storage facilities. Each of these types of facilities is authorized by 36 C.F.R.
Part 1234, which also requires agencies to notify NARA when they use agency records centers or commercial facilities to store federal records. While NARA is aware of the extent to which agencies use the federal records centers that it manages, its awareness of the extent to which agencies use their own and commercial records storage facilities is incomplete. As of May 2013, NARA managed 18 federal records centers located across the United States. These centers consist of a total of 24 facilities where records are actually stored. Each facility includes storage areas, which NARA refers to as bays. (According to NARA, the typical bay is approximately the size of a football field.) Collectively, the facilities provide a total of 162 bays that are used by approximately 200 entities. Table 2 provides a listing of NARA's federal records centers and their related facilities, and the number of bays at each facility. In addition to the federal records centers that NARA operates, agencies are also authorized to establish and operate their own centers for storing records. As of May 2013, NARA had identified 18 records centers that were being operated by six federal agencies or offices: the Department of Energy, the Department of Veterans Affairs, the Federal Bureau of Investigation, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, and the Transportation Security Administration's Office of Law Enforcement – Federal Marshal Records Center. These agencies varied in the number of storage facilities that they operated, ranging from 7 at the Federal Bureau of Investigation to 1 facility at each of three other agencies (the Department of Veterans Affairs, the National Reconnaissance Office, and the Transportation Security Administration's Office of Law Enforcement). Table 3 identifies the number of records storage facilities operated by each of the agencies. Federal agencies are also authorized to use private sector commercial facilities for records storage, retrieval, and disposition. As of May 2013, agencies had reported to NARA that 22 such facilities, operated by 12 vendors, were under contract with and provided storage services for 11 federal agencies or entities. These federal agencies or offices are the Bureau of Public Debt, Centers for Medicare and Medicaid Services, Commodity Futures Trading Commission, Department of Veterans Affairs, Environmental Protection Agency, Federal Aviation Administration, Federal Energy Regulatory Commission, Federal Public Defender, Naval Sea Systems Command, United States Customs and Border Protection, and the United States International Trade Commission. Table 4 identifies each vendor and the facilities that provide records storage services to federal agencies. To determine whether all agencies were storing their records in one of the three types of allowable facilities, NARA collected data and compiled a database of agencies and the records storage facilities that they use. Specifically, in 2008, NARA officials sent letters to agencies' records managers asking them to provide a list of all records storage facilities used. Subsequently, NARA sought to obtain information about where agencies were storing their records by sending follow-up letters and by including a question regarding the storage of federal records in a voluntary annual survey of agencies' records management practices. However, the database was unreliable because it did not include complete, current, and valid data.
Specifically, NARA's database of agencies' records storage facilities included a reporting status for about 260 agencies but did not include the dates on which 47 of these agencies reported. Additionally, the data were derived primarily from information agencies submitted to NARA in 2008 and 2009, rendering the data outdated. Also, the self-reported nature of agencies' data raised questions about the validity of the data they provided. NARA officials responsible for determining where agencies store records acknowledged that the data about agencies and the records storage facilities they use are incomplete, outdated, and of questionable validity. The officials attributed this situation to agencies' not reporting data to NARA because they were unfamiliar with the 36 C.F.R. Part 1234 requirement to notify NARA when they use agency records centers or commercial facilities to store federal records, as well as to NARA having insufficient staff to ensure that all agencies report the required data, keep the data current, and verify the data agencies provide. NARA officials responsible for communicating records storage requirements to agencies stated that the Senior Agency Officials for records could provide NARA with points of contact who can help identify all the facilities where agencies store their records. Nevertheless, until NARA ensures that it has complete, current, and valid data on agencies' records storage facilities, it cannot be certain that agencies are using one of the three types of authorized facilities. In carrying out its responsibilities to store and archive federal records, Title 44 of the United States Code authorizes NARA to establish, maintain, and operate records centers for federal agencies; approve agency records centers; and promulgate standards, procedures, and guidelines to federal agencies with respect to the storage of their records in commercial records storage facilities. Regulations implementing the statute, at 36 C.F.R. Part 1234, specify the minimum structural, environmental, property, and life-safety standards that a records storage facility must meet when the facility is used for the storage of federal records. For example, facilities must be designed in accordance with the applicable national, regional, state, or local building codes to provide protection from building collapse or failure of essential equipment. Further, a floor load limit must be established for the records storage area by a licensed structural engineer, and the facility must be 5 feet above and 100 feet from any 100-year flood plain areas, or be protected by an appropriate flood wall that conforms to local or regional building codes. In addition, NARA's Review of Records Storage Facilities policy directive outlines the procedures NARA is to use to ensure records centers comply with 36 C.F.R. Part 1234 specifications. Specifically, the directive requires NARA to conduct inspections of its federal records centers and agencies' records centers to validate those facilities as compliant. In addition, 36 C.F.R. Part 1234 requires agencies to ensure that their own officials or NARA officials have the right to inspect commercial records storage facilities for compliance with the facility requirements. If a commercial facility fails an inspection, federal agencies that store records at the facility are required to bring the facility into compliance with the standards within 6 months or to transfer their documents to a compliant facility within 18 months.
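The two remediation clocks in this provision are simple to compute. The sketch below is our own illustration of the 6-month and 18-month deadlines using a hypothetical inspection-failure date; it is not an official NARA tool.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day when the target
    month is shorter (e.g., January 31 plus one month becomes February 28)."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

failed_inspection = date(2013, 5, 1)  # hypothetical failure date
print("bring into compliance by:", add_months(failed_inspection, 6))   # 2013-11-01
print("or transfer records by:  ", add_months(failed_inspection, 18))  # 2014-11-01
```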
Standard practices in program management call for documenting the scope of a project as well as milestones and time frames for the timely completion and implementation of repairs or transfers, to ensure that results are achieved. NARA conducted inspections of 23 of its 24 federal records center facilities from February 2005 through January 2013 and determined that 20 of the facilities were compliant with 36 C.F.R. Part 1234. It also determined that 2 facilities were partially compliant because they included at least 1 storage bay that did not satisfy the regulation. Specifically, NARA found that 2 of the 16 bays at the Lenexa, Kansas, facility and 6 of the 17 bays at the Lee's Summit, Missouri, facility were noncompliant because they included shelves that were determined to be too weak to meet the load requirements for records storage shelving and racking systems. Further, it found that all 7 bays at the San Francisco, California, records center were noncompliant because, contrary to the regulation, there were pipes (other than sprinkler pipes) that ran through the records storage areas and lacked supplemental protective measures such as drip pans. The remaining facility, consisting of 1 bay at the Anchorage, Alaska, center, was not inspected; however, NARA had considered the facility to be noncompliant and had planned to relocate the records being stored there. Table 5 summarizes the compliance status of each federal records center facility. As of July 2013, NARA indicated that it had plans to address the deficiencies at the noncompliant federal records centers, although it had not established schedules for doing so at the San Francisco and Anchorage facilities. For example, to correct the shelving at the Lenexa and Lee's Summit facilities, NARA had plans to contract for a detailed inspection of the existing shelving, prepare a report identifying necessary repairs, and then conduct the repairs and/or replacement of the noncompliant shelves. It expected to award a contract for this work in August 2013 and to complete the work within the following 6 months. In addition, NARA officials responsible for facility compliance had developed a plan for corrective actions at the San Francisco facility. This plan calls for the installation of water-sensing cables and protective drip pans and guttering to provide supplemental protection of pipes that run through records storage areas. However, the plan does not include a schedule for completing these tasks, consistent with standard practices for program management. NARA officials responsible for facility compliance attributed the lack of a schedule to uncertainty about the availability of funding and personnel resources to execute the plan. Further, NARA facility managers developed plans to replace the existing Anchorage, Alaska, facility with a newly constructed facility. However, NARA did not have a schedule for completing the construction because it had not secured funding to construct the new facility. While NARA has stated that it plans to bring all of its federal records center facilities into compliance with applicable regulations, the agency has not established a schedule for doing so at all facilities. Thus, although NARA has determined that the vast majority of the space (i.e., bays) in which its facilities store records is fully compliant with applicable standards, NARA has not established a basis for tracking and reporting progress toward resolving deficiencies at all of its facilities that do not yet fully meet the standards.
Agencies must obtain approval from NARA to store federal records at their own or a commercial records storage facility and, to do so, must provide documentation to show that the facility satisfies the requirements of 36 C.F.R. Part 1234. After a facility is approved, agencies are able to store federal records at the facility, and an inspection may be conducted to ensure that the facility meets the requirements of the standard. According to NARA officials responsible for determining facility compliance, inspections have been an important means of determining whether facilities are in fact compliant with the requirements. NARA has approved 10 of the 18 agency facilities that agencies have reported using. According to NARA officials, the remaining 8 centers were not approved because the agencies that operate them did not provide NARA with sufficient documentation to support approval. NARA has approved all 22 identified commercial facilities. However, of the 10 approved agency records centers, only 1 had been inspected; and of the 22 approved commercial facilities, 13 had been inspected (1 inspection was deemed unfavorable and the facility was removed from the approved list). For the 9 agency records centers and 10 commercial facilities that had not been inspected, NARA provided a schedule for doing so. According to this schedule, NARA plans to inspect 4 facilities per fiscal year from fiscal years 2014 through 2017, with the remaining 3 facilities scheduled for inspection in fiscal year 2018. For the commercial facilities, NARA had scheduled all 10 of the remaining facilities, with the last of these inspections planned for fiscal year 2017. Until all facilities are inspected, NARA cannot be reasonably assured that agencies are storing federal records in facilities that comply with standards, thus increasing the risk that these records will be damaged. In keeping with NARA's mission to safeguard and preserve the records of the U.S. government, the agency has a process in place to handle incidents in which records could potentially become damaged at its federal records centers. In particular, NARA requires its federal records centers to follow the Emergency First Response for NARA Records checklist to facilitate the protection of federal records from further impact and/or permanent damage when an incident occurs. As part of the agency's 1561 directive, the checklist requires (1) notification and immediate actions, such as notifying management; (2) an initial response, including steps to take if water damage occurs; and (3) damaged records response operations, including the requirement to document NARA's immediate response to incidents in an after-action report and a general requirement to provide a report after completing follow-up activities. Additionally, internal control standards specify, among other things, the need for significant events to be clearly documented. In addition to the checklist requirements, NARA's Chief Operating Officer told us about specific steps NARA is to take when boxes of records get wet. For example, based on the volume of records involved and the available resources, boxes are to be air dried and stored in an onsite freezer or in freezer trucks to minimize the growth of mold and prevent or reduce potential damage to records. Boxes of records are then to be individually removed, treated, and dried, or sent to a contractor that can freeze dry various types of records.
NARA is also to use in-house restoration services, such as industrial fans, for incidents that are considered minor. For major incidents (where affected records are not expected to be available to the agency that owns them for more than 48 hours), NARA's process indicates that it will work with a contractor for drying services. NARA generally followed its process to prevent damage to records when incidents occurred. Documentation that we reviewed for 55 incidents that NARA reported as occurring from January 2009 through March 2013 indicated that the agency had taken steps consistent with its Emergency First Response for NARA Records checklist. For example, NARA provided documentation of steps taken to handle incidents at the Washington National Records Center and at the National Personnel Records Center in Valmeyer, Illinois, from March 2011 through August 2012. Specifically, at the Washington National Records Center: A roof leak incident in March 2011 impacted 47 cubic feet of records stored at the center. According to NARA's documentation, 2 cubic feet of records were placed on drying racks and dried, 3 cubic feet of records were reboxed, and the remaining records were air dried in their original boxes. During another roof leak at the center in May 2011, a large number of boxes of records became wet. NARA staff noted the locations of the leaks, notified management, and took steps to address the incident. The staff initiated triage efforts to relocate the records to another area to determine how the incident had affected the records. While some records were air dried, those that were substantially wet were placed in a freezer truck. After the records were held in the freezer truck for several days, NARA reassessed them, and removed and reboxed records that had dried. The remaining 252 cubic feet of wet records were freeze dried at an offsite facility. NARA documented the actions it took to address the wet records, and the center director notified the affected agencies. A roof leak that occurred at the center in June 2011 affected 7 cubic feet of records. NARA documented the actions it took, noting that 5 cubic feet of records were reboxed and the remaining records were air dried. Another roof leak later that month resulted in a large number of boxes of records becoming wet. NARA staff noted the locations in which the leaks occurred, notified management, and took actions to address the records involved in the incident. The staff initiated triage efforts to relocate the records to another area and determine the level of severity for the affected records. Records that could be dried with minimal effort were removed from boxes and placed on pallets to begin the air-drying process onsite. Records that were found to be substantially wet were placed in a freezer truck. After the records were held in the freezer truck for several days, NARA reassessed them and determined that some of the records had dried. While the dry records were removed from the freezer truck and reboxed, 414 cubic feet of records were freeze dried at an offsite facility. NARA documented the actions it took to address the wet records, and the center director notified the affected agencies. In addition, at the National Personnel Records Center (Valmeyer): A sprinkler leak in August 2012 affected 27 cubic feet of records. Five of the 27 cubic feet of records were determined not to be wet, and 18 cubic feet of records were removed from the location and dried. The remaining 4 cubic feet of records were reboxed.
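Read as a whole, the checklist and the steps described by the Chief Operating Officer amount to a triage decision. The sketch below is our own simplified restatement of that routing logic, not NARA procedure; the function name, parameters, and return strings merely paraphrase the process described in this report.

```python
def triage_wet_records(substantially_wet: bool,
                       expected_unavailable_hours: int) -> str:
    """Route a wet-records incident per the process described above."""
    if expected_unavailable_hours > 48:
        # Major incident: records unavailable to the owning agency
        # for more than 48 hours.
        return "engage contractor drying services"
    if substantially_wet:
        # Arrest mold growth before records can be individually treated.
        return "hold in onsite freezer or freezer truck, then freeze dry"
    return "air dry onsite (drying racks, industrial fans) and rebox as needed"

# The May 2011 roof leak: substantially wet records were frozen and
# ultimately freeze dried offsite, consistent with this routing.
print(triage_wet_records(substantially_wet=True, expected_unavailable_hours=24))
```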
While NARA has taken steps to minimize damage to records, the agency has not tracked the results of its efforts in all cases. For example, of the 55 incidents, NARA provided documentation verifying that the actions it took in responding to 46 incidents resulted in no permanent damage to records. For the remaining 9 incidents, officials stated that NARA's actions prevented permanent damage to records; however, the agency could not provide documentation that would allow us to verify this assertion. For example, NARA could not provide documentation describing the results of its efforts to prevent permanent damage to 6 cubic feet of records that became wet due to faulty floor and roof drains at the Chicago Federal Records Center in June 2011. A contributing factor is that while the NARA 1561 checklist provides generally defined requirements for final reporting, it does not require the federal records centers to document the results of the actions they have taken to prevent permanent damage to records that were at risk. As a result, NARA is not positioned to fully report on the effectiveness and outcome of its actions to minimize damage to records and does not have an institutional record that a third party can use to validate the results of its efforts. The Treasury and General Government Appropriations Act, 2000, established a Records Centers Revolving Fund to pay for expenses and equipment necessary to provide storage and related services for federal records. Accordingly, the Federal Records Centers Program and NARA's Office of the Chief Financial Officer are responsible for annually developing the fees charged to agencies for records storage and related services. These fees are to be developed for the upcoming fiscal year using the current fiscal year fee schedule, expense projections, and workload projections for NARA's records centers. In determining the fees, NARA is to consider the costs associated with full operation of the records storage facilities, taking into consideration expenses such as reserves for accrued annual leave, worker's compensation, depreciation of capitalized equipment and shelving, and amortization of IT software and systems. Annually, all federal records centers are required to submit expense and workload projections to the Federal Records Centers Program headquarters operation. The expense and workload projections are used to develop budget and revenue projections, which are then used as the basis to develop rates for the upcoming fiscal year. Factors such as inflation, customer impact, the frequency of rate changes, and competitiveness with the private sector are then considered when developing new rates. The fees developed for the upcoming fiscal year are approved by the Director of the Federal Records Centers Program, the Executive for Agency Services, the Chief Financial Officer, and the Chief Operating Officer before receiving final approval from the Archivist. According to NARA officials responsible for managing the Federal Records Centers Program, the newly developed fees are then used at all federal records centers for the upcoming fiscal year. Storage fees charged by NARA in fiscal year 2013 were comparable to fees charged by commercial vendors on the GSA schedule in that same time frame. Specifically, of the 12 commercial vendors that provided storage services for 11 federal agencies, 5 had price lists posted on GSA's Federal Acquisition Service webpage.
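Before turning to the detailed figures in table 6, the fee structures just described can be compared with a simple calculation. The sketch below is our own illustration: NARA's $0.23-per-cubic-foot rate is taken from this report, while the vendor rate, the minimum-charge floor, and the assumption of monthly billing are hypothetical.

```python
def storage_cost(cubic_feet: float, rate_per_cf: float,
                 minimum_charge: float = 0.0,
                 below_minimum_fee: float = 0.0) -> float:
    """Billing-period storage cost: volume times rate, plus a flat fee
    when the charge falls below a contractual minimum (as some vendors
    in this report applied)."""
    cost = cubic_feet * rate_per_cf
    if cost < minimum_charge:
        cost += below_minimum_fee
    return round(cost, 2)

# NARA: flat $0.23 per cubic foot regardless of quantity, no minimum fee.
print(storage_cost(50_000, 0.23))  # 11500.0
# Hypothetical vendor: lower unit rate, but a $100 fee on small accounts.
print(storage_cost(300, 0.20, minimum_charge=100.0, below_minimum_fee=100.0))  # 160.0
```

On this kind of comparison, a flat rate with no account minimum can be cheaper for small-volume customers even when its unit rate is higher, which is consistent with the pattern the table shows.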
Table 6 provides a comparison of storage fees for NARA and these 5 commercial vendors for fiscal year 2013. As shown in the table, NARA's fee of $0.23 per cubic foot was consistent regardless of the storage quantity. Specifically, NARA's fee was higher than the fees charged by vendors 1 and 2, although it was lower than those of vendors 3 and 5. In addition, NARA's fee was lower than that of vendor 4 for quantities under 100,000 cubic feet and higher for quantities of 100,000 cubic feet or more. NARA also did not charge certain additional fees that some vendors charged. Specifically, vendors 1, 3, and 4 applied a $65, $25, and $100 fee, respectively, to a customer's account when the storage charges did not meet the customer's contractual minimum storage requirement. In addition, vendor 4 charged an administration fee of $25.12 per account for summary billing or $62.80 per account for detailed billing.

Although federal regulations call for records to be stored in one of three types of facilities—NARA-operated federal records centers, agency records centers, or commercial records storage facilities—the extent to which agency and commercial facilities are used to store records is uncertain because NARA does not know where all agencies store their records. NARA's efforts to collect data from agencies about the facilities they use to store records have yielded data that are incomplete, outdated, and of questionable validity. NARA has determined that most of its federal records center facilities are fully compliant with the standards established in regulations, but that four facilities are partially or entirely noncompliant—a situation that increases the risk of damage to the records stored in them. Although it has plans for bringing these four facilities into full compliance with the regulations, NARA has not established dates for completing its plans at two of the facilities. As a result, NARA does not have a basis for determining progress toward correcting deficiencies in those facilities that do not fully meet the standards. Additionally, although NARA took steps to prevent permanent damage to records in its facilities on a total of 55 occasions over a recent 4-year period, the federal records centers did not always keep track of the results of their efforts and were unable to provide documentation confirming they were successful in 9 cases. Therefore, NARA is not positioned to fully report on the effectiveness of its actions to minimize permanent damage to federal records.

To assist NARA in its responsibility to ensure that federal records are stored in compliant facilities, we recommend that the Archivist of the United States direct the Chief Operating Officer to take the following three actions:

Place increased priority on the collection of complete, current, and valid information from agencies about their use of agency and commercial records storage facilities.

Develop a schedule for executing plans to resolve issues at each federal records center that is not fully compliant with 36 C.F.R. Part 1234.

Clarify NARA's checklist for handling incidents that may involve permanent damage to records by including a requirement to document the results of the steps taken to minimize such damage.

NARA provided written comments on a draft of this report, which are reprinted in appendix II.
In its comments, the agency concurred with all three of our recommendations for executive action regarding facility inspections and other areas related to the safe storage of federal records. In addition, we received technical comments via email from NARA, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Archivist of the United States; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to (1) determine the types of facilities agencies use to store federal records and the extent to which NARA's data on agencies' use of storage facilities are complete, (2) evaluate the extent to which NARA has determined these facilities to be compliant with standards in 36 C.F.R. Part 1234, (3) determine what actions NARA has taken to minimize damage to records in federal records centers and the extent to which it documents such efforts, and (4) determine how NARA determines storage fees and whether fees differ among facilities.

To accomplish the first objective, we reviewed 36 C.F.R. Part 1234 and developed a thorough understanding of the regulation through discussions with NARA officials who are responsible for administering it. We then obtained lists of NARA, agency, and commercial records storage facilities from NARA. These lists included NARA's central registry of approved facilities. We corroborated the lists by comparing them with other documentation, such as facility approval memoranda and inspection schedules, as well as through interviews with agency officials. Additionally, we obtained NARA's database of agencies' records storage facilities and discussed NARA's methods for populating the database with responsible NARA officials. We determined the database to be unreliable because it was incomplete, outdated, and largely reliant on self-reported data from agencies.

For the second objective, we obtained and reviewed memoranda from NARA that indicated approval of NARA, agency, and commercial records storage facilities and the facilities' compliance with 36 C.F.R. Part 1234. We then used additional documentation, including detailed facility inspection checklists, fire inspection reports, and structural engineering reports, to determine whether support existed for NARA's approval determinations. We also discussed NARA's method for approving and inspecting facilities, as well as its plans for conducting future facility inspections, with the officials responsible for performing the inspections.

To accomplish the third objective, we reviewed NARA policies and procedures for the storage and management of federal records and compared them with applicable internal control standards. We also reviewed procedures for handling records damage in NARA records centers and documentation related to records emergency planning and training. We collected and analyzed documentation on 55 incidents that occurred at NARA records centers from January 2009 through March 2013, including reports that described NARA's actions to mitigate or reduce records damage.
We also compared the requirements in NARA's 1561 checklist to the documentation described above. Further, we interviewed NARA officials to determine the actions taken to minimize records damage in federal records centers and corroborated the officials' statements with the aforementioned documentation.

To accomplish the fourth objective, we obtained and analyzed documentation from the NARA Federal Records Centers Program and General Services Administration (GSA) schedules that identified and discussed records storage fees, and we then compared fees among records storage facilities. To determine the reliability of the data provided by NARA, we performed basic steps to ensure the data were valid and reviewed relevant information describing the data. We reviewed documentation related to the data sources, including NARA's fiscal year 2013 fee schedule, fee determination process description documents, and workload and expense projections. Although we could not independently verify the reliability of all this information, we compared the data with other available supporting documents to determine their consistency and reasonableness. We also obtained price lists from GSA's website for commercial vendors that listed facilities compliant with 36 C.F.R. Part 1234. We did not determine whether individual agencies had negotiated lower prices than those listed in the price lists. We compared storage fees for NARA and commercial vendors by extracting fee data from NARA's fee schedule and commercial vendor price lists. For our comparison, we reviewed the publicly available price lists for five commercial vendors (referred to as vendors 1-5 in our analysis). Four of the five vendors charged storage fees based on cubic feet of storage per month; the fifth charged based on the number of boxes stored. To directly compare fees established by NARA and the five vendors, we converted boxes to cubic feet for vendor 5, as illustrated in the sketch that follows this appendix. Storage fees were then arranged in order from lowest to highest.

We supplemented our analyses with interviews of NARA officials knowledgeable about the Federal Records Centers Program, including NARA's Chief Operating Officer and the program's director and assistant director. We also interviewed representatives of private sector records storage companies relevant to our study. We conducted this performance audit from November 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, the following staff made key contributions to this report: Mark Bird, Assistant Director; Sharhonda Deloach; Elena Epps; Rebecca Eyler; Jacqueline Mai; and Constantine Papanastasiou.
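To make the fee-comparison methodology concrete: the normalization and ordering steps described above amount to converting per-box prices to per-cubic-foot prices and sorting, and a full comparison must also account for the flat shortfall fees some vendors applied when storage charges fell below a contractual minimum. The Python sketch below illustrates both steps. The one-cubic-foot box size and every rate other than NARA's $0.23 are placeholders, since the underlying price lists are not reproduced here.

```python
CUBIC_FEET_PER_BOX = 1.0  # assumed box size; the actual factor depends on the price list

def per_cubic_foot(price: float, unit: str) -> float:
    """Normalize a monthly storage price to dollars per cubic foot."""
    return price / CUBIC_FEET_PER_BOX if unit == "box" else price

def monthly_bill(cubic_feet: float, rate: float,
                 contractual_minimum: float = 0.0,
                 shortfall_fee: float = 0.0) -> float:
    """Volume charge, plus a flat fee when the charge misses the contractual
    minimum (vendors 1, 3, and 4 applied $65, $25, and $100, respectively)."""
    charge = cubic_feet * rate
    if charge < contractual_minimum:
        charge += shortfall_fee
    return round(charge, 2)

# (label, monthly price, unit); every rate except NARA's $0.23 is a placeholder.
price_lists = [("NARA", 0.23, "cubic_foot"), ("vendor 5", 0.25, "box")]
for rate, label in sorted((per_cubic_foot(p, u), name) for name, p, u in price_lists):
    print(f"{label}: ${rate:.2f} per cubic foot per month")

# A lower per-cubic-foot rate can still cost more at small volumes
# once a shortfall fee applies (placeholder rate and minimum).
print(monthly_bill(200, rate=0.23))                                # NARA: 46.0
print(monthly_bill(200, rate=0.20,
                   contractual_minimum=65.0, shortfall_fee=65.0))  # vendor: 105.0
```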
NARA manages the Federal Records Centers Program, which is to ensure the storage and preservation of federal records, including paper, photographic, audio, video, and film records. Records storage facilities are required to meet certain minimum structural, environmental, property, and life safety standards set forth in federal regulations. GAO was requested to conduct a study of key aspects of the program. GAO's objectives were to (1) determine the types of facilities agencies use to store federal records and the extent to which NARA's data on agencies' use of storage facilities are complete, (2) evaluate the extent to which NARA has determined these facilities to be compliant with standards in 36 C.F.R. Part 1234, (3) determine what actions NARA has taken to minimize damage to records in federal records centers and the extent to which it documents such efforts, and (4) determine how NARA determines storage fees and whether fees differ among facilities. To do so, GAO obtained, analyzed, and corroborated documentation on records storage facilities, identified and compared records storage fees, and interviewed NARA officials.

Agencies are to store federal records in three types of facilities:

Federal records centers: The National Archives and Records Administration (NARA) operates 18 federal records centers comprising 24 facilities (buildings) located across the United States. Each facility includes storage areas, referred to as bays.

Agency records centers: Agencies also establish and operate records centers for storing their own records. As of May 2013, NARA had identified 18 agency records centers operated by 6 agencies or offices.

Commercial records storage facilities: Agencies also use private sector commercial facilities. As of May 2013, agencies reported that 12 vendors provided 22 facilities, which were used by 11 agencies.

These facilities notwithstanding, NARA does not know where all agencies are storing records. NARA has solicited data from agencies about their use of agency records centers and commercial records storage facilities, but not all agencies have submitted data. Further, the data agencies submitted--mostly from 2008 and 2009--are now outdated. As a result, NARA cannot be assured that all agencies are using one of the three types of authorized facilities.

NARA determined that 20 of its 24 federal records center facilities were fully compliant with 36 C.F.R. Part 1234 because all of their bays satisfied the regulation. Of the remaining 4, 2 facilities with inadequate shelving were partially compliant, 1 facility with insufficient protections against pipe leaks was not compliant, and the remaining facility was to be replaced. As of July 2013, NARA had plans to bring these 4 facilities into full compliance but did not have a schedule for completing the plans at 2 of the facilities. As a result, NARA does not have a basis for determining progress toward correcting deficiencies in those facilities that do not yet fully meet the standards. Also, while NARA had approved 10 agency records centers and 22 commercial records storage facilities, it had inspected only 1 of the 18 agency records centers and 13 of the 22 commercial records storage facilities. Until NARA completes planned inspections of all remaining facilities, it cannot be reasonably assured that agencies are storing records in facilities that meet standards.
To facilitate the protection of federal records from permanent damage, NARA had generally taken steps consistent with a checklist it requires federal records centers to follow when incidents (e.g., roof or sprinkler leaks) occur. However, it did not always document the results of its efforts to minimize damage because the checklist does not include a step for doing so. Specifically, of the 55 incidents that occurred from January 2009 through March 2013, NARA provided documentation about the final outcome for 46 incidents. For the remaining 9 incidents, it could not provide documentation that included the final results of its efforts. Without a process that includes documenting the results of its efforts, NARA is not positioned to fully report on the effectiveness of its actions to minimize damage to federal records and to provide a third party with information to validate the results of its efforts. Storage fees are determined by NARA's Federal Records Centers Program and the Office of the Chief Financial Officer using the existing fee schedule, expense projections, and workload projections. The storage fees charged by NARA in fiscal year 2013 were comparable to fees charged by commercial vendors in that same time frame. For example, NARA's fee of $0.23 per cubic foot was higher than fees charged by two vendors and lower than fees charged by two other vendors. GAO recommends that NARA (1) obtain complete data on where agencies are storing records, (2) develop a schedule to bring noncompliant storage areas into compliance with 36 C.F.R. Part 1234, and (3) establish a requirement to document the results of efforts to minimize damage to federal records. NARA concurred with the recommendations.
The National Park System is one of the most visible symbols of who we are as a land and a people. As the manager of this system, the National Park Service is caretaker of many of the nation’s most precious natural and cultural resources, ranging from the fragile ecosystems of Arches National Park in Utah to the historic structures of Philadelphia’s Independence Hall and the granite faces of Mount Rushmore in South Dakota. Over the past 30 years, more than a dozen major studies of the National Park System by independent experts as well as the Park Service itself have pointed out the importance of guiding resource management through the systematic collection of data—sound scientific knowledge. The recurring theme in these studies has been that to manage parks effectively, managers need information that allows for the detection and mitigation of threats and damaging changes to resources. Scientific data can inform managers, in objective and measurable terms, of the current condition and trends of park resources. Furthermore, the data allow managers to make resource management decisions based on measurable indicators rather than relying on judgment or general impressions. Managing with scientific data involves both collecting baseline data about resources and monitoring their condition over time. Park Service policy calls for managing parks on this basis, and park officials have told us that without such information, damage to key resources may go undetected until it is so obvious that correcting the problem is extremely expensive—or worse yet, impossible. Without sufficient information depicting the condition and trends of park resources, the Park Service cannot adequately perform its mission of preserving and protecting these resources. While acknowledging the importance of obtaining information on the condition of park resources, the Park Service has made only limited progress in developing it. Our reviews have found that information about many cultural and natural resources is insufficient or absent altogether. This was particularly true for park units that feature natural resources, such as Yosemite and Glacier National Parks. I would like to talk about a few examples of the actual impact of not having information on the condition of park resources, as presented in our 1995 report. Generally, managers at culturally oriented parks, such as Antietam National Battlefield in Maryland or Hopewell Furnace National Historic Site in Pennsylvania, have a greater knowledge of their resources than managers of parks that feature natural resources. Nonetheless, the location and status of many cultural resources—especially archaeological resources—were largely unknown. For example, at Hopewell Furnace National Historic Site, an 850-acre park that depicts a portion of the nation’s early industrial development, the Park Service has never conducted a complete archaeological survey, though the site has been in the park system since 1938. A park official said that without comprehensive inventory and monitoring information, it is difficult to determine whether the best management decisions about resources are being made. The situation was the same at large parks established primarily for their scenic beauty, which often have cultural resources as well. For example, at Shenandoah National Park in Virginia, managers reported that the condition of more than 90 percent of the identified sites with cultural resources was unknown. 
Cultural resources in this park include buildings and industrial artifacts that existed prior to the formation of the park. In our work, we found that many of these sites and structures have already been damaged, and many of the remaining structures have deteriorated into the surrounding landscape. The tragedy of not having sufficient information about the condition and trends of park resources is that when cultural resources, like those at Hopewell Furnace and Shenandoah National Park, are permanently damaged, they are lost to the nation forever. Under these circumstances, the Park Service's mission of preserving these resources for the enjoyment of future generations is seriously impaired. At the parks we visited that showcase natural resources, even less was known about the condition of those resources and the trends occurring to them over time than was known about cultural resources. For example:

— At California's Yosemite National Park, officials told us that virtually nothing was known about the types or numbers of species inhabiting the park, including fish, birds, and such mammals as badgers, river otters, wolverines, and red foxes.

— At Montana's Glacier National Park, officials said most wildlife-monitoring efforts were limited to four species protected under the Endangered Species Act.

— At Padre Island National Seashore in Texas, officials said they lacked detailed data about such categories of wildlife as reptiles and amphibians as well as mammals such as deer and bobcats.

Park managers told us that—except for certain endangered species, such as sea turtles—they had inadequate knowledge about whether the condition of wildlife was improving, declining, or staying the same. This lack of inventory and monitoring information affects not only what is known about park resources, but also the ability to assess the effect of management decisions. After 70 years of stocking nonnative fish in various lakes and waterways in Yosemite, for example, park officials realized that more harm than good had resulted. Nonnative fish outnumber native rainbow trout by a 4-to-1 margin, and the stocking reduced the numbers of at least one federally protected species (the mountain yellow-legged frog). The Park Service's lack of information on the condition of the vast array of resources it must manage becomes even more significant when one considers the many known threats that can adversely affect these resources. Since at least 1980, the Park Service has worked to identify threats to its resources, such as air and water pollution or vandalism, and to develop approaches for dealing with them. However, our recent reviews have found that sound scientific information on the extent and severity of these threats is limited. Yet preventing or mitigating these threats and their impact is at the core of the agency's mission to preserve and protect the parks' resources. We have conducted two recent reviews of threats to the parks, examining external threats in 1994 and internal threats in 1996. Threats that originate outside of a park are termed external and include such things as off-site pollution, the sound of airplanes flying overhead, and the sight of urban encroachment. Protecting park resources from the damage resulting from external threats is difficult because these threats are, by their nature, beyond the direct control of the Park Service.
Threats that originate within a park are termed internal and include such activities as heavy visitation, the impact of private inholdings within park grounds, and vandalism. In our nationwide survey, park managers identified more than 600 external threats, and in a narrower review of just eight park units, managers identified more than 100 internal threats. A dominant theme in both reports was that managers did not have adequate information to determine the impact of these threats and correctly identify their sources. For the most part, park managers said they relied on judgment, coupled with limited scientific data, to make these determinations. For some types of damage, such as the defacement of archaeological sites, observation and judgment may provide ample information to substantiate the extent of the damage. But for many other types of damage, Park Service officials agree that observation and judgment are not enough. Scientific research will generally provide better evidence about the types and severity of damage occurring and any trends in the threats' severity. Scientific research also generally provides a more reliable guide for mitigating threats. Two examples will help illustrate this point. In California's Redwood National Park, scientific information about resource damage is helping mitigation efforts. Scientists used research data that had been collected over a period of time to determine the extent to which damage occurring to trees, fish, and other resources could be attributed to erosion from logging and related road-building activities. On the basis of this research, the park's management is now in a position to begin reducing the threat by advising adjacent landowners on better logging and road-building techniques that will reduce erosion. The second example, from Crater Lake National Park in Oregon, shows the disadvantage of not having such information. The park did not have access to wildlife biologists or forest ecologists to conduct scientific research identifying the extent of damage occurring from logging and its related activities. For example, damage from logging, as recorded by park staff using observation and a comparison of conditions in logged and unlogged areas, has included the loss of habitat and migration corridors for wildlife. However, without scientific research, park managers are not in a sound position to negotiate with the Forest Service and the logging community to reduce the threat. The information that I have presented to you today is not new to the National Park Service. Park Service managers have long acknowledged that to improve management of the National Park System, more sound scientific information is needed on the condition of resources and the threats to those resources. The Park Service has taken steps to correct the situation. For example, automated systems are in place to track illegal activities such as looting, poaching, and vandalism, and an automated system is being developed to collect data on deficiencies in preserving, collecting, and documenting cultural and natural resource museum collections. For the most part, however, relatively limited progress has been made in gathering information on the condition of resources. When asked why more progress is not being made, Park Service officials generally told us that funds are limited and competing needs must be addressed.
Our 1995 study found that funding increases for the Park Service have mainly been used to accommodate upgraded compensation for park rangers and to deal with additional park operating requirements, such as safety and environmental regulations. In many cases, adequate funds are not made available to the parks to cover the cost of complying with additional operating requirements, so park managers have to divert personnel or dollars from other activities, such as resource management, to meet these needs. In addition, we found that, to some extent, these funds were used to cope with a higher number of park visitors. Making more substantial progress in improving the scientific knowledge base about resources in the park system will cost money. At a time when federal agencies face tight budgets, the park system continues to grow as new units are added—37 since 1985—and the Park Service faces such pressures as higher visitation rates and an estimated $4 billion backlog of costs related to just maintaining existing park infrastructure, such as roads, trails, and visitor facilities. Dealing with these challenges calls for the Park Service, the administration, and the Congress to make difficult choices involving how national parks are funded and managed. Given today's tight fiscal climate and the unlikelihood of substantially increased federal appropriations, our work has shown that the choices for addressing these conditions involve (1) increasing the amount of financial resources made available to the parks by expanding opportunities for parks to generate more revenue, (2) limiting or reducing the number of units in the park system, and (3) reducing the level of visitor services. Regardless of which, if any, of these choices is made, without an improvement in the Park Service's ability to collect the scientific data needed to properly inventory park resources and monitor their condition over time, the agency cannot adequately perform its mission of preserving and protecting the resources entrusted to it. This concludes my statement, Mr. Chairman. I would be happy to respond to any questions you or other Members of the Subcommittee may have.
GAO discussed its views on the National Park Service's (NPS) knowledge of the condition of the resources that the agency is entrusted to protect within the National Park System. GAO noted that: (1) GAO's work has shown that although NPS acknowledges, and its policies emphasize, the importance of managing parks on the basis of sound scientific information about resources, today such information is seriously deficient; (2) frequently, baseline information about natural and cultural resources is incomplete or nonexistent, making it difficult for park managers to know what condition the resources are in and whether that condition is deteriorating, improving, or staying the same; (3) at the same time, many of these park resources face significant threats, ranging from air pollution, to vandalism, to the development of nearby land; (4) however, even when these threats are known, NPS has limited scientific knowledge about their severity and their impact on affected resources; (5) these concerns are not new to NPS, and in fact, the agency has taken steps to improve the situation; (6) however, because of limited funds and other competing needs that must be addressed, NPS has made relatively limited progress in correcting this deficiency of information; (7) there is no doubt that it will cost money to make more substantial progress in improving the scientific knowledge base about park resources; (8) dealing with this challenge will require NPS, the administration, and the Congress to make difficult choices involving how parks are funded and managed; and (9) however, without such an improvement, NPS will be hindered in its ability to make good management decisions aimed at preserving and protecting the resources entrusted to it.
NPS Projected Returns from Concessioners. GAO/RCED-96-48R. November 28, 1995. BACKGROUND: Pursuant to a congressional request, GAO examined the assumptions that the National Park Service (NPS) used to project future financial returns to the government from concessioners through year 2002. GAO noted that NPS: (1) overstated its projections of future returns under the Concessions Policy Act by assuming it could increase franchise fees as contracts expired and that monies and franchise fees would remain in the same proportions; (2) overstated its projections of future returns under H.R. 773 and S. 309 by assuming that the bills would increase competition and that it would gradually extinguish the concessioners' possessory interest; and (3) understated its projections of future returns under H.R. 2491 by assuming that the bill's performance incentive would impede competition. Land Management Systems: Progress and Risks in Developing BLM's Land and Mineral Record System. GAO/AIMD-95-180. August 31, 1995. ABSTRACT: The Bureau of Land Management's (BLM) Automated Land and Mineral Record System/Modernization, which is estimated to cost $428 million, is intended to improve BLM's ability to record, maintain, and retrieve land description, ownership, and use information. To date, the Bureau has been completing most of the project's tasks according to the schedule milestones set in 1993. In coming months, the work will become more difficult as BLM and the primary contractor try to complete, integrate, and test the new software system and meet the current schedule. The Bureau is trying to maintain the project schedule, but slippages may yet occur because little time was allocated to deal with unanticipated problems. BLM recently sought to obtain independent verification and validation to ensure that the new system software meets the Bureau's requirements. A key risk remains, however. BLM's plans include stress testing only a portion of the Automated Land and Mineral Record System/Modernization, rather than the entire project, to ensure that all systems and technology can successfully process workloads expected during peak operating periods. By limiting the stress test, BLM cannot be certain that the system's information technology will perform as intended during peak workloads. National Parks: Difficult Choices Need to Be Made About the Future of the Parks. GAO/RCED-95-238. August 30, 1995. ABSTRACT: GAO concludes that there is cause for concern about the health of national parks. Visitor services were deteriorating at most of the park units that GAO reviewed. Services were being cut back, and the condition of many trails, campgrounds, and other facilities was declining. Trends in resource management were less clear because most park managers lacked enough data to determine the overall condition of their parks' natural and cultural resources. In some cases, parks lacked an inventory of the resources under their protection. Two factors strongly affected the levels of visitor services and the management of park resources—(1) additional operating requirements placed on parks by laws and administrative requirements and (2) increased visitation, which drives up the parks' operating costs. These two factors seriously eroded funding increases since the mid-1980s. The national park system is at a crossroads. Although the system continues to grow, conditions at the parks have been deteriorating and the dollar amount of the maintenance backlog has soared from $1.9 billion in 1988 to more than $4 billion today.
Congress is faced with the following difficult policy choices: (1) increasing the amount of financial resources going to the parks, (2) limiting or reducing the number of units in the park system, and (3) reducing the level of visitor services. The Park Service should be able to stretch available resources by operating more efficiently and by improving its financial management and performance measuring systems. Federal Lands: Information on the Use and Impact of Off-Highway Vehicles. GAO/RCED-95-209. August 18, 1995. would be further hampered. Some BLM and Forest Service locations have targeted their monitoring and enforcement to the most heavily used or the most environmentally sensitive lands. Also, some have formed coalitions with state governments, local communities, and private groups to supplement their resources for off-highway vehicle programs. As the agencies continue to inventory, map, and post signs to identify their off-highway vehicle areas, roads, and trails, they should be able to implement the executive orders more fully. Natural Resources Management Issue Area Plan: Fiscal Years 1996-97. GAO/IAP-95-16. August 1, 1995. BACKGROUND: GAO presented its Natural Resources Management issue area plan for fiscal years 1996 through 1997. FINDINGS: GAO plans to assess: (1) ways to obtain a better return on the sale or use of natural resources on federal lands or eliminate or reduce federal subsidies; (2) efficiency improvements within and coordination among the four primary federal land management agencies; (3) improvements in collaboration and consensus-building among federal and nonfederal stakeholders to address problems or issues related to natural resources; and (4) whether agencies are meeting existing production and conservation requirements. Federal Lands: Views on Reform of Recreation Concessioners. GAO/T-RCED-95-250. July 25, 1995. ABSTRACT: This testimony summarizes GAO’s work on federal policy governing the recreation concessioners and provides GAO’s views on four bills pending before Congress. Federal agencies’ concessions policies and practices are based on at least 11 different laws and, as a result, vary considerably. GAO concludes that more competition is needed in awarding concessions contracts and that the federal government needs to obtain a better return from concessioners for the use of its lands, including obtaining fair market value for the fees it charges ski operators. GAO supports the changes proposed by the four bills to current concessions policies and practices. National Parks: Views on the Denver Service Center and Information on Related Construction Activities. GAO/RCED-95-79. June 23, 1995. ABSTRACT: One of the major organizational units of the National Park Service is the Denver Service Center, which supports construction activities throughout the park system. The Center works with individual parks in planning, designing, and building projects, which range from rehabilitating historic structures to building new visitor centers to repairing and replacing utility systems. Parks are expected to use the Center’s services for projects costing more than $250,000, although exceptions are granted if the parks have the expertise needed for the projects and they receive approval from Park Service headquarters. In response to congressional concerns about the quality of services provided by the Center, GAO surveyed park managers on the quality and the timeliness of those services. 
This report also (1) describes how the Park Service sets priorities for funding construction projects and how the priorities may be modified during congressional consideration of the Park Service’s annual appropriations requests, (2) describes the process the Park Service uses to develop cost estimates for projects, and (3) provides information on the makeup of projects’ contingency and supervision funds. National Park Service: Difficult Choices Need to Be Made on the Future of the Parks. GAO/T-RCED-95-124. March 7, 1995. ABSTRACT: The overall level of visitor services offered by the National Park Service is deteriorating. Visitor services are being cut back and the condition of many trails, campgrounds, exhibits, and other facilities is declining. The Park Service estimates that since 1988 the backlog of deferred maintenance has more than doubled to $4 billion. The following two factors have had a major impact on the level of visitor services and resource management activities: (1) additional operating requirements resulting from more than 20 federal laws affecting the parks and (2) an increase in the number of visitors. Since substantial increases in appropriations seem unlikely in today’s tight budget climate, difficult choices must be made on the future of the national parks. These choices involve generating more revenue within the parks, limiting the number of parks in the system, and reducing the level of visitor services and expectations. Federal Lands: Information on Land Owned and on Acreage with Conservation Restrictions. GAO/T-RCED-95-117. March 2, 1995. ABSTRACT: During fiscal years 1964-93, the amount of federal land managed by the Forest Service, the Bureau of Land Management, the Fish and Wildlife Service, and the National Park Service decreased by 77 million acres, from about 700 million acres to about 623 million acres. However, the decrease is skewed because of two unique land transfers in Alaska—the transfer of about 76 million acres of federal land to the state of Alaska in accordance with the Alaska Statehood Act of 1958 and the transfer of about 36 million acres to native Alaskans in accordance with the Alaska Native Claims Settlement Act of 1971. Excluding these two large land transfers, the amount of land managed by the four agencies actually increased by 34 million acres. During the same 29-year period, the number of acres managed by the four agencies that were set aside for conservation purposes increased from about 51 million acres at the end of fiscal year 1964 to about 271 million acres at the end of fiscal year 1993. National Park Service: Better Management and Broader Restructuring Efforts Are Needed. GAO/T-RCED-95-101. February 9, 1995. ABSTRACT: The National Park Service lacks necessary financial data, internal controls, and performance measures that would allow the agency to shift resources among competing goals, rank priorities so that the most pressing issues receive attention, and link the agency’s planning process directly to budget decisions to better allocate resources. Although the Park Service’s restructuring plan addresses some of the challenges facing the agency, such as the need to meet the demands of an expanding system, growing numbers of visitors, and increasingly complex resource protection problems, the plan does not address the potential to improve operations through land management collaboration among Interior’s three land management agencies and Agriculture’s Forest Service. 
It also does not consider which functions or programs could be eliminated or turned over to state or local governments or to the private sector. National Parks: Information on the Condition of Civil War Monuments at Selected Sites. GAO/RCED-95-80FS. February 1, 1995. and pedestals suffering from the following problems: broken or missing parts, chips and cracks, and wear and erosion. The most common causes of these problems are weathering and vandalism. Other causes include erosion, structural deficiencies, and neglect. Park officials estimate the cost to repair 34 of the monuments at $2,403,000. Cost estimates were not provided for the other 20 monuments because officials were unsure what work was needed or how much it would cost. Federal Lands: Information on Land Owned and on Acreage with Conservation Restrictions. GAO/RCED-95-73FS. January 30, 1995. ABSTRACT: During fiscal years 1964-93, the amount of federal land managed by the Forest Service, the Bureau of Land Management, the Fish and Wildlife Service, and the National Park Service decreased by 77 million acres, from about 700 million acres to about 623 million acres. However, the decrease is skewed because of two unique land transfers in Alaska—the transfer of about 76 million acres of federal land to the state of Alaska in accordance with the Alaska Statehood Act of 1958 and the transfer of about 36 million acres to native Alaskans in accordance with the Alaska Native Claims Settlement Act of 1971. Excluding these two large land transfers, the amount of land managed by the four agencies actually increased by 34 million acres. During the same 29-year period, the number of acres managed by the four agencies that were set aside for conservation purposes increased from about 51 million acres at the end of fiscal year 1964 to about 271 million acres at the end of fiscal year 1993. GAO summarized this report in testimony before Congress; see: Federal Lands: Information on Land Owned and on Acreage With Conservation Restrictions, by John H. Anderson, Jr., Associate Director for Natural Resources Management Issues, before the House Committee on Resources. GAO/T-RCED-95-117, Mar. 2, 1995 (11 pages). Monuments at Vicksburg National Military Park. GAO/RCED-95-55R. November 15, 1994. officials estimate that it could cost more than $1 million to repair one of the monuments and $3,200 to $4,000 to repair the other monument. Forest Service: Land Acquisitions Within the Lake Tahoe Basin. GAO/RCED-95-22. October 31, 1994. ABSTRACT: The Santini-Burton Act, enacted in 1980, authorized the sale of about 7,000 acres of federal lands within Clark County, Nevada, to allow more orderly development of the communities there. The federal lands were owned by the Bureau of Land Management. The act also required the bulk of the proceeds from the land sales to be used for a buyout program in which the government would purchase environmentally sensitive private lands around Lake Tahoe in an effort to stem further degradation of the lake. Concerns have been raised about whether property owners in the Lake Tahoe Basin have been treated fairly when the lands were acquired under the act.
This report determines the extent to which (1) the Forest Service acquired lands within the basin under the act's buyout program, (2) the classification of lands within the basin as environmentally sensitive may have harmed their value, and (3) the Forest Service's acquisition of environmentally sensitive land in the basin may have involved the federal government taking private property under the Fifth Amendment to the U.S. Constitution. National Park Service: Reexamination of Employee Housing Program Is Needed. GAO/RCED-94-284. August 30, 1994. ABSTRACT: Since 1916, the National Park Service has provided rental housing in parks to many of its employees. The Park Service has an inventory today of about 4,700 housing units. Nearly half of the housing inventory is more than 30 years old. Park Service estimates of what it would cost to repair, rehabilitate, and replace this housing inventory have increased significantly during the past several years; the total estimate is now more than half a billion dollars. This report (1) describes the Park Service's housing program and compares it with the housing programs run by two other large land management agencies—the Forest Service and the Bureau of Land Management—and (2) identifies options that are available to the Park Service to deal with its housing problems. Federal Lands: Fees for Communications Sites Are Below Fair Market Value. GAO/RCED-94-248. July 12, 1994. ABSTRACT: The Forest Service and the Bureau of Land Management (BLM) are the two major federal agencies whose lands are used as sites to broadcast radio, television, and other electronic signals. These sites, mainly located in the western United States, are for the most part leased to private entities that build and operate communications facilities. The annual fees being charged for such communications sites are, in many cases, significantly below fair market value. Forest Service and BLM officials estimate that charging fees on the basis of fair market value would boost total federal revenues by more than 500 percent—from about $4 million to about $23 million annually. Although the Forest Service and BLM have been trying to set fees reflecting fair market value, annual appropriations legislation has limited the amount by which these fees can be increased. As long as these limits are in effect, the fees charged will not reflect fair market value. Both the Forest Service and BLM lack reliable and complete information needed to manage their communications site programs. In addition, many unauthorized communications users are operating on Forest Service lands, and annual inspections to ensure that the sites are properly maintained are rarely done. GAO summarized this report in testimony before Congress; see: Federal Lands: Fees for Communications Sites Are Below Fair Market Value, by John H. Anderson, Jr., Associate Director for Natural Resources Management Issues, before the Subcommittee on the Environment, Energy, and Natural Resources, House Committee on Government Operations, and the Subcommittee on Natural Parks, Forests, and Public Lands, House Committee on Natural Resources. GAO/T-RCED-94-262, July 12 (13 pages). Federal Lands: Fees for Communications Sites Are Below Fair Market Value. GAO/T-RCED-94-262. July 12, 1994. Although the Forest Service and BLM have been trying to set fees reflecting fair market value, annual appropriations legislation has limited the amount by which these fees can be increased. As long as these limits are in effect, the fees charged will not reflect fair market value.
Both the Forest Service and BLM lack reliable and complete information needed to manage their communications site programs. In addition, many unauthorized communications users are operating on Forest Service lands, and annual inspections to ensure that the sites are properly maintained are rarely done. Natural Resources: Lessons Learned Regarding Public Land Withdrawn for Military Use. GAO/T-NSIAD-94-227. June 29, 1994. ABSTRACT: Military operations had not been hampered at the six withdrawn sites GAO visited in Alaska, Arizona, Nevada, and New Mexico, but these operations had constrained resource management activities. Military commanders at five of the sites said that they changed some training exercises to accommodate concerns for wildlife; at one site, officials expressed concern about meeting training needs because of environmental constraints. However, the Defense Department restricted access to three sites, making it difficult for the Bureau of Land Management (BLM) to carry out resource management activities. Such restrictions and the overall military presence have led BLM to assign a low priority to resource management on military lands. A lack of information on resource conditions prevents an overall assessment of the impacts. The six sites could improve resource management by enhancing interagency cooperation and by strengthening systems to monitor resource management actions. Resource management at the Goldwater Range in Arizona was an example of effective cooperation between BLM and the military. Federal Lands: Land Acquisitions Involving Nonprofit Conservation Organizations. GAO/RCED-94-149. June 15, 1994. Department buy land from or with the help of nonprofits, (2) adequacy of controls for protecting the government’s interest in such acquisitions, and (3) extent to which nonprofits realize large financial gains in such transactions. Hawaiian Homelands: Hawaii’s Efforts to Address Land Use Issues. GAO/RCED-94-24. May 26, 1994. ABSTRACT: Although the Interior and Justice Departments maintain that the federal government has never had a trust responsibility to native Hawaiians, the state of Hawaii disagrees. Hawaiian state courts and the state’s Attorney General have concluded that the federal government had a trust responsibility during the territorial period, and the state’s Attorney General believes that such a responsibility continues today. In GAO’s opinion, territorial governors lacked authority to withdraw Hawaiian homelands for nonfederal public purposes through executive orders and proclamations. However, many of these unauthorized withdrawals appear to have (1) benefitted native Hawaiians or (2) involved lands that were unsuitable for authorized homeland uses, such as homesteading or leasing, during the territorial period. Territorial governors also lacked authority under the Hawaiian Homes Commission Act to withdraw homelands for federal purposes through executive orders or other means. Because such withdrawals took place more than 50 years ago, there is no guarantee that all information relevant to these withdrawals is still available. Therefore, GAO is unable to express an opinion on the propriety of homeland withdrawals for federal purposes. GAO believes that the methodology used by a consultant to the state to estimate the lost income from and the current market value for parcels of lands was generally reasonable. Natural Resources: Defense and Interior Can Better Manage Land Withdrawn for Military Use. GAO/NSIAD-94-87. April 26, 1994. 
that made it hard for the Bureau of Land Management (BLM) to carry out resource management activities. These restrictions and the military presence led BLM to assign a low priority to resource management on military land. At three sites, BLM allocated considerably less money to manage lands used for military training than for other property under its care. All six sites had opportunities to improve resource management by strengthening cooperation between BLM and the military and by beefing up monitoring of progress on planned resource management actions. This report includes photographs of the terrain at the six sites. Hurricane Iniki Expenditures. GAO/RCED-94-132R. April 18, 1994. BACKGROUND: GAO reviewed whether the U.S. Fish and Wildlife Service (FWS) used emergency appropriated funds for the repair and replacement of national wildlife refuge facilities damaged by Hurricane Iniki. GAO noted that: (1) FWS did not have authorization to use emergency funds for reconstruction work at two refuges; (2) FWS planned to use emergency funds for enlarging selected buildings at one refuge and remodeling the visitors’ center at another refuge; (3) approximately $12.8 million in emergency disaster assistance was appropriated to FWS for construction projects; and (4) of the amount appropriated, FWS allocated $6.2 million for the rehabilitation of the refuges. Forest Service Management: Issues To Be Considered in Developing a New Stewardship Strategy. GAO/T-RCED-94-116. February 1, 1994. ABSTRACT: Budgetary constraints and scientific findings on ecosystem management will challenge the Forest Service as never before to find new ways to achieve program goals with fewer resources. GAO testified that the Forest Service needs to work closely with Congress to get a better return on the sale or use of natural resources on public lands. It also needs to work with Congress and other federal land management agencies to find ways to work more efficiently and to manage federal lands so as to preserve the nation’s natural resources and sustain their long-term economic productivity. GAO believes that a coordinated interagency strategy may be needed to link Forest Service reforms to changes being considered by other federal land management agencies. The goal would be to coordinate and integrate these programs, activities, and functions so that these agencies can function as a unit on the local level. The Gettysburg Address: Issues Related to Display and Preservation. GAO/RCED-94-12. January 26, 1994. ABSTRACT: Of the five known original manuscripts of the Gettysburg Address, two are in the collection of the Library of Congress. Since 1979, the Library’s two drafts have been displayed during the spring and the summer at the Gettysburg National Military Park, which is run by the Park Service. The Library plans to substitute a high-quality facsimile for display at the park after 1994, a move the Park Service opposes. This report discusses (1) the risks inherent in exhibiting a draft at the park or elsewhere, (2) whether the Park Service has met the Library’s exhibition and preservation requirements and can meet future requirements, and (3) the estimated cost of exhibiting the document at the park in the current or an upgraded facility versus the cost of building a comparable facility at the Library. 
GAO notes that exhibiting the drafts at the park has allowed millions of Americans to see the original documents in a historic setting and that the Park Service seems capable of meeting evolving exhibition and preservation requirements. The conference report accompanying the fiscal year 1994 legislative branch appropriations act supports exhibiting an original draft in Gettysburg and encourages the Library and the Park Service to reopen discussions on extending the loan of the address. Ultimately, it is Congress' call as to where the drafts should be displayed. National Park Service: Activities Outside Park Borders Have Caused Damage to Resources and Will Likely Cause More. GAO/RCED-94-59. January 3, 1994. wildlife and habitat. Furthermore, they expect that virtually all of the threats will inflict additional damage during the next five years. Although park managers say that action has been taken to counter more than half of the threats, this typically involved community outreach, which requires the cooperation of multiple parties and often is the first step toward minimizing damage to park resources. Federal Lands: Public Land Access. GAO/T-RCED-94-72. November 9, 1993. ABSTRACT: According to managers at the Forest Service and the Bureau of Land Management (BLM), access to more than 50 million acres of public land in the United States is inadequate, a situation that can potentially reduce the public's recreational opportunities and interfere with the government's land management. Private landowners are increasingly unwilling to grant public access across their land because of concerns about vandalism and potential liability or because of desires for privacy or exclusive personal use. To overcome access problems, the Forest Service and BLM may acquire all rights and interests associated with the land or obtain perpetual easements. In fiscal years 1989-91, the Forest Service and BLM acquired permanent legal public access to about 4.5 million acres of federal land. The two agencies had plans as of October 1991 to open another 9.3 million acres of federal land to the public. Fisheries Management: Administration of the Sport Fish Restoration Program. GAO/RCED-94-4. November 8, 1993. ABSTRACT: The long-term decline in the quality of sport fishing in the United States prompted the creation in 1950 of the Sport Fish Restoration Program, which seeks to restore, conserve, and enhance the nation's sport fishery resources. During fiscal years 1988-92, the program received nearly $1 billion in federal funding. In response to congressional concerns about the program's rapid expansion and about whether program money was being used for its intended purposes, this report determines (1) the extent to which the Fish and Wildlife Service (FWS) used these funds to run the program, (2) whether FWS' use of program funds for special investigations helped the agency to achieve the program's goals, (3) whether the states allocated the required amount of funds to freshwater and marine projects, and (4) the extent to which the states programmed funds to enhance fish habitat. GAO limited its review to five coastal states—California, Florida, North Carolina, Texas, and Washington—that historically have either received the largest apportionments of program funds or have underwritten a diverse range of sport fish projects. Department of the Interior: Transfer of the Presidio From the Army to the National Park Service. GAO/RCED-94-61. October 26, 1993.
ABSTRACT: The proposed uses of the Presidio Army Post under the Park Service’s preferred alternative are generally consistent with the goal of creating a Golden Gate National Recreation Area. The extent to which the costs to rehabilitate the Presidio’s buildings and run the Presidio will be offset by tenant payments and philanthropic donations, however, is unknown. Thus, the level of future annual appropriations needed to manage the Presidio cannot be estimated with any certainty at this time. Given the costs and the potential impact of the Presidio’s rehabilitation on the Park Service’s deferred maintenance and reconstruction backlog, close oversight by the Department of the Interior and Congress is warranted. In addition, once an alternative for managing the Presidio is decided upon, the Park Service needs to establish a plan that will (1) prioritize the objectives, (2) identify their associated costs and funding sources, and (3) estimate their completion dates. GAO summarized this report in testimony before Congress; see: Department of the Interior: Transfer of the Presidio From the Army to the National Park Service, by James Duffus III, Director of Natural Resources Management Issues, before the Subcommittee on National Parks, Forests and Public Lands, House Committee on Natural Resources. GAO/T-RCED-94-64, Oct. 26, 1993 (11 pages). Department of the Interior: Transfer of the Presidio From the Army to the National Park Service. GAO/T-RCED-94-64. October 26, 1993. ABSTRACT: This testimony summarizes GAO/RCED-94-61, above. National Park Service: Condition of and Need for Employee Housing. GAO/RCED-93-192. September 30, 1993. ABSTRACT: The National Park Service, which has been housing park employees since 1916, now has an inventory of about 5,200 housing units. Park Service records suggest that about 40 percent of this inventory is in “good” or “excellent” condition, needing no more than routine maintenance; about 15 percent was rated “poor” to “obsolete,” requiring extensive repairs. Most of the Park Service housing is used to shelter (1) seasonal employees, (2) permanent employees at isolated parks, and (3) permanent employees at more-accessible parks who provide visitor services or protect government property. GAO questioned the justification for about 12 percent of the units. For example, at 11 nonisolated parks GAO visited, park managers subjectively determined the need for housing instead of relying on an analysis of local housing availability, as required by Park Service guidance. 
GAO could not verify the accuracy of the Park Service’s $546 million estimate for employee housing repair and replacement. Park Service officials claim that a sizable backlog of repairs exists because rental income has covered only about half of all maintenance costs and operating funds have not been enough to make up the difference. Rental income has been limited because (1) the Park Service reduces its rates because of factors such as isolation and lack of amenities and (2) Congress has set a cap on rental rate increases. Bureau of Reclamation: Unauthorized Recreation Facilities at Two Reclamation Projects. GAO/RCED-93-115. September 16, 1993. forced to comply with the law or obtain specific congressional authorization to continue running the facilities at taxpayers’ expense. Federal Lands: Improvements Needed in Managing Short-Term Concessioners. GAO/RCED-93-177. September 14, 1993. ABSTRACT: Nationwide, about 6,000 short-term agreements (of 5 years or less) exist under which concessioners provide goods and services to the public on federal land. The services they provide include sightseeing tours and guided fishing, hunting, and rafting trips. This is one in a series of reports on concessioners’ operations on federal recreation land. GAO reviews the federal government’s policy and practices for (1) evaluating short-term concessioners’ overall performance; (2) ensuring that short-term concessioners comply with federal, state, and local health and safety laws and regulations; (3) ensuring the reasonableness of the prices charged to the public by short-term concessioners; and (4) setting fees for the use of federal land by short-term concessioners. Federal Lands: Little Progress Made in Improving Oversight of Concessioners. GAO/T-RCED-93-42. May 27, 1993. ABSTRACT: Concessioners are often the primary operators in recreation areas containing some of the nation’s greatest national treasures. Despite GAO’s warnings during the past three years, however, federal agencies are still doing a poor job of managing concessioners on federal land. The agencies lack enough data to adequately manage concession operations, they cannot demonstrate that they are receiving a fair return, and they have to revise or develop some policies to improve their management of concessioners. Deterioration in federal areas is widespread, and the existing infrastructure—approaching $200 billion in value—is increasingly run down; the cost of deferred maintenance in the national parks and forests alone is nearly $3 billion. The federal government has a huge investment in its recreation resources, and GAO believes that the government needs to ensure that it is fairly compensated for the use of its land, the visiting public is provided with healthy and safe services, and the nation’s recreation resources are adequately protected for future generations. National Park Service: Scope and Cost of America’s Industrial Heritage Project Need to Be Defined. GAO/RCED-93-134. May 14, 1993. ABSTRACT: America’s Industrial Heritage Project consists of several sites scattered throughout southwestern Pennsylvania that will explain how the region’s iron and steel, coal, and transportation industries contributed to the nation’s industrial growth. The project is expected to revitalize the region’s economic base through tourism. Much uncertainty exists, however, about the development and the completion of the project. 
Although one estimate pegs the cost of completing the project at about $355 million, including $155 million in federal funds, this estimate lacks documentation and the final scope of the project has yet to be defined. Uncertainty also exists about the operation and maintenance of project sites on nonfederal land. Although some of the sites will be run by the National Park Service, other projects built on nonfederal land are to be run by nonfederal entities. Yet GAO was told that federal funds will be used for up to 5 years to run several projects on nonfederal land. Finally, it is unclear who will be responsible for managing, operating, and maintaining the projects. The Southwestern Pennsylvania Heritage Commission, part of the Interior Department, has been overseeing the project’s implementation, but the Commission’s term expires in November 1998. Although the Commission favors the establishment of a not-for-profit corporation that would run all the projects, it has yet to make a final choice among the options being considered. Forest Service: Little Assurance That Fair Market Value Fees Are Collected From Ski Areas. GAO/RCED-93-107. April 16, 1993. ABSTRACT: Ski operators on Forest Service land are required to pay the government fees that are based on fair market values. Although these ski operators had $737 million in gross sales in 1991, they paid the government only about $13.5 million in fees. GAO concludes that the current fee system does not ensure that the Forest Service receives fair market values for the use of its land. When the graduated rate fee system was put into place more than 20 years ago, it was expected that rates would be adjusted periodically to reflect economic changes. Yet the rates by which fees are calculated have not been updated in more than two decades. The fee system developed by the ski industry also does not deliver fees that reflect fair market value. GAO agrees with the Forest Service that a simplified system is desirable. The goal of developing a simpler system, however, must be secondary to ensuring that fees are based on fair market value. Natural Resources Management: Issues to Be Considered by the Congress and Administration. GAO/T-RCED-93-5. February 2, 1993. ABSTRACT: This testimony discusses GAO’s December 1992 transition series report entitled Natural Resources Management Issues (GAO/OCG-93-17TR). During this era of budgetary constraints, Congress and the new administration face hard choices in how to protect the nation’s natural resources. Current funding is inadequate to handle the declining condition of the nation’s natural resources and related infrastructure on federal lands. A number of proposals to obtain a better return for the sale or use of natural resources have not succeeded, GAO believes, because (1) the full extent of the staffing and funding shortfalls facing federal natural resources management agencies has not been clearly articulated and (2) the proposals and the dialogue surrounding them have not focused on the need to encourage uses that are compatible with sustaining the nation’s natural resources for future generations. Natural Resources Management Issues. GAO/OCG-93-17TR. December 1992. GAO summarized this report in testimony before Congress; see: Major Issues Facing a New Congress and a New Administration, by Charles A. Bowsher, Comptroller General of the United States, before the Senate Committee on Governmental Affairs. GAO/T-OCG-93-1, Jan. 8, 1993 (30 pages). National Parks: Issues Involved in the Sale of the Yosemite National Park Concessioner. GAO/RCED-92-232. 
September 10, 1992. ABSTRACT: In response to the Department of the Interior’s concerns about foreign ownership of the major concessioner service in Yosemite National Park, the Yosemite Park and Curry Company, owned by a Japanese company, was sold to an American purchaser for $61.5 million. The new concessioner should have enough revenue to pay the promissory note for the purchase price, cover operating expenses, and make a reasonable profit. GAO has not, however, reviewed the assumptions that the Park Service used to calculate its cash flow projections. Although the new concessioner will be required to implement some portion of the 1980 Yosemite General Management Plan, which seeks to reduce congestion in the park, the Park Service has not yet finalized what those requirements or the associated cost will be. Finally, the transfer of interests in the agreement between the Curry Company’s parent firm—MCA, Inc.—and the middleman in the sale—the National Park Foundation—does not constitute a gift to the Foundation. Accordingly, the Foundation’s participation in the agreement is unauthorized. Additionally, the Foundation’s involvement appears to have been unnecessary to completing the transaction, since Interior is authorized to enter into such transactions directly. The Foundation appears to have been acting on Interior’s behalf. Arlington National Cemetery: Improvements to the Superintendent’s Lodge. GAO/RCED-92-208. August 13, 1992. authority of the superintendent to approve expenditures. The Army is also reassessing the amount that the superintendent will be required to pay for rent, utilities, and other services. National Park Service: Policies and Practices for Determining Concessioners’ Building Use Fees. GAO/T-RCED-92-66. May 21, 1992. ABSTRACT: While national park concessioners using federally owned facilities—including lodges, restaurants, and horse corrals—report gross revenues of up to tens of millions of dollars, many only pay a pittance for use of these properties. Poor management by the National Park Service and lack of data, however, make it impossible to determine whether the government is getting a fair return on the use of its facilities. A lack of policy guidance has led to inconsistent determinations of building use fees. Furthermore, a lack of complete and centralized data has left the Park Service in a quandary as to how many concession agreements contain the assignment of federally owned facilities; how many federally owned facilities are used by concessioners; and what other agreements have been reached on the repair, maintenance, and improvement of these facilities. As a result of this lack of data, the total compensation for the use of federally owned facilities is unknown. Federal Lands: Reasons for and Effects of Inadequate Public Access. GAO/RCED-92-116BR. April 14, 1992. ABSTRACT: The public’s access to more than 50 million acres, or 14 percent, of the land managed by the Forest Service and the Bureau of Land Management (BLM) is considered inadequate by agency managers. Private landowners’ unwillingness to grant public access to their land has increased during the past decade as the public’s use of public land has increased. Factors contributing to inadequate access are private landowners’ concerns about vandalism and potential liability or their desire for privacy and exclusive personal use. 
To resolve the public access problem, the Forest Service and BLM can acquire either all rights and interests associated with the land (fee simple acquisition) or perpetual easements (limited controls over the land that are binding on succeeding owners). In fiscal years 1989-91, the Forest Service and BLM acquired permanent, legal public access to about 4.5 million acres of federal land. As of October 1991, the two agencies had about 3,300 actions pending to open another 9.3 million acres of federal land to the public. Federal Lands: Oversight of Long-Term Concessioners. GAO/RCED-92-128BR. March 20, 1992. ABSTRACT: Nationwide, the federal government has about 1,500 long-term agreements (five years or more) with private concessioners for recreation services ranging from ski resort operations to raft trips. These concessioners operate on land managed by six federal agencies. This report examines the (1) concessioners’ overall performance; (2) concessioners’ compliance with federal, state, and local health and safety standards; and (3) reasonableness of prices concessioners charge the public for services. Federal Lands: Status of Land Transactions Under Four Federal Acts. GAO/RCED-92-70BR. December 3, 1991. ABSTRACT: This briefing report reviews the status of federal land transactions authorized under four acts—the El Malpais National Monument and National Conservation Area; the Nevada-Florida Land Exchange Authorization Act of 1988; the Apex Project, Nevada Land Transfer and Authorization Act of 1989; and the Targhee National Forest Land Exchange Act. GAO discusses (1) actions taken to complete the land transactions and (2) the use and development of the lands transferred out of federal ownership. National Park Service: Status of Development at the Steamtown National Historic Site. GAO/T-RCED-92-6. October 22, 1991. ABSTRACT: The Steamtown National Historic Site, established in 1986, encompasses about 63 acres of land that formerly comprised a rail yard in Scranton, Pennsylvania. The site is intended to provide year-round facilities and programs to educate visitors about the role of steam railroads in the expansion of the United States. This testimony discusses the status of the site and notes that various uncertainties raise questions about (1) the reliability of the $63 million estimated cost to complete site development, (2) identifying and disposing of hazardous and toxic wastes at the site, and (3) the feasibility of the planned rail excursion lines to surrounding locations. National Park Service: Selected Visitor and Cost Data. GAO/RCED-91-247FS. September 30, 1991. ABSTRACT: This fact sheet provides information on aspects of National Park Service operations. GAO (1) presents data on visitor accidents and fatalities and criminal offenses reported at the parks; (2) discusses the Park Service’s hazardous waste program; and (3) provides a list of parks created since 1970 that have, or are projected to have, land acquisition and construction appropriations exceeding $40 million. National Park Service: Cost Estimates for Two Proposed Park Facilities in Texas. GAO/RCED-91-218BR. September 3, 1991. ABSTRACT: This briefing report analyzes the cost estimates for the proposed visitor center at San Antonio Missions National Historical Park in San Antonio, Texas, and a headquarters/visitor center and separate maintenance facility at Big Thicket National Preserve, which is north of Beaumont, Texas. The Park Service’s initial cost estimate for the San Antonio facility is $8.63 million. 
To date, about $200,000 has been spent on planning and design work. The Park Service’s initial cost estimate for the Big Thicket facilities is $8.41 million. So far, about $564,000 has been spent on planning, and about $3.1 million has been spent on the maintenance facility. Bureau of Reclamation: Federal Interests Not Adequately Protected in Land-Use Agreements. GAO/RCED-91-174. July 11, 1991. used to set such fees; (3) Bureau instructions governing land-use agreements do not address the issue of public access or public-use fees; (4) Scottsdale did not compensate the Bureau for the use of its lands because local Bureau officials decided that no fee compensation was warranted under the agreements, since leasing the lands supported the Bureau’s goal of providing its land for recreation; and (5) the Bureau had authority to enter into agreements to promote the development of land in the public interest for recreation, but typically negotiated such agreements at the regional or local level and did not maintain centralized information, making it difficult to determine whether similar agreements were pending. Bureau of Reclamation: Land-Use Agreements With the City of Scottsdale, Arizona. GAO/T-RCED-91-74. July 11, 1991. BACKGROUND: GAO discussed two recreation land-use agreements between the Bureau of Reclamation and Scottsdale, Arizona, to determine whether the: (1) agreement terms and conditions are consistent with federal law; (2) approved activities are consistent with applicable agency policies and guidance; and (3) potential exists for the Bureau to enter into similar agreements elsewhere. GAO found that: (1) while the agreements themselves were not contrary to federal law, the absence of comprehensive oversight policies and guidance led local officials to base many key agreement decisions on personal judgment; (2) the law neither requires nor precludes federal government compensation for the use of its lands; (3) Scottsdale did not compensate the government for the use of its lands because local Bureau officials determined that leasing the lands supported the Bureau’s goal of providing its lands for recreation; (4) since Bureau guidance governing land-use agreements does not address the issue of public access, local Bureau officials approved a reservation policy at a golf complex that limits public use; (5) the Bureau has not developed guidance on establishing public-use fees for recreational activities on its lands; and (6) although similar agreements are being negotiated at the regional or local level, the Bureau does not maintain centralized information, making it difficult to determine whether similar agreements were pending. Federal Lands: Improvements Needed in Managing Concessioners. GAO/RCED-91-163. June 11, 1991. (3) total return to the government from concession operations; and (4) federal recreation resources management practices of the National Park Service, Bureau of Land Management, U.S. Fish and Wildlife Service, Bureau of Reclamation, Forest Service, and Army Corps of Engineers. 
FINDINGS: GAO found that: (1) no single law authorizing concession operations existed; (2) none of the agencies maintained a complete database identifying the number and types of concession agreements; (3) the agencies could not determine total compensation to the federal government for the use of federal recreational resources, due to incomplete financial data and unreported nonfee considerations; (4) the agencies identified 11 different laws governing concession agreements and operations, many of which were agency-specific and allowed for broad discretion in establishing policies; (5) complete financial data were available for only 60 percent of over 9,000 concession agreements reported by the agencies; (6) some agencies permitted field offices to accept such nonfee compensation as capital improvements from concessioners, but the offices generally did not report such agreements to headquarters; (7) from those concessioners who reported complete financial data in 1989, the federal government received about $35 million in concession fees, with gross concession revenues of about $1.4 billion, representing an average return to the government of about 2 percent; and (8) various fee approaches by the six agencies resulted in concessioners paying different fees to operate similar activities. Forest Service: Difficult Choices Face the Future of the Recreation Program. GAO/RCED-91-115. April 15, 1991. conditions and resource needs; (5) funding could be increased through appropriations, although that could be difficult in an era of fiscal constraint and competing demands; (6) the Service would require legislative changes to impose higher fees; (7) increasing the use of volunteers and cost-share programs could increase funds, but not to the level of the resources needed; (8) in lieu of funding increases, the Service could still meet its current maintenance standards if it reduced the number of sites and areas to be developed and maintained, but that action could further strain existing sites and areas due to increased use; and (9) the Service could lower its development and maintenance standards to more closely match the resources available, but that could result in providing the public with a lower-quality recreational experience. Budget Issues: Funding Alternatives for Fire-Fighting Activities at USDA and Interior. GAO/AFMD-91-45. April 4, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the budgetary treatment of fire-fighting activities by the Departments of Agriculture (USDA) and the Interior during fiscal years (FY) 1988 through 1990, focusing on: (1) alternative methods of funding fire-fighting activities; and (2) budget approaches for funding both emergency and nonemergency fire-fighting activities. 
FINDINGS: GAO found that: (1) the agencies’ budgetary treatment of fire activities improved between FY 1988 and FY 1990; (2) in 1990, Congress established a separate account to fund fire-fighting activities in both USDA and Interior; (3) in 1990, both agencies requested greater appropriations than in previous years, and received amounts from Congress that came closer to meeting actual fire-fighting needs; (4) USDA and Interior began to use consistent terms to categorize and track various fire costs to better determine whether certain costs applied to emergency or nonemergency activities; (5) despite such improvements, the agencies’ continued use of transfer authority to fund emergency activities created difficulties, since the agencies could not anticipate such transfers in making budget estimates; (6) such transfers allowed the agencies to use funds originally intended for emergency fire activities to support nonemergency activities; and (7) alternative methods for funding emergency fire activities within the budget included providing annual or periodic appropriations for emergency and nonemergency activities. Recreation Concessioners Operating on Federal Lands. GAO/T-RCED-91-16. March 21, 1991. BACKGROUND: GAO discussed the issue of concession-operated recreation services on federal lands, concerning the National Park Service (NPS), Bureau of Land Management, Fish and Wildlife Service, Bureau of Reclamation, Forest Service, and the Army Corps of Engineers. GAO noted that: (1) since there was no single law authorizing concession operations for all six agencies, the agencies’ policies regarding the types, terms, and fees of such agreements were significantly different and inconsistent; (2) the total number of concession agreements was not known or documented by any of the six agencies; (3) due to incomplete data and unreported non-fee compensation, the total amount of federal compensation for the use of its recreational resources was not known; (4) the six agencies received about $32 million in fees from gross concession revenues of $1.5 billion, and NPS and Forest Service concession operations accounted for about 90 percent of the revenue; (5) because the laws did not specify the calculation of fees to the government, the agencies developed their own varying approaches; and (6) those various calculation approaches resulted in concessioners paying different fees for similar activities. GAO believes that, to more effectively manage their concession programs, the six agencies need to develop and analyze complete data on their concession agreements, including data on the financial worth of non-fee compensation. Changes Needed in the Forest Service’s Recreation Program. GAO/T-RCED-91-10. February 26, 1991. consider funding levels, the number of sites for development and maintenance, and the revision of maintenance standards to develop its strategy. Forest Service Wilderness Management Funding. GAO/T-RCED-91-11. February 26, 1991. BACKGROUND: GAO discussed the Forest Service’s funding for managing wilderness areas. 
GAO noted that: (1) the Service reprogrammed funds Congress designated for wilderness management to other programs without the appropriate congressional approval; (2) of the $44.9 million Congress designated for wilderness management in fiscal years (FY) 1988 through 1990, the Service only spent $28.3 million on wilderness activities; (3) from FY 1989 through 1990, Service wilderness management expenditures decreased by 4 percent despite a 20-percent increase in funds designated for such purposes; (4) the Service could not specifically identify the activities for which it used most of the reprogrammed funds; (5) as of January 31, 1991, the Service planned to expend only $9.7 million of the $22.6 million designated for wilderness management; (6) in FY 1988 through 1990, the Service spent $17.8 million on such activities as measuring and controlling recreational use, constructing and maintaining signs and facilities, overseeing livestock grazing, and administering outfitter and permit programs; (7) in FY 1988 through 1990, the Service spent $10.5 million for headquarters oversight, regional office planning, and forest supervision; and (8) Service district offices spent substantially less on wilderness management in 1990 than in 1989. Financial Management: National Park Service Implements New Accounting System. GAO/AFMD-91-10. February 13, 1991. that its system’s costs do not increase beyond its needs for effective operation, NPS uses sufficient staff to operate the system, and system documentation is adequate. Parks and Recreation: Resource Limitations Affect Condition of Forest Service Recreation Sites. GAO/RCED-91-48. January 15, 1991. BACKGROUND: Pursuant to a congressional request, GAO assessed the Forest Service’s maintenance of developed recreation sites, focusing on the: (1) extent and causes of the maintenance and reconstruction backlogs; (2) Service’s site inventory system; and (3) effects of resource limitations on site maintenance. 
FINDINGS: GAO found that: (1) aging facilities, increased public use, and public demand for new or modernized facilities contributed to the maintenance and reconstruction backlogs at developed recreation sites; (2) funding and staffing levels failed to adequately meet daily operation and maintenance needs; (3) the deferral of needed maintenance resulted in such health and safety hazards as contaminated drinking water, disintegrating boat ramps, and unstable stairs and bridges; (4) without routine maintenance, the environmental damage caused by natural forces and human use and vandalism could accelerate site deterioration; (5) the Forest Service lacked a reliable system to monitor or report site maintenance and reconstruction needs; (6) the extent to which districts documented site conditions and backlog data varied widely, with 12 of the 20 districts unable to provide current and accurate backlog data; (7) the Service planned to implement a new recreation information management system to collect backlog data in 1991, but the new system’s usefulness was questionable because of the sources and types of data intended for the system; (8) between 1986 and 1989, the Service closed 4 percent of the 12,915 sites that existed in 1986, but total recreation site capacity increased between 1972 and 1987; (9) resource limitations contributed to reduced or eliminated services and reduced quality of the recreational experience; and (10) to compensate for limited funds and staff, the districts used such other means as volunteer and cost-share programs to help operate and maintain developed recreation sites. Estimated Costs To Recover Protected Species. GAO/RCED-96-34R. December 21, 1995. BACKGROUND: Pursuant to a congressional request, GAO provided information on species protected under the Endangered Species Act, focusing on the: (1) costs and time needed to recover selected species; and (2) Fish and Wildlife Service’s (FWS) and the National Marine Fisheries Service’s (NMFS) species recovery plans. GAO noted that: (1) the 58 species recovery plans reviewed contained cost estimates that were not based on rigorous scientific analyses; (2) total cost estimates ranged from $145,000 to $153.8 million, and initial-year cost estimates ranged from $57,000 to $49.1 million; (3) cost estimates for high-priority recovery actions varied widely; (4) while FWS and NMFS expect to achieve their species recovery goals after the year 2000, one recovery plan is expected to take more than 100 years; (5) other federal agencies, state and local governments, and certain private parties are expected to participate in many FWS and NMFS recovery actions; (6) FWS and NMFS officials believe that recovery cost estimates alert various governmental and private entities to the possible range of costs and tasks needed for species recovery; (7) recovery cost estimates include actions that may not be taken because of a lack of funding or are no longer needed; and (8) FWS and NMFS believe that high-priority species will require more expenditures and that the estimated recovery costs contained in the 58 plans reviewed are not representative of the cost estimates contained in all approved recovery plans. Wildlife Protection: Fish and Wildlife Service’s Inspection Program Needs Strengthening. GAO/RCED-95-8. December 29, 1994. ABSTRACT: Growing demand throughout the world for wildlife and wildlife parts, ranging from rhino horns to bear gall bladders, now threatens some wildlife populations. 
Although the full extent of illegal trade is unknown, the value of such trade into and out of the United States is estimated at up to $250 million annually. Despite recent increases in the Fish and Wildlife Service’s (FWS) wildlife inspection program, the program has had difficulty in accomplishing its mission of monitoring wildlife and intercepting wildlife trade. Given current budget constraints and downsizing within the federal government, increases in program funding are unlikely. GAO raises questions about the program’s efficiency and effectiveness. The passage of the North American Free Trade Agreement is likely to increase wildlife trade among the countries that are party to the agreement—the United States, Canada, and Mexico. The expected rise in trade will increase the workload of FWS inspectors, who are already stretched thin along the U.S. borders. Wildlife inspectors, federal agency officials, and conservation and trade groups cited advantages and disadvantages to transferring FWS’ wildlife inspection program to the Customs Service. Endangered Species Act: Information on Species Protection on Nonfederal Lands. GAO/RCED-95-16. December 20, 1994. ABSTRACT: Congress is considering reauthorization of the Endangered Species Act. GAO was asked to obtain information on the efforts of the Fish and Wildlife Service to protect species on nonfederal lands. Most of the species protected under the Act have the major share of their habitat on nonfederal lands. Specifically, of the 781 listed species for which the Fish and Wildlife Service was responsible as of May 1993, 712 (or over 90 percent) have habitat on nonfederal lands, and of these, 516 have over 60 percent of their total habitat on nonfederal lands. Two processes authorized under the Act have addressed potential conflicts between the effort to protect species and land use activities on nonfederal lands. The implementation of these processes has resulted in nonfederal landowners altering their planned or ongoing activities in various ways to minimize and/or mitigate their potential impact on endangered species. In addition, the Fish and Wildlife Service and others have initiated legal action to protect species. National Wildlife Refuge System: Contributions Being Made to Endangered Species Recovery. GAO/RCED-95-7. November 14, 1994. ABSTRACT: Of the nearly 900 species listed under the Endangered Species Act, one quarter can be found on national wildlife refuges. These listed species include plants, birds, and mammals. Although a significant portion of the current habitat for 94 listed species is on 66 wildlife refuges, many other listed species use refuge lands on a temporary basis for breeding or migratory rest stops. Refuges and refuge staff contribute to the protection and the recovery of listed species in several ways. First, the refuges themselves represent about 91 million acres of secure habitat, including more than 310,000 acres that have been acquired by the Service specifically for the protection of listed species. Second, refuge staff are taking steps to protect and recover listed species. Third, refuge staff, by identifying specific actions that can help a species recover, help to develop recovery plans that the Fish and Wildlife Service requires for listed species. Funding limitations constrain efforts to manage wildlife refuges. 
Two 1993 Interior Department reports found that available funding was not enough to meet established objectives for refuges because the level of funding had not kept pace with the rising costs of managing existing refuges. California Fire Response. GAO/RCED-94-289R. August 31, 1994. BACKGROUND: Pursuant to a congressional request, GAO provided information on the late 1993 California fires, focusing on: (1) federal airtankers’ response to the fires; (2) the adequacy of funding for the Soil Conservation Service’s Emergency Watershed Protection Program to mitigate the damage from the fires; and (3) the use of California’s FIRESCOPE Program as a national model for disaster response. GAO noted that: (1) firefighters used 39 federal airtankers to suppress the fires; (2) the California National Guard responded within the 24-hour readiness requirement; (3) the Economy Act provided sufficient flexibility for the use of federal funds to activate modular airborne firefighting systems, since commercial resources could not be provided in a timely manner; (4) the Emergency Watershed Protection Program appeared to provide sufficient funding for erosion prevention projects; (5) nonpriority projects are planned for completion by December 1994; (6) California has continually worked with other government entities to develop and implement well-defined emergency response procedures for recurring wildfires and has expanded the FIRESCOPE program to respond to other natural and manmade disasters; and (7) the Federal Emergency Management Agency and other state and local agencies already use FIRESCOPE as a model for disaster response. Endangered Species: Federal Actions to Protect Sacramento River Salmon. GAO/RCED-94-243. August 15, 1994. ABSTRACT: During the past 15 years, the population of winter-run chinook salmon returning to spawn in the Sacramento River has declined by 99 percent. The salmon was classified as an endangered species in January 1994. As a result of this listing, the National Marine Fisheries Service must advise federal agencies on how to modify actions that could harm the salmon and must enforce the Endangered Species Act’s provisions prohibiting the “taking” of salmon. This report identifies major actions that the Service has taken to protect the salmon. These actions affected the Central Valley Project and nonfederal irrigation districts that divert water from the Sacramento River. Research Fleet Modernization: NOAA Needs to Consider Alternatives to the Acquisition of New Vessels. GAO/RCED-94-170. August 3, 1994. ABSTRACT: The National Oceanic and Atmospheric Administration (NOAA) in the Commerce Department operates a fleet of 18 ships that supports its programs in fisheries and oceanographic research, and hydrographic charting and mapping. Because the fleet is old and technologically obsolete, NOAA has concluded that fleet replacement and modernization are critical to supporting its mission requirements. In this report on the cost-efficiency, accounting, and operating practices of NOAA vessels compared with other federal and private research vessels, GAO found that NOAA has generally agreed with previous studies that it should experiment with contracting and chartering the services of private vessels as an alternative to acquiring new ships. NOAA’s current fleet modernization plan, however, focuses on the acquisition of new vessels and does not fully consider the role that contracted and chartered vessels could play. 
Because NOAA does not have the data it needs to adequately assess whether use of private ships could meet its needs, the agency has no assurance that its fleet modernization plans are the most cost-effective means of meeting future program requirements. Endangered Species Act: Impact of Species Protection Efforts on the 1993 California Fire. GAO/RCED-94-224. July 8, 1994. ABSTRACT: In October 1993, a wildfire near Riverside, California, raged over about 25,000 acres—an area more than one-half the size of the District of Columbia. The wildfire destroyed 29 homes. Some homeowners later alleged that the loss of some homes was caused by the Interior Department’s regulations protecting the Stephens’ kangaroo rat, an endangered species. Specifically, the homeowners claimed that prohibitions against “disking” for weed abatement—an annual process of reducing the amount of vegetation around homes to protect homes from wildfires—prevented them from saving their property. This report reviews (1) the development and application of the disking prohibition; (2) the nature of the fire and the resulting damage to homes; (3) the relationship, if any, between the disking prohibition and the loss of homes; and (4) any developments on the disking prohibition that have occurred since the fire. Pacific Whiting Harvest: Controversy Surrounding 1993 Allocation Between Processing Sectors. GAO/RCED-94-122. May 10, 1994. ABSTRACT: The 1993 Pacific whiting harvest was controversial. The Department of Commerce rejected the Pacific Fishery Management Council’s proposed allocation of the whiting harvest between the shoreside and at-sea processing sectors. The Council had proposed that up to 74 percent of the 1993 harvest be allocated to those fishing vessels delivering their catch to shoreside processors and that the rest be made available to vessels delivering their catch to at-sea processors. After much deliberation, Commerce—one day before the opening of the 1993 fishing season—approved an allocation of 30 percent to the shoreside sector and 70 percent to the at-sea sector. GAO concludes that the allocation decision for the 1993 Pacific whiting harvest was made in accordance with federal agency decision-making procedures and regulations. Commerce rejected the Council’s recommendation because of inadequate support. The timing of the decision, which differed little from the timing of the 1992 decision, was the result of the considerable time that federal officials spent deliberating the Council’s proposed shift in the 1993 allocation between the two processing sectors. Ocean Research Vessels: NOAA Fleet Modernization Plan. GAO/T-RCED-94-52. October 21, 1993. ABSTRACT: The National Oceanic and Atmospheric Administration’s (NOAA) $1.9 billion fleet modernization plan calls for acquiring 24 new or refurbished vessels during a 15-year period. Several reports, including those from GAO and the Department of Commerce, have encouraged NOAA to use more private sector ship services to cut costs. So far, however, NOAA has used contracting on a very limited basis, and its fleet modernization plan makes few provisions for vessel contracting. NOAA needs to experiment with contracting and leasing options to determine whether the private sector can effectively meet NOAA’s mission requirements. In experimenting with contracting, NOAA will need to grant contractors flexibility in how they do the work so that NOAA obtains the cost and operational data it needs to determine the extent to which contracting can meet mission needs. 
Endangered Species: Public Comments Received on Proposed Listings. GAO/RCED-93-220BR. September 30, 1993. ABSTRACT: One issue likely to be debated during the reauthorization of the Endangered Species Act is whether scientific peer review is needed for decisions to list species as endangered or threatened under the act. The act requires that these decisions be based on the best available scientific and commercial data, and peer review has been suggested as a way to ensure this. This briefing report provides information on the public comments that were provided in response to species being proposed for listing, the extent and nature of questions raised about the biological basis for the proposed listings, and the frequency of public hearings on proposed listings. GAO also discusses the number of petitions to list, delist, or reclassify species that the Fish and Wildlife Service found not to be warranted. Protected Species: Marine Mammals’ Predation of Varieties of Fish. GAO/RCED-93-204. September 10, 1993. ABSTRACT: According to government officials, the hunting of steelhead salmon by California sea lions at Ballard Locks in Seattle, Washington, is the only documented case in which predation by one species is threatening the existence of another, although federal officials suspect that the adverse predation of fish by protected seals may also be occurring at the Columbia River and in the state of Maine. Efforts to counteract the predation at Ballard Locks, including relocating the sea lions and driving them away from the locks, have been unsuccessful. Other possible options include capturing and holding the sea lions during the steelhead’s migration and making structural changes to the locks and the accompanying spillway. The National Marine Fisheries Service has considered but rejected the possibility of controlling the sea lion population through lethal means. Natural Resources Restoration: Use of Exxon Valdez Oil Spill Settlement Funds. GAO/RCED-93-206BR. August 20, 1993. ABSTRACT: Under the civil settlement stemming from the 1989 grounding of the supertanker Exxon Valdez—the largest oil spill in U.S. history—Exxon agreed to pay a total of $900 million in 11 annual payments. Under the criminal settlement, Exxon was fined $150 million and required to pay $50 million each to the federal government and the state of Alaska to help restore areas damaged by the spill. This briefing report provides information on the amount and distribution of money that Exxon has paid through December 1992 under the settlements. GAO also discusses issues surrounding the functioning of the Trustee Council, which was created to coordinate damage assessments and to seek funds from responsible parties to clean up natural resources. GAO concludes that Alaska and federal trustees managing the oil spill settlement funds will have to address several issues before there can be confidence that the money is being spent for natural resources restoration and other intended purposes. Endangered Species: Factors Associated With Delayed Listing Decisions. GAO/RCED-93-152. August 5, 1993. ABSTRACT: Delays by the Fish and Wildlife Service (FWS) in listing six species as either threatened or endangered were due to several factors, the most common of which were FWS concerns about the sufficiency of biological data and concerns about potential economic and other impacts. GAO found that the conservation agreements for the Bruneau Hot Springsnail and the Jemez Mountains salamander were inconsistent with FWS policy and guidance. 
Whether a conservation agreement is an appropriate means of protecting species that would otherwise warrant listing is a decision for FWS to make. On the basis of its findings, however, GAO concludes that a conservation agreement, if it is to be an effective alternative to listing, should (1) address known threats to a species that would otherwise warrant listing, (2) provide for monitoring to ensure that the agreement’s mechanisms for protecting the species are properly and fully implemented, and (3) be implemented in a timely manner. Species Protection: National Marine Fisheries Service Enforcement Efforts. GAO/RCED-93-127BR. June 21, 1993. ABSTRACT: During 1991 congressional hearings, shrimp fishermen from the Gulf of Mexico complained that federal agencies were overly aggressive in enforcing regulations requiring turtle excluder devices, which create a hole in shrimp nets allowing trapped turtles to escape. This briefing report examines how enforcement practices under the Endangered Species Act compare with the enforcement of other fisheries and marine protection laws. GAO presents statistical data on the level of federal agencies’ enforcement efforts and penalties assessed to enforce four major fisheries and marine species protection laws in the southeastern United States. Wetlands Protection: The Scope of the Section 404 Program Remains Uncertain. GAO/RCED-93-26. April 6, 1993. ABSTRACT: The environmental benefits of swamps, marshes, and bogs—long considered fit only for draining and filling—are increasingly recognized today. Wetlands provide vital habitat for wildlife as well as improve water quality and control soil erosion. How to protect these areas has become a major regulatory issue in the 1990s. Under the Section 404 program, the U.S. Army Corps of Engineers is in charge of granting permits to anyone wanting to dredge and fill in navigable waters, including wetlands. GAO made several suggestions in a July 1988 report (GAO/RCED-88-110) on how the Corps could improve program management. This report discusses (1) the extent to which the Corps has acted on GAO’s recommendations, (2) legislative and other developments that have occurred since the 1988 report that affect the program, and (3) the extent to which budgetary constraints have affected program administration. Endangered Species: Potential Economic Costs of Further Protection for Columbia River Salmon. GAO/RCED-93-41. February 23, 1993. ABSTRACT: Despite federal and regional outlays of more than $1.3 billion to improve salmon runs in the Columbia River Basin, certain salmon stocks—especially those that spawn far upstream in the Snake River and its tributaries—have reached critically low levels. As a result, the Snake River sockeye salmon was designated an endangered species in 1991, while the Snake River fall chinook and spring/summer chinook were listed as threatened species the following year. In looking into the potential economic costs and effectiveness of efforts to protect these salmon stocks, GAO found that a preliminary estimate of lost jobs due to salmon protection will be unavailable until mid-1993 at the earliest. However, preliminary estimates of the value of goods and services foregone—a measure of net economic costs—suggest that the economic costs of salmon protection may range from $2 million to as high as $211 million annually. 
According to the more than 300 agencies and organizations GAO contacted, no studies address how effective any of the proposed protection measures may be in increasing the number of adult salmon returning to spawn. Past evaluations of measures to maintain and improve salmon runs either did not address the issue or were inconclusive. Wildlife Management: Many Issues Unresolved in Yellowstone Bison-Cattle Brucellosis Conflict. GAO/RCED-93-2. October 21, 1992. ABSTRACT: Montana succeeded in eradicating brucellosis from its cattle herds in 1985, allowing Montana ranchers to ship their cattle to other states without first testing them for the disease. Cattlemen are concerned about the possibility that brucellosis, a contagious disease that can cause abortions and infertility in domestic cattle, may be spread from Yellowstone Park’s free-roaming bison and elk herds to livestock grazing along the park borders, thereby jeopardizing Montana’s ability to freely transport cattle across state lines. Although its policy is not to restrict the movement of the park’s bison and elk, the National Park Service has, in an attempt to reduce the risk of brucellosis transmission, killed more than 10,000 bison that have wandered out of the park in recent years. This report provides information on the (1) scientific evidence that brucellosis can be transmitted from bison and elk to domestic cattle, (2) economic damage that might arise from such transmission, and (3) management alternatives for preventing or reducing the likelihood of such transmission. Natural Resources Protection: Reelfoot Lake Lease Terms Met, but Lake Continues to Deteriorate. GAO/RCED-92-99. August 17, 1992. ABSTRACT: Under an agreement signed in 1941, the Fish and Wildlife Service assumed responsibility for maintaining Reelfoot Lake, the largest natural lake in Tennessee, including controlling siltation and the growth of undesirable vegetation. Because the lake, which is used extensively by fishermen, boaters, and wildlife enthusiasts, captures drainage from adjacent eroding cropland, it has been silting up over the years and is increasingly swampy in areas; today, more than 40 percent of the lake is three feet deep or less. This report (1) discusses the extent to which the Fish and Wildlife Service has complied with terms of the lease agreement and (2) identifies the main causes of the lake’s deterioration, options for improving the lake’s condition, and barriers to implementing these options. Coastal Barriers: Development Occurring Despite Prohibition Against Federal Assistance. GAO/RCED-92-115. July 17, 1992. ABSTRACT: Coastal islands buffer the U.S. mainland from hurricanes and are an important source of habitat for fish and wildlife, including some endangered species. More and more islands, despite being highly unstable, are being developed because of their natural beauty and the dwindling supply of beachfront property. This development has also been spurred by the availability of national flood insurance and other federal assistance. Congress, in an effort to cut down on environmental damage and the government’s exposure to losses from storm damage, passed legislation a decade ago that prohibits new federal financial assistance on most coastal islands. 
Although this legislation has discouraged development on some coastal islands and other islands are unlikely to be developed any time soon because they are either inaccessible or unsuitable for building, significant development has occurred since 1982 on some attractive and accessible islands. Extensive new development can be expected in these and similar areas in the future. Most federal agencies have not provided new financial assistance for the coastal islands. Two exceptions involve the Federal Emergency Management Agency, which underwrote flood insurance obtained by ineligible property owners, and the Air Force, which granted an easement on land within Florida’s Eglin Air Force Base at no cost to a quasi-state agency that wanted to build a bridge to one of the coastal islands. GAO also discovered that permits issued by agencies such as the U.S. Army Corps of Engineers have allowed development on certain coastal islands. Endangered Species: Contract Funding For Selected Species. GAO/RCED-92-218. July 17, 1992. ABSTRACT: GAO looked at whether individuals or groups that petition the Fish and Wildlife Service to put plants and animals on the endangered species list later receive agency funds to study those same plants and animals. According to Fish and Wildlife Service officials, agency contracting policies do not prohibit petitioners from receiving Endangered Species Act funding to study the same species for which they have submitted petitions. Of the 228 contracts for studying endangered species that GAO examined, 38 had been awarded to study the same species covered by the petitions. But in only one case was a petitioner associated with a Fish and Wildlife Service award. In this instance, the principal investigator for the organization receiving funding was the same person who had petitioned for the species to be placed on the endangered species list. Endangered Species: Past Actions Taken to Assist Columbia River Salmon. GAO/RCED-92-173BR. July 13, 1992. ABSTRACT: Concerns about declining populations of wild salmon prompted the National Marine Fisheries Service to list several kinds of Snake River salmon as either endangered or threatened species. This briefing report examines past efforts to reverse declines in salmon runs. GAO discusses the actions, and their costs, that federal agencies and organizations in the Pacific Northwest have taken to maintain and restore runs of salmon—both wild and hatchery-bred. GAO also discusses the results of studies and research on the effectiveness of the salmon recovery measures undertaken. Hydroelectric Dams: Proposed Legislation to Restore Elwha River Ecosystem and Fisheries. GAO/T-RCED-92-80. July 9, 1992. BACKGROUND: GAO discussed the Elwha River Ecosystem and Fisheries Restoration Act, focusing on: (1) the Federal Energy Regulatory Commission’s (FERC) authority to license dams on the Elwha River; (2) the Department of the Interior’s position on removal of the dams to restore fisheries; and (3) who should pay the costs if the dams are removed. 
GAO noted that: (1) the Glines Canyon Dam is within the boundaries of a national park, where FERC does not have the authority to license dams; (2) Interior, FERC, and the National Marine Fisheries Service believe that removing both dams offers the best prospects for restoring the Elwha River fisheries and their surrounding ecosystem; and (3) the cost of removing the dams should be allocated among parties in proportion to the benefits they have received from the dams or will receive from the restoration of the river. Hydroelectric Dams: Interior Favors Removing Elwha River Dams, but Who Should Pay Is Undecided. GAO/RCED-92-168. June 5, 1992. ABSTRACT: The Department of the Interior’s position is that in order to restore fisheries in the Elwha River, two dams will have to be removed. As of May 1992, Interior has not worked out with the Federal Energy Regulatory Commission whether the dams should be removed and who should pay for the cost of removing them. Proposed legislation before Congress would involve federal acquisition of the two dams and subsequent comprehensive analysis of the most effective and reliable alternative for fully restoring, enhancing, and protecting the ecosystem, fisheries, and wildlife of the Elwha River basin. GAO believes that a better understanding of the estimated costs and potential liabilities would provide for more informed public policy decisions on whether and how best to restore the ecosystem and fisheries of the Elwha River and who should be responsible for paying the costs of restoration. GAO summarized this report in testimony before Congress; see: Hydroelectric Dams: Proposed Legislation to Restore Elwha River Ecosystem and Fisheries, by Keith O. Fultz, Director of Planning and Reporting in the Resources, Community, and Economic Development Division, before subcommittees of the House Committee on Merchant Marine and Fisheries. GAO/T-RCED-92-80, July 9, 1992 (10 pages). Endangered Species Act: Types and Number of Implementing Actions. GAO/RCED-92-131BR. May 8, 1992. ABSTRACT: This briefing report examines how two federal agencies—the Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS)—have implemented the Endangered Species Act of 1973, which sets forth processes for protecting plants and animals. Habitat designation has taken place for less than 20 percent of the species listed as endangered. Agency officials doubt whether designating critical habitats provides much additional benefit for a species, and critical habitat designation is considered a low priority. During fiscal years 1987 through 1991, when other federal agencies asked FWS or NMFS to consider the effect of proposed actions such as construction on a listed species, the two agencies allowed such projects to proceed as planned more than 90 percent of the time. While more than 650 domestic species are on the endangered species list, 600 others are recognized by the agencies as potentially imperiled. At the present pace of listing, it will take FWS until 2006 to list these species as endangered or threatened. Compounding this problem are the estimated 3,000 additional species that may be threatened or endangered in the future. The agencies attribute their slowness to resource constraints. Great Lakes Fishery Commission: Actions Needed to Support an Expanded Program. GAO/NSIAD-92-108. March 9, 1992. 
ABSTRACT: Sea lampreys, eel-like parasites that prey on fish, are native to the Atlantic Ocean but gained entry to the Great Lakes through the Erie Canal in the late 19th century. In response to concerns about decimated fish stocks, the Great Lakes Fishery Commission was created in 1955 to check the sea lamprey population. This report discusses (1) whether the Commission, a joint U.S.-Canadian venture, uses an ecosystem management approach that considers the potential harmful effects of sea lamprey control efforts; (2) what progress the Commission has made in adopting nonchemical methods to control the sea lamprey; and (3) whether the Commission could effectively spend more funding on research for alternative control methods. Wildlife Protection: Enforcement of Federal Laws Could Be Strengthened. GAO/T-RCED-92-26. February 3, 1992. ABSTRACT: Federal statutes and international treaties give the Department of the Interior’s Fish and Wildlife Service adequate authority to protect wildlife. The Migratory Bird Treaty Act does not, however, give the Service the authority to conduct a search and seizure without a warrant, as do other laws protecting wildlife. GAO continues to believe that it would enhance the Service’s enforcement authority if the act were amended to provide such search and seizure authority. The Service investigates more than 10,000 suspected violations each year and maintains a conviction rate averaging more than 90 percent for cases prepared for prosecution. The agency cannot, however, investigate many more suspected violations or respond to state requests to participate in certain investigations because (1) it has a limited number of agents and (2) many of these agents are deskbound for months at a time due to insufficient operating funds. Further, the Service lacks readily available information on suspected violations and other enforcement activities that could help to justify needed resources. Although Interior is developing an information system capable of recording suspected crimes against wildlife, it also needs to (1) ensure that its agents report all known or suspected violations, whether they are investigated or not, and (2) document all state requests for assistance. This information should then be used to substantiate the resources the Service needs to carry out its law enforcement activities effectively. Natural Resources Damage Assessment: Information on Study of Seabirds Killed by Exxon Valdez Oil Spill. GAO/RCED-92-22. November 27, 1991. ABSTRACT: In the wake of the March 1989 Exxon Valdez oil spill in Alaska’s Prince William Sound, a federally funded study sought to estimate the number of seabirds killed as a result of the accident. This was one of more than 50 damage assessment studies that sought to determine the impact of the spill on natural resources and develop a restoration strategy. The most controversial aspect of the seabird study involved killing 219 seabirds, immersing them in oil, placing them in Prince William Sound, and tracking their drift patterns to discover the number of birds recovered versus the number lost at sea. This report provides information on (1) the request and approval of the seabird damage study and (2) the study’s methodology, which required killing more than 200 seabirds. Wetlands Overview: Federal and State Policies, Legislation, and Programs. GAO/RCED-92-79FS. November 22, 1991. ABSTRACT: In recent years, the values of wetlands—such as providing fish and wildlife habitat and abating erosion—have become better known.
Unfortunately, an estimated 50 percent of all wetlands in the lower 48 states have already been filled or drained, and another 290,000 acres are being lost annually to agriculture and development. This fact sheet provides an overview of federal and state wetlands-related policies, legislation, and programs. Wetlands Preservation: Easements Are Protecting Prairie Potholes but Some Improvements Are Possible. GAO/RCED-92-27. November 7, 1991. ABSTRACT: Wetlands protected under the Small Wetlands Acquisition Program are located mainly in the Prairie Pothole Region in the upper Middle West, including parts of Montana, the Dakotas, Iowa, and Minnesota. Prairie potholes are shallow, freshwater depressions and marshes that were created by glaciers thousands of years ago. Loss of such habitat is a major reason why populations of some duck species, such as mallards and pintails, have declined about 60 percent over the past 50 years. The Small Wetlands Acquisition Program has successfully helped preserve wetlands in the Prairie Pothole Region, primarily because the Fish and Wildlife Service has effectively enforced easements on wetlands. GAO believes that the program could be made even better if the Fish and Wildlife Service were to correct weaknesses in the (1) documentation of waterfowl’s use of wetlands under easement and (2) guidance involving the timeliness with which damaged wetlands are restored and the circumstances under which violators should be issued notices and assessed fines. Wilderness Management: Accountability for Forest Service Funds Needs Improvement. GAO/RCED-92-33. November 4, 1991. ABSTRACT: To help ensure that Forest Service wilderness areas are protected and maintained in their natural state, Congress increased funding for wilderness management by almost 80 percent during fiscal years 1989 through 1991. The Forest Service, however, diverted more than one-third of the $44.7 million designated for wilderness management to other activities. Of the $28.3 million spent on wilderness management, $10.5 million was used for management expenses—mainly salaries and administrative costs—at organizational levels above the district offices, with the remainder spent on wilderness management at the district level. The Forest Service reported that 112 of the 500 district offices managing wilderness areas saw cuts in funding for fiscal year 1990, including some offices that had earlier reported funding and staffing shortfalls. Contrary to congressional directives, the Forest Service reprogrammed these funds without seeking prior approval by the House Committee on Appropriations. The head of the Forest Service recently outlined several steps to ensure that (1) designated funds are spent as Congress intended, (2) the Committee’s reprogramming procedures are followed, and (3) there is greater accountability over funds designated for wilderness management. In addition, GAO suggests that the Forest Service refine its accounting for expenditures and establish output targets to improve accountability over expenditures of wilderness management funds and the performance of wilderness managers. Oil Reserve: Impact of NPR-1 Operations on Wildlife and Water Is Uncertain. GAO/RCED-91-129. August 1, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the basis for the disagreements between the Department of Energy (DOE) and its Argonne National Laboratory relating to Argonne’s development of a supplemental environmental impact statement (SEIS) for Naval Petroleum Reserve No. 
1 (NPR-1), focusing on: (1) the DOE Naval Petroleum Reserves-California (NPRC) and Argonne positions on NPR-1 impacts on endangered species and groundwater quality and how SEIS would discuss those uncertainties; and (2) NPR-1 compliance with environmental laws and regulations governing endangered species, wastewater disposal, and historic preservation activities. FINDINGS: GAO found that: (1) between 1981 and 1989, the number of foxes living free within the NPR-1 study area decreased from 164 to between 44 and 58; (2) Argonne concluded in a SEIS draft that NPR-1 operations could have contributed to the decline of foxes in that area; (3) NPRC and Argonne staffs disagreed about how SEIS should describe the effects of NPR-1 operations on endangered foxes and nearby groundwater, primarily due to a lack of definitive data; (4) in September 1990, NPRC notified Argonne that DOE would prepare the final SEIS, but it was unclear to what extent DOE would use Argonne’s data and views; (5) DOE and others were conducting research that could provide additional data on factors affecting the fox population and wastewater migration; (6) DOE has not ensured that NPR-1 operations comply with the Endangered Species Act and the National Historic Preservation Act’s regulations; (7) Argonne concluded in a June 1990 SEIS draft that NPR-1 operations violated California wastewater disposal requirements for sumping, but DOE believed that NPR-1 had not violated the requirements, and the state had not made a determination on that issue; (8) factors contributing to the noncompliance included NPRC officials’ lack of knowledge regarding environmental requirements, noncoordination with federal and state agencies having environmental responsibilities, and mismanagement, which could result in legal action, fines, or a temporary shutdown; and (9) NPRC is taking action to address the problems, but unless DOE improves its management controls, similar problems may continue to exist. National Forests: Funding Fish and Wildlife Projects. GAO/RCED-91-113. June 12, 1991. BACKGROUND: Pursuant to a congressional request, GAO provided information on funds spent by various sources for fish and wildlife activities on national forest lands. FINDINGS: GAO found that: (1) between October 1987 and June 1990, fish and wildlife activities involving Forest Service staff participation totaled over $202 million; (2) such activities included revegetation of streamside areas, fencing installation, and erosion control projects to maintain or improve fish and wildlife habitat or provide for the recovery of endangered species; (3) of the $202 million, $154.6 million came from congressional appropriations to the national forest system and the remaining $47.8 million came from such outside sources as state and local governments; (4) from fiscal year (FY) 1988 through FY 1989, outside funding for fish and wildlife activities directly involving Service staff increased from about $14.7 million to about $16.7 million and totaled about $16.4 million for the first 9 months of FY 1990; and (5) financial support from outside sources included $32.1 million in cost-sharing arrangements between the Service and outside sources, $15.7 million in work performed by the Service but paid for by outside sources, and $14.7 million for activities in which the Service was not involved. Wildlife Protection: Enforcement of Federal Laws Could Be Strengthened. GAO/RCED-91-44. April 26, 1991.
BACKGROUND: Pursuant to a congressional request, GAO reviewed whether: (1) federal statutes and international treaties provided sufficient authority to protect wildlife, particularly migratory waterfowl; and (2) the Department of the Interior’s Fish and Wildlife Service (FWS) adequately enforced those statutes and treaties. FINDINGS: GAO found that: (1) with the exception of the Migratory Bird Treaty Act and the Endangered Species Act, the 11 federal statutes and 5 international treaties provided sufficient enforcement authority for FWS; (2) the lack of warrantless search and seizure authority in the Migratory Bird Treaty Act hampered agents’ efforts to investigate suspected violations; (3) the issue of whether hybrid species were protected under the Endangered Species Act of 1973 presented enforcement problems, since the only way to conclusively prove an animal’s species was to destroy and examine it; (4) although new and amended legislation substantially increased FWS responsibilities for protecting species, the number of FWS special agents decreased by 9 percent; (5) due to insufficient funds, some special agents were deskbound and unable to perform their basic responsibilities for months at a time; (6) staffing and funding shortfalls resulted in the selective enforcement of wildlife protection legislation; (7) FWS lacked adequate information regarding the extent of suspected crimes it was unable to investigate and the effectiveness of its law enforcement methods; and (8) joint FWS-state investigations of large-scale illegal commercial operations and massive illegal harvesting of waterfowl worked well, but reductions in FWS staffing and operating funds, coupled with its focus on large-scale operations, rendered FWS unable to respond to many state requests for assistance. Fisheries: Commerce Needs to Improve Fisheries Management in the North Pacific. GAO/RCED-91-96. March 28, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed: (1) federal management of the groundfish fishery in the Bering Sea and the Gulf of Alaska; (2) systems for calculating domestic processing capability; and (3) systems for releasing surplus allocations to joint-venture fishermen. FINDINGS: GAO found that: (1) the North Pacific Fishery Management Council’s recommended 2-million metric ton cap for groundfish in the Bering Sea was conservative, based on the National Marine Fisheries Service’s (NMFS) 1984 estimate; (2) the Council maintained the conservative cap to Americanize the fishery, protect markets for groundfish, and sustain the ecological balance; (3) domestic processors provided NMFS with preseason estimates which were 43 percent higher than actual use, and NMFS believed that estimates were inflated primarily to limit or eliminate allocations to joint-venture and foreign fishermen; (4) the system for allocating groundfish often gave domestic processors larger initial allocations than they needed and reduced the allocations to joint-venture and foreign fishermen; and (5) joint-venture and foreign fishing in the North Pacific fishery was eliminated when all allocations of groundfish in the Gulf of Alaska and Bering Sea went to domestic processors in 1990 and 1991. Coast Guard: Millions in Federal Costs May Not Be Recovered From Exxon Valdez Oil Spill. GAO/RCED-91-68. March 5, 1991.
BACKGROUND: Pursuant to a congressional request, GAO provided information on the Exxon Valdez oil spill, focusing on: (1) the total spill-related costs reported as of June 30, 1990; (2) the extent of the oil carrier company’s reimbursement to the government for spill-related costs through September 30, 1990; and (3) improvements needed in the reimbursement process in the event of future spills. FINDINGS: GAO found that: (1) the federal government, as of June 30, 1990, had spent almost $154 million on the spill, of which the carrier had reimbursed, or was processing reimbursement for, $123 million; (2) through June 30, 1990, 10 federal agencies reported spending $116.9 million for removal, $22.6 million for damage assessment, and $14.2 million for other spill-related costs; (3) 4 of those agencies accounted for 87 percent of total costs incurred, and the Department of Defense accounted for $62.2 million, the largest portion; (4) as of September 30, 1990, the oil carrier company reimbursed the federal government for $116.1 million of the $153.7 million agencies reported they spent on the spill; (5) the Coast Guard’s spill coordinator did not authorize for reimbursement a number of the agencies’ activities, since it did not believe that they were related to oil removal; (6) several agencies lost opportunities to obtain reimbursement from the Oil Spill Liability Trust Fund because of problems in tracking and billing their spill-related costs completely and accurately; (7) agencies estimated that future cleanup activities would require at least another $26 million; and (8) the Department of Justice was considering civil litigation against the oil carrier company to recover damage assessment and restoration costs. Animal Damage Control Program: Efforts to Protect Livestock from Predators. GAO/RCED-96-3. October 30, 1995. ABSTRACT: Efforts to protect livestock from predators, mainly coyotes, constitute the major activity of the Agriculture Department’s Animal Damage Control Program. In 1994, more than 100,000 predators were killed by the program’s field personnel. GAO found that Agriculture field personnel in California, Nevada, Texas, and Wyoming used lethal methods in essentially all instances to control livestock predators. Agriculture’s written policies and procedures call for field personnel to give preference to the use of nonlethal methods when practical and effective. However, according to program officials, this aspect of written guidance does not apply to the control of livestock predators. These officials said that in controlling livestock predators, nonlethal methods, such as fencing and the use of herders and guard dogs, are more appropriately used by ranchers, have limited effectiveness, and are impractical for field personnel to use. Restoring the Everglades: Public Participation in Federal Efforts. GAO/RCED-96-5. October 24, 1995. ABSTRACT: This report discusses what can be learned about federal and nonfederal collaboration and consensus-building in South Florida that may be applicable elsewhere. Ecosystem Management: Additional Actions Needed to Adequately Test a Promising Approach. GAO/T-RCED-94-308. September 20, 1994. ABSTRACT: The “ecosystem” approach to managing the nation’s lands and natural resources stresses that plant and animal communities are interdependent and interact with their physical environment to form ecosystems that span federal and nonfederal lands.
GAO found that the four primary federal land management agencies—the National Park Service, the Bureau of Land Management, the Fish and Wildlife Service, and the Forest Service—have started to implement ecosystem management. In addition, the administration’s fiscal year 1995 budget request includes $700 million for ecosystem management initiatives. GAO recognizes that, compared with the existing federal approach to land management, ecosystem management may require greater flexibility in planning; in budgeting, authorizing, and appropriating funds; and in adapting management on the basis of new information. However, GAO believes that if ecosystem management implementation is to move forward, it must advance beyond unclear priorities and broad principles. Clear goals and practical steps for implementing ecosystem management need to be established and progress in implementing this approach needs to be regularly assessed and reported. Ecosystem Management: Additional Actions Needed to Adequately Test a Promising Approach. GAO/RCED-94-111. August 16, 1994. ABSTRACT: GAO believes that if ecosystem management implementation is to move forward, it must advance beyond unclear priorities and broad principles. Clear goals and practical steps for implementing ecosystem management need to be established and progress in implementing this approach needs to be regularly assessed and reported. GAO summarized this report in testimony before Congress; see: Ecosystem Management: Additional Actions Needed to Adequately Test a Promising Approach, by James Duffus III, Director of Natural Resources Management Issues, before the Subcommittee on Oversight and Investigations, House Committee on Natural Resources, the Subcommittee on Environment and Natural Resources, House Committee on Merchant Marine and Fisheries, and the Subcommittee on Specialty Crops and Natural Resources, House Committee on Agriculture. GAO/T-RCED-94-308, Sept. 20, 1994 (9 pages). Federal Land Management: Status and Uses of Wilderness Study Areas. GAO/RCED-93-151. September 23, 1993. ABSTRACT: In response to congressional concerns about the alleged degradation of areas being considered for possible inclusion in the National Wilderness Preservation System, this report provides information on the types and effects of activities in these areas, which are managed by the Bureau of Land Management and the Forest Service. GAO discusses (1) the legislative guidance and the agency policies governing wilderness study area management, (2) various activities and uses occurring in the agencies’ study areas, (3) ways these activities affect the areas, and (4) agency actions to monitor and restrict these uses and to repair resulting damage. Congress has allowed many different uses, such as primitive recreation and grazing, to occur in these areas. In the locations GAO visited, the effects and damage seemed to be concentrated in relatively small and accessible areas. Because people have various views on “wilderness,” they will also have different opinions about the severity of “man’s imprint” on potential and designated wilderness. The final decision about an area’s suitability for wilderness ultimately rests with Congress. Ranching Operations on Public Lands. GAO/RCED-93-212R. August 17, 1993. BACKGROUND: Pursuant to a congressional request, GAO provided information on ranching operations on public lands, including the contributions these ranching operations make to wildlife and to improving the condition of the federal lands.
GAO noted that: (1) one-third of the cattle in 11 western states graze at least part of the year on federal lands; (2) the extent to which ranching operations are dependent on federal lands varies by state and region; (3) federal lands are generally of lower quality and not as productive as private and state lands; and (4) the operating size of many livestock operations is affected by the amount of federal range land available during seasons of feed shortage on privately-owned lands. Large Grazing Permits. GAO/RCED-93-190RS. July 16, 1993. BACKGROUND: Pursuant to a congressional request, GAO provided permit holders’ addresses and phone numbers missing from its listing of the top 500 Bureau of Land Management and Forest Service grazing permits. GAO noted that it could not provide the unlisted phone number of one permit holder and that the other permit had been cancelled, sold, or transferred. Large Grazing Permits. GAO/RCED-93-190R. June 25, 1993. BACKGROUND: Pursuant to a congressional request, GAO provided information on the top 500 grazing permits issued by the Bureau of Land Management and the Forest Service. GAO noted that: (1) it could not provide some of the permittees’ addresses and phone numbers due to time constraints and unavailable information; (2) the information on the permits may not correspond to the actual livestock operators; (3) some operators hold more than one permit and some permits are issued to associations that represent many operators; and (4) some operators who do not hold one of the top permits may hold several smaller permits which raise their aggregate grazing level higher than that of an operator who holds one of the top 500 permits. Rangeland Management: Profile of the Forest Service’s Grazing Allotments and Permittees. GAO/RCED-93-141FS. April 28, 1993. ABSTRACT: This fact sheet profiles the Forest Service’s grazing allotments and permittees. GAO grouped the allotment and permittee information into several categories, emphasizing the 500 largest and smallest allotments and permittees. In general, grazing allotments in the western United States were concentrated among the largest ranchers. The 500 largest allotments GAO studied encompassed more than 29 million acres, or about 32 percent of the total allotment acreage. In contrast, the 500 smallest allotments accounted for about 49,000 acres, or 0.05 percent of the total allotment acreage. Similarly, the 500 permittees with the highest livestock grazing levels accounted for nearly 4.5 million animal months, or nearly half of the total number of animal months. The 500 permittees with the lowest livestock grazing levels accounted for about 8,500 animal months, or 0.09 percent of the total number of animal months allowed. Rangeland Management: BLM’s Range Improvement Project Data Base Is Incomplete and Inaccurate. GAO/RCED-93-92. April 5, 1993. ABSTRACT: The Bureau of Land Management (BLM) spent about $18 million in fiscal years 1990 and 1991 to improve the public rangeland. These funds came from fees paid by ranchers to graze their livestock on BLM land. The law requires that the funds be used for projects such as fencing, weed control, and water development that benefit rangeland resources, including wildlife, watersheds, and livestock. This report discusses how range improvements are accounted for, including (1) the types of range improvement projects funded, (2) the cost of each project, and (3) the rangeland resources benefiting from these projects. GAO also provides information on the role that grazing advisory boards play in determining which range improvement projects are funded each year.
Wilderness: Effects of Designation on Economy and Grazing in Utah. GAO/RCED-93-11. December 29, 1992. ABSTRACT: GAO reviewed a study of the economic effects of wilderness designation in Utah and found that the study’s methodology is flawed because, among other things, it inflates the total effects of wilderness designation by not discounting future cash flows and by double-counting projected lost revenues. The limitations of this study led GAO to conclude that the effect on Utah’s economy of designating more acreage as wilderness has not been adequately quantified. Likewise, the effect of wilderness designation on livestock grazing in Utah has not been quantified. Rangeland Management: Profile of the Bureau of Land Management’s Grazing Allotments and Permits. GAO/RCED-92-213FS. June 10, 1992. ABSTRACT: This fact sheet provides information on livestock grazing on public rangeland managed by the Department of the Interior’s Bureau of Land Management (BLM). GAO discusses (1) the number, the average acreage, and the average stocking rate of BLM allotments and (2) the total and the average number of animal unit months—the amount of forage needed to feed one 1,000-pound cow, a horse, or five sheep for a month—covered by grazing permits. GAO groups the information into several categories, emphasizing the 500 largest and 500 smallest allotments and permits. BLM Resource Allocation. GAO/RCED-92-181R. May 20, 1992. BACKGROUND: Pursuant to a congressional request, GAO provided information on the Bureau of Land Management’s (BLM) fiscal year (FY) 1991 and FY 1992 budget and staff allocations for nine western states in management programs addressing oil and gas, coal, rangeland, cultural resources, wilderness, recreation resources, and resource planning. GAO noted that: (1) the BLM resource allocation process, including its budget development phase, takes place over 3 fiscal years; (2) BLM state offices adjust current budgets for such factors as inflation, administrative priorities, and initiatives, to develop new budgets; (3) numerous BLM, Department of the Interior, and Office of Management and Budget officials, as well as the President and Congress, review and revise the proposed budgets over the 3-year development period; (4) total BLM FY 1992 budget allocations for the nine states ranged from $29.8 million to $53.2 million; and (5) total BLM FY 1992 staff allocations ranged from 510 full-time equivalents (FTE) to 975 FTE. Rangeland Management: Results of Recent Work Addressing the Performance of Land Management Agencies. GAO/T-RCED-92-60. May 12, 1992. BACKGROUND: GAO discussed its work on public rangeland management, focusing on: (1) its response to a consultant’s critique of three GAO reports issued between 1988 and 1990 on rangeland management; and (2) other reports it has issued regarding rangeland monitoring and livestock grazing activity.
GAO noted that: (1) the consultant made numerous criticisms about GAO reports regarding grazing allotments, riparian area restoration, and the federal wild horse program, but GAO believes that the critique includes little factual data to substantiate its assertions and misrepresents report findings to support its positions; (2) other federal and state agencies conducting similar studies reached conclusions that were similar to GAO conclusions; (3) both the Bureau of Land Management (BLM) and the Forest Service have taken actions to address the issues raised in the GAO reports; and (4) its reports on BLM and Service rangeland monitoring continue to indicate that neither agency has sufficient staffing and funding to effectively administer or evaluate grazing activities. Rangeland Management: Assessment of Nevada Consulting Firm’s Critique of Three GAO Reports. GAO/RCED-92-178R. May 4, 1992. Contacts and Documents Reviewed. GAO/RCED-92-193R. May 4, 1992. ABSTRACT: GAO reviewed a January 1992 report by a Nevada consulting firm that critiqued three GAO reports on management of the western public rangeland by the Bureau of Land Management and the Forest Service. Subjects addressed included declining and overstocked grazing allotments, riparian area restoration, and the federal wild horse program. GAO carefully examined both the consulting firm’s analysis of GAO’s reports as well as GAO’s adherence to its own standards, policies, and procedures. GAO is confident that its work was done with due professional care consistent with generally accepted government auditing standards and that its findings are well supported, its conclusions flow logically from the facts, and its recommendations offer reasonable suggestions for addressing the problems identified. The first report provides GAO’s point-by-point responses to the charges made in the consulting firm’s report, while the second provides the titles of the documents GAO reviewed and the names of individuals GAO contacted in preparing its reports. GAO summarized these reports, along with two other recent reports on rangeland management (GAO/RCED-92-52, Feb. 24, 1992, and GAO/RCED-92-12, Nov. 26, 1991) in testimony before Congress; see: Rangeland Management: Results of Recent Work Addressing the Performance of Land Management Agencies, by J. Dexter Peach, Assistant Comptroller General for Resources, Community, and Economic Development Programs, before the Subcommittee on National Parks and Public Lands, House Committee on Interior and Insular Affairs. GAO/T-RCED-92-60, May 12, 1992 (10 pages). Grazing Fees: BLM’s Allocation of Revenues to Montana Appears Accurate. GAO/RCED-92-95. March 11, 1992. ABSTRACT: BLM’s allocation of grazing fee revenues to Montana appears accurate, although errors can occur if data are entered incorrectly and mistakes are not caught and corrected. Although GAO found several instances of inaccurate data entry, BLM had corrected them by the time of GAO’s review. With the formation of a committee to identify and implement edit-checks needed to refine its system, BLM has started to ensure greater accuracy of the information in the system. GAO believes that these efforts are worthwhile and should be continued. Management of Artwork: Steps Taken to Preserve and Protect Bureau of Reclamation’s Collection. GAO/RCED-92-92. February 28, 1992.
ABSTRACT: In the late 1960s, the Department of the Interior’s Bureau of Reclamation commissioned artwork depicting its water projects in the West. Because of inadequate record-keeping and controls, the Bureau has been unable to locate about 40 percent of the paintings, watercolors, and sketches in its collection. Some of the missing artwork may have been lost or stolen, and other pieces may have been returned to the original artists. The Bureau has done what it can to identify and locate the missing pieces, and since 1987 it has strengthened its accountability and controls over the remaining 201 pieces of art. Few of these pieces have been seriously damaged, and the Bureau has begun restoring the most valuable among them. The Bureau has not yet decided, however, how best to display its collection in offices and public facilities or loan out pieces for exhibit after their restoration. Rangeland Management: Interior’s Monitoring Has Fallen Short of Agency Requirements. GAO/RCED-92-51. February 24, 1992. ABSTRACT: Interior’s rangeland monitoring has fallen short of agency requirements, limiting its ability to protect rangelands from grazing damage and to restore damaged lands because of insufficient funding and staff. Rangeland Management: BLM’s Hot Desert Grazing Program Merits Reconsideration. GAO/RCED-92-12. November 26, 1991. ABSTRACT: The debate over the effects of domestic livestock grazing is particularly important in the nation’s so-called hot deserts—the Mojave, the Sonoran, and the Chihuahuan—because of the fragile ecosystems there and the length of time it takes for damaged areas to recover. GAO concludes that current livestock grazing activity on Bureau of Land Management (BLM) allotments in hot desert areas risks long-term environmental damage while not generating enough revenues to provide for adequate management. According to recent data, the economic benefits derived from livestock grazing on BLM lands in the hot desert areas are minimal. The primary economic benefits accrue to about 1,000 livestock operators who hold livestock grazing permits in these areas. Yet many of these operators derive little income from ranching the public lands; instead, they place a premium on the traditional lifestyle they are able to maintain via the permits. Conversely, other public land users value the use of desert lands for environmental preservation and recreation. GAO found that BLM lacks the staff needed to collect and evaluate data measuring the impact of livestock grazing on many desert allotments. Without these data, BLM is in no position to assess livestock usage of desert allotments and change usage as needed. Surface Mining: Management of the Abandoned Mine Land Fund. GAO/RCED-91-192. July 25, 1991. BACKGROUND: Pursuant to a congressional request, GAO examined: (1) the amount of Abandoned Mine Land (AML) funds the Office of Surface Mining Reclamation and Enforcement (OSMRE) and the Soil Conservation Service (SCS) expended for administrative costs for fiscal years (FY) 1985 through 1990; and (2) whether OSMRE and SCS funded reclamation projects in accordance with the priorities set forth in the Surface Mining Control and Reclamation Act of 1977 (SMCRA).
construction grants, a precise figure on the amount of AML funds actually spent on administrative expenses is not readily discernible; (3) SCS estimated that it spent $6.6 million to administer the Rural Abandoned Mine Land Program (RAMP) between FY 1985 and 1990 and RAMP projects funded in this time generally fell under the two highest priority categories of the six set forth in SMCRA; (4) OSMRE spent about $137.3 million administering the overall AML program between FY 1985 and 1990; (5) states generally funded reclamation projects in accordance with SMCRA priorities and each participating state has its own OSMRE-approved ranking system to help guide project selection; and (6) OSMRE annual oversight reports found few major project selection problems during FY 1985 through 1990. Wildlife Management: Problems Being Experienced With Current Monitoring Approach. GAO/RCED-91-123. July 22, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Forest Service’s management indicator species approach to monitoring wildlife and their habitat in national forests, focusing on the cost-effectiveness and ultimate usefulness of this approach. FINDINGS: GAO found that: (1) although the management indicator approach is based on sound theory, several practical drawbacks exist which raise questions about whether data collected on selected species can provide the basis for drawing conclusions on overall habitat conditions; (2) the costs of monitoring indicator species populations were prohibitive, since the cost of monitoring increased as the population of the species being monitored decreased or as the size of the habitat increased; (3) even when planned data collection efforts were completed using the management indicator species approach to monitoring, the data had limited usefulness because they revealed population changes without conclusively relating observed changes to overall habitat conditions or Service management actions; (4) although Service headquarters officials acknowledge that problems exist in field implementation of the management indicator species approach, they believe that these difficulties stem more from the application of the management indicator species principle than from fundamental weaknesses with the concept itself; and (5) Service headquarters is revising its national direction on wildlife and wildlife habitat monitoring. Rangeland Management: Comparison of Rangeland Condition Reports. GAO/RCED-91-191. July 18, 1991. BACKGROUND: Pursuant to a congressional request, GAO followed up on its 1988 report on the Bureau of Land Management’s (BLM) and Forest Service’s rangeland management programs, comparing the conclusions and analyzing the findings of two studies conducted by BLM and the Natural Resources Defense Council (NRDC) on the condition of the public rangeland under BLM jurisdiction. 
FINDINGS: GAO found that: (1) although NRDC and BLM reports reached different conclusions on the overall condition of the public rangeland, they were not necessarily inconsistent with each other; (2) the different conclusions were attributable more to data interpretation and presentation than to differences in the data; (3) BLM based its conclusion that current range conditions are better than they have been in the past century on studies that lacked supporting documentation and used different methodologies; (4) had BLM calculated its percentages solely on the basis of the land for which it had condition information, as NRDC did, its percentage of rangeland in fair or poor condition would have increased to 61 percent, much closer to the NRDC percentage; and (5) NRDC concluded that the data presented in its report did not show any significant improvement in rangeland condition over the data in its 1985 rangeland status report, and BLM noted that no substantial change should be expected to occur within only a 4-year period. Public Land Management: Observations on Management of Federal Wild Horse Program. GAO/T-RCED-91-71. June 20, 1991. and anticipated full statewide implementation in about 4 to 5 years; (2) published a rule in September 1990 making it difficult for one person to gain control over a large number of horses; and (3) took such actions to improve the prison halter training effort as establishing quality standards for the training being provided, implementing tighter controls over the age of horses receiving training, and limiting the amount of time horses could spend in training facilities. Rangeland Management: Current Formula Keeps Grazing Fees Low. GAO/RCED-91-185BR. June 11, 1991. BACKGROUND: Pursuant to a congressional request, GAO: (1) assessed the soundness of the formula for computing grazing fees on most federal lands; and (2) compared the formula results to those of alternative formulas using updated cost and price data. FINDINGS: GAO found that: (1) although the current formula kept grazing fees low, it failed to recover reasonable program costs, since it did not produce a fee that covered the government’s cost to manage the grazing program; (2) the current formula also failed to follow the rise in grazing land lease rates paid for private land and to provide a revenue base that could be used to better manage and improve federal land so that it would remain a productive public resource in the future; (3) alternative formulas produced higher fees than the current formula and tended to increase the fees faster over time; and (4) economists preferred a formula that would adjust a base value by a single index and make no additional adjustments for the rancher’s ability to pay. Abandoned Mine Reclamation: Interior May Have Approved State Shifts to Noncoal Projects Prematurely. GAO/RCED-91-162. June 7, 1991. BACKGROUND: Pursuant to a congressional request, GAO reported on the Department of the Interior’s Office of Surface Mining Reclamation and Enforcement’s (OSMRE) process for allowing states to spend federal surface coal mine reclamation funds to address noncoal reclamation problems, focusing on whether OSMRE ensured that states met the certification requirements. 
(SMCRA) priorities related to public health, safety, and general welfare, restoration of land and water resources and the environment, research and development, and public facilities and land; (3) to receive discretionary funds, states needed to show that they had reclamation needs as reflected in a national inventory of abandoned coal mine land problem areas; (4) coal-related reclamation projects competed with noncoal reclamation sites for funds that were limited to state share monies; (5) when approving a certification request, OSMRE did not independently verify whether a state had addressed all priority-3 through priority-6 coal projects, relying on the governor’s certification statement that all coal problems had been addressed; (6) the lack of OSMRE policy and guidance to address SMCRA certification requirements contributed to the confusion over certification; and (7) OSMRE did not effectively communicate that states would lose further access to discretionary funds once the certification had been approved. Coal Mine Subsidence: Several States May Not Meet Federal Insurance Program Objectives. GAO/RCED-91-140. May 28, 1991. BACKGROUND: Pursuant to a congressional request, GAO examined: (1) the Department of the Interior’s Office of Surface Mining Reclamation and Enforcement’s (OSMRE) efforts to implement the federally assisted coal mine subsidence insurance program; and (2) six states’ efforts to develop self-sustaining insurance programs. FINDINGS: GAO found that: (1) after 5 years’ experience with the program, two of the six states that received grants may not be progressing toward self-sustainability; (2) state officials noted that their participation rates were too low to generate sufficient premium income to meet the insurance reserve requirement for anticipated claims; (3) state officials also noted that low participation rates greatly increased the risk that a major subsidence event would threaten solvency; (4) OSMRE lacked effective management of federal grants and did not provide the oversight necessary to ensure that program objectives were met; and (5) OSMRE cited the limited funds involved and the resources needed to actively participate in state-administered programs as the reason for its passive grants management. Rangeland Management: Forest Service Not Performing Needed Monitoring of Grazing Allotments. GAO/RCED-91-148. May 16, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Forest Service’s efforts to implement recommendations to: (1) ensure that range managers identify all grazing allotments thought to be overstocked or in declining condition; and (2) concentrate monitoring and other range management activities on those lands. FINDINGS: GAO found that: (1) of the 9,217 grazing allotments in the Service’s 6 western regions, range managers identified 2,183 allotments as in declining condition or overstocked; (2) the Service made little progress in conducting the follow-up monitoring necessary to identify improper grazing practices and devise corrective action; (3) the Service attributed its limited monitoring to staff constraints and limited resources; (4) although the Service gave priority attention to monitoring allotments classified as declining or overstocked, five regional offices monitored only 13 percent of such allotments; and (5) the number of Service range managers decreased from over 1,000 to under 700 between fiscal years 1979 and 1990. Public Land Management: Issues Related to the Reauthorization of the Bureau of Land Management.
GAO/T-RCED-91-20. March 12, 1991. BACKGROUND: GAO discussed the Bureau of Land Management’s (BLM) management of the public lands and issues related to BLM reauthorization. GAO noted that although BLM continues to make only limited progress in accomplishing its land management responsibilities, it has taken such specific actions as: (1) establishing management plans for all BLM riparian and wetland acreage and estimating the additional funds and staff needed for implementation; (2) establishing national agreements with 12 private wildlife and conservation organizations to foster projects to improve wildlife and fish habitats; and (3) issuing a hardrock mining policy that required all miners disturbing more than 5 acres to post financial guarantees to ensure reclamation. GAO believes that provisions of the proposed reauthorization legislation would improve BLM management of public lands, but staffing and funding constraints could significantly impede its progress. Public Land Management: Attention to Wildlife Is Limited. GAO/RCED-91-64. March 7, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed: (1) how federal land management agencies considered wildlife interests during federal land use planning processes; and (2) the impact of federal management practices on wildlife conditions. FINDINGS: GAO found that: (1) no legislation existed that specified an appropriate level of consideration of wildlife interests in federal land management; (2) wildlife protection and enhancement activities received between 3 percent and 7 percent of available BLM and Service staffing and funding; (3) while BLM and the Service uniformly considered wildlife needs during land use planning, when conflicts occurred, the agencies frequently favored consumptive interests over wildlife needs; (4) BLM and the Service did not always implement actions to benefit wildlife that were included in land use plans; (5) data were not available to judge the overall effect of BLM and Service policies and practices on wildlife conditions; (6) the agencies’ land use priorities, budgets, and staffing met grazing, logging, and mining objectives first and provided for wildlife interests as circumstances permitted; and (7) BLM and the Service initiated efforts to provide more balanced consideration of wildlife needs in their management activities. Sale of NPRs & Oil Shale Reserves. GAO/RCED-96-28R. October 17, 1995. BACKGROUND: Pursuant to a congressional request, GAO reviewed proposed legislation regarding the sale of six Naval Petroleum and Oil Shale Reserves, focusing on whether the proposed sales method will provide an equal opportunity to all prospective buyers and yield maximum funds to the federal government. GAO noted that the proposed sales method: (1) should provide an equal opportunity to all prospective buyers; (2) stipulates the acceptance of the highest responsible bid that meets the minimum acceptable price; and (3) should yield a fair market value to the federal government. Terminating Federal Helium Refining. GAO/RCED-95-252R. August 28, 1995. BACKGROUND: Pursuant to a congressional request, GAO provided information on the possible consequences of proposed legislation to end the Bureau of Mines’ production and sale of refined helium.
GAO noted that: (1) the Bureau estimates that production and sale of refined helium could cease within 6 months after legislation is passed and that other related actions could be completed within 2 years if no contingencies are encountered; (2) if the helium program is terminated, program costs would decrease to $20.6 million in the first year and to $3.5 million in the second year; (3) the Bureau estimates that all environmental requirements could be met within 2 years after passage of legislation; (4) the Bureau plans to use standard federal property disposal procedures to dispose of all property associated with helium refining, but private helium refiners have shown no interest in purchasing these assets; (5) about two-thirds of the helium program’s employees would be subject to a reduction in force if the program is discontinued, while the remaining one-third would be retained or retired; (6) the National Aeronautics and Space Administration (NASA) is the only federal user concerned about the availability and cost of refined helium to meet its unique and sporadic needs if the program is terminated; and (7) the Administration’s proposal to terminate the helium program differs from the House’s proposal and calls for allowing more time for termination, mainly to accommodate NASA needs, abolishing the Helium Fund and depositing sale proceeds into the U.S. Treasury, and capping remaining program spending at $5 million annually. Trans-Alaska Pipeline: Actions to Improve Safety Are Under Way. GAO/RCED-95-162. August 1, 1995. ABSTRACT: The Trans-Alaska Pipeline System, run by the Alyeska Pipeline Service Company, transports nearly 20 percent of the nation’s domestically produced oil and has operated for nearly 20 years without a major oil spill. However, throughout the pipeline’s years of construction and operation, problems with the condition of the pipeline, the quality assurance program of its operator, and the effectiveness of government monitoring have been reported. These problems have resulted in continued congressional oversight. A study commissioned by the Interior Department in August 1993 identified 22 categories of substantial—and potentially threatening—deficiencies in Alyeska’s management and operation of the pipeline. Other audits have identified additional deficiencies. This report (1) assesses Alyeska’s progress in correcting these deficiencies; (2) determines whether the corrective measures planned for three areas—electrical systems, quality, and preventive maintenance—will address the deficiencies; (3) discusses whether regulators are improving regulatory oversight of the pipeline; and (4) identifies the root causes of the deficiencies. Sale of NPR-1. GAO/RCED-95-255R. August 1, 1995. BACKGROUND: Pursuant to a congressional request, GAO reviewed draft legislation proposing the sale of Naval Petroleum Reserve Number 1 (NPR-1), focusing on: (1) the proposed sales method; and (2) improvements to ensure that the government receives the best value from the sale.
GAO noted that: (1) the proposed sales method appears to provide equal opportunity to all prospective buyers and yield fair market value for NPR-1; (2) the sales method stipulates that the government will accept the highest responsible offer that meets or exceeds the minimum acceptable price, which will be based on the net present value of NPR-1, and requires that NPR-1 ownership shares be finalized before the sale; and (3) under the proposed legislation, the Secretary of Energy may use independent experts to value NPR-1 and finalize ownership shares and an investment banker to administer the sale. GAO also noted that the draft legislation needs to: (1) provide for adequate notification and information dissemination to improve potential buyers’ participation; (2) resolve conflicting time requirements; and (3) provide for congressional notification of potential obstacles to the sale. Naval Petroleum Reserve: Opportunities Exist to Enhance Its Value to the Taxpayer. GAO/T-RCED-95-136. March 22, 1995. ABSTRACT: This testimony focuses on ways to enhance the profitability of the Naval Petroleum and Oil Shale Reserves. Regardless of what alternative is finally adopted for the Naval Petroleum and Oil Shale Reserves, GAO believes that the goal should be to protect the interests of taxpayers by getting a reasonable return on these assets. If a decision is made to form a government corporation, care should be taken to establish a financially sound corporate entity with as few government restrictions on earning profits as is possible. If a decision is made to sell the reserves, the government must ensure that it receives fair market value for them. Steps that would be compatible with any more fundamental management changes can be taken now, such as giving the Energy Department more flexibility to set the rate of production so as to maximize profits and marketing Elk Hills oil more aggressively. Naval Petroleum Reserves. GAO/RCED-95-141R. March 17, 1995. BACKGROUND: GAO provided information on the Naval Petroleum and Oil Shale Reserves. GAO noted that: (1) although the mission of the reserves has changed from emphasizing energy security to providing revenue to the Treasury, few measures have been taken to maximize profits; (2) profits could be increased at the Elk Hills, California, oil field by allowing the Department of Energy to set the rate of production, finalizing equity shares, sharing the risks of drilling wells to encourage new drilling ventures, establishing a more reliable price index, marketing Elk Hills oil more aggressively, lifting the ban on exporting Alaskan oil, and eliminating certain requirements and preferences; (3) establishing the Elk Hills field as a government corporation could increase profits greatly; (4) selling the reserves could result in a great return to the Treasury if the government set a sufficiently high minimum price and established a competitive bidding process; and (5) it is unclear what benefits could be realized by operating a government corporation to manage Elk Hills in fiscal year (FY) 1996 and sell it in FY 1997, as the Administration has proposed. Naval Petroleum Reserve: Opportunities Exist to Enhance Its Profitability. GAO/RCED-95-65. January 12, 1995. that expires in July 1995. Chevron believes that it can run the reserve more profitably than the government can, and in May 1995 it proposed taking over reserve operations.
Later, the Energy Department (DOE) suspended negotiations with Chevron on this proposal and recently began to solicit interest from other parties to operate the reserve. Like Chevron, DOE wants to lower the costs of operating the reserve. This report explores actions that DOE and Congress can now take to improve the reserve’s profitability. Mineral Resources: BLM Needs to Improve Controls Over Oil and Gas Lease Acreage Limitation. GAO/RCED-95-56. December 29, 1994. ABSTRACT: The Bureau of Land Management’s (BLM) internal controls cannot guarantee that federal oil and gas leases are not issued to parties who have exceeded the Mineral Leasing Act’s acreage limitation. BLM allows oil and gas lessees to self-certify that they have not exceeded the acreage limitation, and although the agency has procedures for auditing compliance with the requirement, BLM has not done a compliance audit since 1993 because it considers such audits a low priority. Even when audits were done, BLM’s strategy for selecting lessees was ineffective because it did not target parties approaching or appearing to exceed the acreage limitation. Finally, BLM has allowed companies that share the same officers, directors, or major stockholders to be considered separate leaseholders under the acreage limitation. GAO discovered one lessee who had exceeded the limitation by more than 190,000 acres in Wyoming and by nearly 27,000 acres in Nevada. Similarly, by presuming that companies are affiliated when they share the same officers, directors, or major stockholders, GAO identified five firms whose aggregate acreage exceeded the limit by more than 800,000 acres in Wyoming, 435,000 acres in New Mexico, and 86,000 acres in Nevada. Mineral Resources: Federal Coal-Leasing Program Needs Strengthening. GAO/RCED-94-10. September 16, 1994. ABSTRACT: This report discusses ways to strengthen the federal coal-leasing program, including the need to assess the environmental impacts of additional coal leasing and to consider projected demand in coal-leasing decisions. Naval Petroleum Reserve: Limited Opportunities Exist to Increase Revenues From Oil Sales in California. GAO/RCED-94-126. May 24, 1994. ABSTRACT: The government-owned and operated Naval Petroleum Reserve (NPR) in Elk Hills, California—the seventh largest oil field in the lower 48 states—generated oil sales revenues of $327 million in 1992. The Energy Department (DOE) sells most of this oil to California refiners through competitive bids. The prices received by the government for this oil have been lower than prices for crude oil in other parts of the country. GAO concludes that it will be difficult for DOE to boost revenues from NPR oil sales by selling oil to Gulf Coast or midcontinent oil refineries because this oil is of lower quality than other available crudes and shipping costs are high. This report explores other ways that DOE may be able to increase revenues. For example, DOE bills its customers more often than private oil producers do, resulting in buyers making lower bids to compensate for the higher administrative costs. DOE also does not market its oil as aggressively as private producers do. In testimony before Congress, GAO summarized this report and also discussed (1) the relative priority that should be given to several options for improving the readiness and expansion of the Strategic Petroleum Reserve and (2) the evolving mission of the International Energy Agency; see: Energy Policy: Energy Policy and Conservation Act Reauthorization, by Victor S. Rezendes, Director of Energy and Science Issues, before the Subcommittee on Energy and Power, House Committee on Energy and Commerce.
GAO/T-RCED-94-214, May 25, 1994 (15 pages). Offshore Oil and Gas Resources: Interior Can Improve Its Management of Lease Abandonment. GAO/RCED-94-82. May 11, 1994. costs. GAO focuses on MMS’ actions in the Gulf of Mexico because almost all Outer Continental Shelf oil and gas structures are located there. Mineral Resources: H.R. 3967—A Bill to Change How Federal Needs for Refined Helium Are Met. GAO/T-RCED-94-183. April 19, 1994. ABSTRACT: H.R. 3967 would change how the federal government’s helium needs are met by shifting helium refinement from the Interior Department’s Bureau of Mines to private industry. In addition, the bill would repay the helium program debt. Whether the federal budget will be helped or harmed by this legislation will depend on whether private industry can sell refined helium to the government at a lower price. Revenues from the disposal of the existing helium inventory could also affect the federal budget. The choice between Interior and private industry to meet federal helium needs is ultimately a public policy decision. GAO believes that H.R. 3967 provides a viable alternative for meeting current and foreseeable federal needs for helium with the potential for budgetary savings and repayment of the helium program debt. Mineral Resources: Hardrock Mining Reclamation. GAO/T-RCED-93-67. August 5, 1993. ABSTRACT: More than five years ago, GAO reported that it would cost nearly $300 million to reclaim abandoned, suspended, or unauthorized hardrock mining operations on federal land in 11 western states; cleanup estimates since then have ranged as high as $71.5 billion. No federal program or funding source now exists to ensure that past hardrock reclamation problems on government and private land are remedied. Accordingly, any public policy decision on how best to address these reclamation needs will have to carefully consider the workability of such a program and the source of funding. Arctic National Wildlife Refuge: An Assessment of Interior’s Estimate of an Economically Viable Oil Field. GAO/RCED-93-130. July 9, 1993. quantities of oil. This conclusion, however, does not take into account uncertainties in a field’s development potential that could arise from variations in future oil prices or costs. Given the uncertainties of future economic variables, such as oil prices and discount rates, GAO believes that Interior should have developed ranges of minimum economic field size estimates for each prospect and then run its model using the derived field sizes. This would have yielded a greater range of values to account for the uncertainty associated with estimating what constitutes an economically viable oil field in the refuge. Mineral Resources: Meeting Federal Needs for Helium. GAO/T-RCED-93-44. May 20, 1993. ABSTRACT: The federal government uses helium in the space program, weapons systems, and superconductivity research. The Helium Act of 1960 authorizes the Interior Department to conserve, buy, store, produce, and sell helium to meet federal needs. The act also requires federal agencies to buy most of their helium from the Bureau of Mines. GAO testified that the Bureau of Mines has acted to meet the act’s objectives. In addition, the helium program debt, which overshadows meaningful debate on the merits of the program, could be canceled without adversely affecting the federal budget. Finally, a reassessment of the objectives of the helium act is in order. Trans-Alaska Pipeline: Projections of Long-Term Viability Are Uncertain. GAO/RCED-93-69. April 8, 1993.
at the North Slope. GAO also looked at the reasonableness of DOE’s belief that it will take 10 to 12 years to develop new oil fields in the refuge. Mineral Royalties: Royalties in the Western States and in Major Mineral-Producing Countries. GAO/RCED-93-109. March 29, 1993. ABSTRACT: The Mining Law of 1872 governs mining for most minerals on federal lands, the vast majority of which are found in the western states and Alaska. This legislation allows individuals to stake claims on federal lands and mine ore, including copper, gold, and silver, without compensating the government. In contrast, the government has been receiving royalties for coal and natural gas on federal lands since the 1920s. Congress has considered amending the law to ensure that the public receives a fair return for minerals extracted but has yet to do so. This report looks at how 12 western states—Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming—share in the proceeds from minerals mined on state lands and on federal and private lands within each state. GAO also discusses how Australia, Canada, and South Africa—three of the largest mineral-producing countries—share in the proceeds from minerals mined in those countries. Mineral Resources: Meeting Federal Needs for Helium. GAO/RCED-93-1. October 30, 1992. ABSTRACT: Federal agencies use helium in everything from space programs to superconductivity research. The Helium Act of 1960, which seeks to conserve and provide a steady supply of this inert gas for essential government activities, requires federal agencies to buy most of their helium from the Department of the Interior’s Bureau of Mines. The act further provides that Interior price federal helium so that revenues from sales cover all program costs. This report discusses (1) actions that the Bureau has taken to meet the objectives of the 1960 act; (2) issues that should be considered when Congress decides how to meet current and foreseeable federal needs for helium, including whether the program debt in the Helium Fund should be canceled or repaid; and (3) three alternatives for meeting federal needs for helium—continue the Bureau’s existing program, require that all federal needs be met by the private sector, or allow federal agencies to choose to buy helium from either the Bureau or private industry. Royalty Compliance: Improvements Made in Interior’s Audit Strategy, But More Are Needed. GAO/RCED-93-3. October 29, 1992. ABSTRACT: During the past several years, the Department of the Interior has been collecting about $4 billion in royalties each year from oil and gas companies that hold mineral leases on federal or Indian lands. Although the Minerals Management Service (MMS) has substantially improved its strategy for auditing royalty payers, these audits still do not provide reasonable assurance that such royalty payments comply with applicable laws, rules, and regulations. The amount of royalties actually audited or verified is very small, increasing the likelihood that noncompliance will go undetected. In addition, the judgmental samples are not representative of all payers and leases; consequently, MMS cannot determine with any degree of confidence such things as the level of compliance by payers or the magnitude of underpayment—that is, the royalties at risk. MMS can, however, require payers to do the additional work needed to correct the system problems found by audits and to compute any additional royalties due.
An MMS task force issued a report in June 1991 recommending major changes to improve MMS’ strategy for auditing royalty payers, including the use of statistical sampling, a measure GAO supports. Mineral Resources: Value of Hardrock Minerals Extracted From and Remaining on Federal Lands. GAO/RCED-92-192. August 24, 1992. ABSTRACT: GAO surveyed mineral operators on the value of eight hardrock minerals—barite, copper, gold, lead, limestone, molybdenum, silver, and zinc—extracted from public lands in 12 western states. According to the questionnaire responses, the total value of these eight minerals extracted during 1990 was at least $1.2 billion. Almost $1 billion of this came from one state—Nevada. The total value of the remaining mineral reserves on federal lands at the end of 1990 was estimated at almost $65 billion. Mineral Resources: Proposed Revision to Coal Regulations. GAO/RCED-92-189. August 4, 1992. The proposed revision would redefine commercial quantities, cutting the required level of coal production from one percent of recoverable reserves to 0.3 percent. This change would significantly reduce the minimum production level now required to retain a federal coal lease. This report examines BLM’s justification for the proposed change. Location Dates for Mining Claims. GAO/RCED-92-199R. June 16, 1992. BACKGROUND: Pursuant to a congressional request, GAO commented on whether claims for lands discussed in its report on the mining law’s patent provision were located before the land escalated in value, and on the dates the claims were located. GAO noted that: (1) the claims for the 20 patent and 12 patent application sites were located from 1893 to 1986; (2) it identified the dates but did not independently verify them; and (3) the value of the land at the time it was claimed was not an issue raised in its report. Trans-Alaska Pipeline: Ensuring the Pipeline’s Security. GAO/RCED-92-58BR. November 27, 1991. ABSTRACT: The Trans-Alaska Pipeline System is responsible for transporting nearly a quarter of the nation’s domestically produced crude oil. This report reviews the security of the pipeline. It discusses (1) what federal and state agencies have done to assess the vulnerability of the pipeline to terrorists and (2) what these agencies and the Alyeska Pipeline Service Company have done to protect the pipeline. Mineral Resources: Federal Helium Purity Should Be Maintained. GAO/RCED-92-44. November 8, 1991. Without action to maintain it, the purity of the government’s stored helium would be degraded. Because larger volumes of the mixture of natural gas and helium must then be processed to extract and refine the less pure helium, the government could incur additional losses as high as $23.3 million in 1991 dollars through 2050. Mineral Resources: Interior’s Use of Oil and Gas Development Contracts. GAO/RCED-91-1. September 17, 1991. ABSTRACT: To prevent the concentration of control over federal oil and gas resources in a few companies or individuals, Congress has limited the number of acres of oil and gas leases that one party may control in a single state. An exception to this limitation involves lease acreage within the boundaries of development contracts. These contracts permit oil and gas lease operators and pipeline companies to contract with enough lessees to economically justify large-scale drilling operations for the production and transportation of oil and gas, subject to approval by the Secretary of the Interior, who must find that such contracts are in the public interest.
Since 1986 Interior has entered into or approved 10 contracts with 12 lease operators for exploration of largely unleased federal lands—ranging from about 180,000 to 3.5 million acres in four western states—and has designated them as development contracts. GAO believes that the 10 contracts do not satisfy the legal requirements for development contracts because they are for oil and gas exploration on largely unleased federal lands, rather than for developing existing leases. By designating the 10 contracts as development contracts, Interior has enabled 9 of the 12 contract parties to accumulate lease acreage that vastly exceeds the statutory acreage limitation. All nine of the contract parties were major or large independent oil companies. As a result, other parties who wish to participate in developing federal oil and gas resources within the four states may be adversely affected because the parties to Interior’s contracts have been able to compete for and obtain lease acreage beyond the statutory acreage limitation. Although Interior believes that the Secretary has the discretion under law to use development contracts in the current manner, in April 1989 it ceased issuing these contracts pending completion of GAO’s review. Congress needs to resolve the matter by amending the mineral leasing laws either to expressly permit or prohibit Interior’s entering into or approving development contracts for oil and gas on largely unleased federal lands or to increase or remove the acreage limitation. Trans-Alaska Pipeline: Regulators Have Not Ensured That Government Requirements Are Being Met. GAO/RCED-91-89. July 19, 1991. BACKGROUND: Pursuant to a congressional request, GAO examined the adequacy of regulatory oversight of the Trans-Alaska Pipeline System (TAPS), focusing on TAPS: (1) operational safety; (2) oil spill response capabilities; and (3) ability to protect the environment. FINDINGS: GAO found that: (1) several federal and state agencies had TAPS monitoring, oversight, and enforcement responsibilities; (2) regulators essentially accepted the pipeline operation contractor’s reports regarding TAPS conditions and did not independently evaluate corrosion prevention and detection systems; (3) although aware of deficiencies in the corrosion prevention and detection systems, regulators did not direct the contractor to take action until after the contractor detected significant pipeline corrosion in 1989; (4) regulators conducted little oversight of terminal operations; (5) regulatory review of the oil-spill response plan was cursory until after the Exxon Valdez oil spill, after which federal and state regulators reevaluated oil-spill risks and response capabilities; (6) regulators did not plan to require the contractor to conduct a drill to fully test its response capabilities; (7) there was no long-term monitoring program to assess TAPS’ overall environmental impact, making it difficult to assess oil-spill impacts or to identify the most appropriate containment, cleanup, and disposal technologies; (8) regulators did not have adequate systems to carry out their oversight responsibilities, did not dedicate sufficient staff for monitoring pipeline activities, and did not coordinate oversight activities to ensure comprehensive monitoring of all pipeline activities; and (9) several regulators assigned staff to a joint oversight office composed of federal and state agencies with statutory authority over TAPS. Mineral Resources: Increased Attention Being Given to Cyanide Operations. GAO/RCED-91-145. June 20, 1991.
FINDINGS: GAO found that: (1) there were 119 cyanide operations on federal land in Nevada, California, and Arizona, with 113 on lands managed by the Bureau of Land Management (BLM) and 6 on lands managed by the Forest Service; (2) cyanide operators reported over 9,000 cyanide-related wildlife deaths, mostly involving migratory waterfowl, between 1984 and 1990; (3) cyanide operators typically used hazing techniques to scare wildlife away from operations, but these techniques were not as effective over the long term as covering or fencing cyanide ponds; (4) examination of 31 inadvertent cyanide discharges from operations indicated minimal environmental damage; (5) BLM, the Forest Service, state agencies, and other federal agencies had adequate authority to regulate cyanide operations and enforce laws to protect wildlife and the environment from their potential hazards, but there was little coordination among the agencies, and the agencies had varying reporting requirements regarding cyanide operations, discharges, and wildlife deaths; (6) in August 1990, BLM issued a cyanide management policy, and Nevada recently enacted legislation requiring operators to obtain permits for cyanide ponds and report wildlife deaths, but Arizona, California, and the Forest Service lacked an overall cyanide management policy; and (7) BLM required quarterly inspection of cyanide operations, but the states and the Forest Service did not have minimum inspection requirements. Mineral Revenues: Interior Used Reasonable Approach to Assess Effect of 1988 Regulations. GAO/RCED-91-153. May 30, 1991. States and tribes expressed concern that the reports did not analyze data by individual state and tribe. Tax Incentives and Enhanced Oil Recovery Techniques. GAO/T-GGD-91-36. May 21, 1991. BACKGROUND: GAO discussed the use of tax incentives for increasing domestic oil production and exploration, focusing on enhanced oil recovery (EOR) techniques. GAO noted that: (1) Congress only sporadically reviewed tax expenditures, rarely compared their effectiveness to alternative mechanisms for achieving similar goals, and did not subject them to overall limits to control their total budgetary impact; (2) tax incentives for domestic oil production, in the form of building up the strategic petroleum reserve or such trade restrictions as tariffs or quotas, would increase production; (3) government subsidies for the use of EOR techniques would encourage firms to undertake risky petroleum exploration activities that could result in financial loss; (4) the tax expenditure approach favored projects that were close to being viable without the tax break and generated fewer inefficient projects than direct subsidies; (5) tax expenditures aimed at certain activities, such as EOR methods, offered the potential for giving a better return on the tax dollar; (6) it would be more cost-effective to target tax incentives at activities that did not already receive substantial tax breaks than at types of investments that already were eligible for favorable treatment; and (7) environmental effects must be considered in evaluating costs and benefits of the increased use of EOR. Mineral Revenues: Potential Cost to Repurchase Offshore Oil and Gas Leases. GAO/RCED-91-93. February 22, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the range of potential costs to the federal government for the cancellation of 123 oil and gas leases offshore Alaska, Florida, and North Carolina.
FINDINGS: GAO found that the: (1) bonuses, rents, and associated interest ranged from $889.4 million to $970.7 million for the 123 leases, as of December 31, 1990; (2) federal government was only obligated to pay the lessee the fair value of the lease or sunk costs plus interest, which accrues from lease suspension to lease cancellation; (3) lessees spent $1.5 million for the Alaska leases, up to $21 million for the Florida leases, and $20 million for the North Carolina leases; and (4) federal government, as of December 31, 1990, would be required to reimburse the lessees for about $1 billion under the sunk cost approach if it canceled the 123 leases. Animas-La Plata Project: Status and Legislative Framework. GAO/RCED-96-1. November 17, 1995. ABSTRACT: The Interior Department’s Animas-La Plata Project was designed to store water and divert it to arid regions in southwestern Colorado and northwestern New Mexico, mainly by channeling water from the Animas River to the La Plata River basin. Before beginning construction of the project, the Interior Department is required to determine whether the project would jeopardize the continued existence of any endangered species. This report provides information on the history and status of the Animas-La Plata project, the legislative framework provided for the project by the 1988 Colorado Ute Indian Water Rights Settlement Act and the Endangered Species Act, the consultation between the Bureau of Reclamation and the Fish and Wildlife Service under the Endangered Species Act, and the project’s relationship to another congressionally authorized project—the Navajo Indian Irrigation Project. Midwest Flood: Information on the Performance, Effects, and Control of Levees. GAO/RCED-95-125. August 7, 1995. ABSTRACT: The intense rainfall that deluged the upper Mississippi River basin in the spring and summer of 1993 caused the largest flood ever measured at St. Louis. This unprecedented event in nine midwestern states saw the highest flood crests ever recorded at 95 measuring stations on the region’s rivers. The catastrophic flooding caused 95 deaths and extensive property damage and forced the evacuation of tens of thousands of people. The President declared 505 counties to be federal disaster areas, and estimates of the damage have ranged as high as $16 billion. This report examines the operation of the levees, which are earthen or masonry structures, including floodwalls, that are typically built along rivers to keep floodwaters from overflowing adjacent floodplains. GAO reviews the extent to which (1) the U.S. Army Corps of Engineers’ flood control levees prevented flooding and reduced damage during the event; (2) the federal levees increased the height of the flooding and contributed to the damage; and (3) federal, state, and local governments exercise control over the design, construction, placement, and maintenance of nonfederal levees. Central Arizona Project: Costs and Benefits of Acquiring the Harquahala Water Entitlement. GAO/RCED-95-102. June 5, 1995. ABSTRACT: The Fort McDowell Indian Community Water Rights Settlement Act of 1990 requires the Interior Department to acquire nearly 14,000 acre-feet of water to complete the settlement of the Fort McDowell Indian Community’s water rights claim against Arizona parties and the federal government.
The Interior Department acquired the water from the Harquahala Valley Irrigation District, one of 10 irrigation districts that contracted for non-Indian agricultural water from Interior’s Central Arizona Project. This report provides information on how Harquahala became a source of water for the settlement, the federal government’s costs to acquire the water, and the benefits accrued to the parties involved in the acquisition. The report also discusses the status of Agriculture Department loans made to Harquahala landowners. Water Quality: Information on Salinity Control Projects in the Colorado River Basin. GAO/T-RCED-95-185. May 11, 1995. ABSTRACT: This testimony, by James Duffus III, Director of Natural Resources Management Issues, before the Subcommittee on Water and Power Resources, House Committee on Resources, summarizes the March 1995 report below (GAO/RCED-95-58). Water Quality: Information on Salinity Control Projects in the Colorado River Basin. GAO/RCED-95-58. March 29, 1995. ABSTRACT: Through fiscal year 1994, the Interior and Agriculture Departments (USDA) spent $362 million on salinity control projects in six states. Interior’s Bureau of Reclamation and USDA estimate that they will spend about $428 million more for additional projects, while Interior’s Bureau of Land Management expects to spend $800,000 in fiscal year 1995. In selecting salinity control methods, the agencies consider several factors, key among them the methods’ effectiveness and cost. According to Interior’s measurements of the salinity control program’s effectiveness, salinity levels in the Colorado River since 1974 have been below limits set by the Clean Water Act. With completion of the projects under construction or planned, salinity levels should stay within the established limits beyond 2010. GAO summarized this report in testimony before Congress; see: Water Quality: Information on Salinity Control Projects in the Colorado River Basin, by James Duffus III, Director of Natural Resources Management Issues, before the Subcommittee on Water and Power Resources, House Committee on Resources. GAO/T-RCED-95-185, May 11, 1995 (8 pages). Water Resources: Flooding on Easement Lands Within the Red Rock, Iowa, Reservoir. GAO/RCED-95-4. December 23, 1994. ABSTRACT: Before the Red Rock Dam and Lake Project near Des Moines, Iowa, began operating in 1969, the U.S. Army Corps of Engineers purchased easements from landowners on 29,000 acres within the reservoir’s boundary.
The easements give the Corps the right to occasionally flood the easement lands when the dam is forced to hold back water upstream in the reservoir to prevent flooding downstream. Because of heavier-than-expected rainfall during the 1970s and 1980s, the easement lands were flooded more often than the Corps had estimated. In 1985, Congress authorized a buyout program for easement landowners who were willing to sell their land to the Corps; however, few owners have been interested in selling, and their complaints about flooding have persisted. This report (1) determines whether the property within the Red Rock reservoir’s boundary has been inundated beyond the levels permitted by the easements; (2) considers whether compensation for the easements should be renegotiated with landowners; and (3) reports on actions that the Corps has taken to implement the buyout program. Water Markets: Increasing Federal Revenues Through Water Transfers. GAO/RCED-94-164. September 21, 1994. ABSTRACT: Most water in the arid western United States delivered through federal projects is used for agriculture, but the demand for water for urban, recreational, and environmental uses is growing. The federal government plays a role in water management in the arid West mainly through water resource projects. Water transfer, in which rights to use water are bought and sold, is seen by many resource economists as a way to reallocate scarce water to new users by allowing those who place the highest economic value on it to purchase it. Those who want more water—such as municipalities—often are willing to pay considerably higher prices for it than the current users, and irrigators who receive subsidized water from federal projects may want to transfer this water to a municipality at a profit. At the same time, these transactions may allow the Bureau of Reclamation to share in the profits. This report examines (1) whether water transfers will boost revenues, (2) how the Bureau could increase its revenues from transferred water, and (3) what issues the Bureau should consider in setting prices for transferred water. Water and Waste Disposal. GAO/RCED-94-229R. June 6, 1994. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Department of Agriculture’s (USDA) Water and Waste Disposal Grant Program, focusing on: (1) how different areas of the country benefit from the program; (2) the program’s matching funding requirements; and (3) how the program has been implemented for Mexican border states and rural Alaskan villages. GAO noted that: (1) most states benefit from the program, but some states use the program more actively than others; (2) Rural Development Administration (RDA) grants generally may not exceed 75 percent of a project’s costs, and rural communities must fund the remaining costs; (3) RDA may fund up to 100 percent of project costs in communities whose residents face significant health risks; (4) RDA has obligated about $25 million of the $50 million in grant funds that Congress specifically targeted for the border states for fiscal years (FY) 1993 and 1994; and (5) RDA anticipates that it will soon begin obligating portions of the FY 1994 grant funds targeted for rural Alaskan villages. Water Transfers: More Efficient Water Use Possible, If Problems Are Addressed. GAO/RCED-94-35. May 23, 1994. ABSTRACT: Debates over how water from western federal water projects should be used have become more heated in recent years. Farmers use more than 80 percent of the water withdrawn for use in the West.
Environmental problems, such as selenium contamination and salinity, have been linked to agricultural irrigation. Moreover, as urban populations, tourism, and environmental awareness continue to grow, the demand for water increases for cities, recreation, and fish and wildlife habitats. Building dams to meet new demand is often not an option because of their high price tags and harmful environmental effects. Advocated by resource economists and others, water markets, in which rights to use water are bought and sold, would allocate water to its highest economic use by allowing those who place the highest economic value on it to buy it. This report examines (1) the costs and benefits of water transfers; (2) how water markets might be structured to address the impacts on parties outside of transfers; (3) the legal, institutional, and other issues that would need to be addressed to implement a federal water market; and (4) how transfers of water from federal projects could be coordinated with state law. Water Subsidies: Impact of Higher Irrigation Rates on Central Valley Project Farmers. GAO/RCED-94-8. April 19, 1994. ABSTRACT: Farmers have received federally subsidized water from the Interior Department’s Central Valley Project for up to 40 years under fixed-rate water service contracts. The fixed rates, however, no longer function as intended; they do not cover Interior’s operating costs and have not been enough to repay virtually any of the $1 billion in construction costs owed. Moreover, environmental and water use problems have been linked to the irrigation carried out under these contracts. Studies by agricultural economists suggest that higher water prices would increase irrigation efficiency and conservation, thereby reducing environmental degradation caused by irrigation and freeing up water now used for irrigation for other uses. This report (1) estimates the impact on farm profits of the higher irrigation rates mandated under 1992 legislation and of further rate increases under various scenarios, (2) estimates the financial benefits to the federal government of increasing the irrigation rates, and (3) determines how farmers can mitigate the impact of higher rates. Central Utah Project Cost Allocations. GAO/RCED-94-65R. January 25, 1994. BACKGROUND: Pursuant to a legislative requirement, GAO reviewed the development of cost accounting standards for the Department of the Interior to follow in allocating costs for the Central Utah Project (CUP). GAO found that: (1) the cost accounting standards developed by the Cost Accounting Standards Board provide a sound basis for allocating CUP costs and additional standards are not needed; (2) its audit of the CUP cost allocation will determine whether Interior’s cost allocation methodology is based on the Board’s standards and whether Interior properly applies the cost allocation methodology; and (3) the Bureau of Reclamation’s experience with the Central Valley Project’s cost allocation should be helpful to Interior in allocating CUP costs. Bureau of Reclamation: Information on the Federal Financial Commitment and Repayment Status of the Central Arizona Project. GAO/T-RCED-94-92. December 10, 1993. ABSTRACT: It is estimated that construction of the Central Arizona Project—a massive water project designed to pump water from the Colorado River as far south as Tucson—will be completed in 1999 at a cost of $4.7 billion, and the federal share could climb from $1.7 billion to upwards of $2.8 billion. 
The project is expected to provide Arizona residents with flood control, fish and wildlife enhancement, recreation, commercial power, groundwater conservation, and drinking water. This testimony discusses (1) the total financial commitment of the federal government to build the system and (2) the Central Arizona Water Conservation District’s ability to fulfill its obligation to repay allocated project costs. Water Resources: Corps’ Management of Reservoirs in the Missouri River Basin. GAO/T-RCED-94-43. October 11, 1993. ABSTRACT: This testimony focuses on the U.S. Army Corps of Engineers’ management of the Missouri River reservoir system under drought conditions during 1988-90. GAO concludes that the Corps acted consistently with its drought contingency plan in releasing water from the reservoir system during the three-year period and that all of the purposes served by the reservoirs, except flood control, were harmed. The plan, however, does not reflect current economic conditions in the Missouri River Basin. Contrary to the Corps’ beliefs, federal statutes do not require the Corps to give recreation a lower priority than other project purposes—flood control, navigation, irrigation, and hydroelectric power—in deciding on water releases. Congress should consider legislation that would require the Corps to set priorities for operating its reservoir projects on the basis of the economic, environmental, social, and other benefits of all authorized purposes. Water Resources: Factors That Lead to Successful Cost Sharing in Corps Projects. GAO/RCED-93-114. August 12, 1993. ABSTRACT: The U.S. Army Corps of Engineers is required to develop a cost-sharing partnership with local sponsors of water projects that provide flood control, water supply, hydroelectric power, and recreation. The sponsors are generally local and state governments or other government groups, such as flood control districts or port authorities. GAO surveyed sponsors and found that the following three factors contributed most significantly to a successful relationship: (1) good communications between the Corps and the sponsor, (2) the sponsor’s significant involvement in decisions and activities, and (3) the Corps’ responsiveness to the sponsor’s concerns about cost-sharing agreements. Sponsors were concerned about their inability to pay their share of study or project costs. The inability to pay generally related to flood control/damage projects in the Dallas and Chicago regions. The sponsors’ other main concern involved changes in the cost-sharing agreements at different Corps review levels. Clean Water Act: Private Property Takings Claims as a Result of the Section 404 Program. GAO/RCED-93-176FS. August 11, 1993. ABSTRACT: This fact sheet identifies private property takings claims that have been filed with the U.S. Court of Federal Claims as a result of regulatory actions taken under the Clean Water Act. GAO also provides information on the actual and potential liability of the U.S. government—including the amounts of the claims, interest, and attorneys’ fees and other litigation costs—and on federal agencies’ costs in litigating these claims. Water Resources: Federal Efforts to Monitor and Coordinate Responses to Drought. GAO/RCED-93-117. June 8, 1993. ABSTRACT: Collecting and reporting data on drought conditions in the United States is a collaborative, multilevel effort led by the federal government. State and local governments make important contributions of work and funding to this effort.
Federal, state, and other users are generally satisfied with the data on drought that are collected and distributed by federal agencies. No permanent federal organization is responsible for monitoring drought conditions and planning the government’s response. Instead, individual agencies carry out these activities and arrange to cooperate with one another. When drought has been severe or has had widespread geographic impact, temporary interagency committees have been set up to coordinate the response. Because of the increasingly severe effects that periodic droughts have had on the economy, however, temporary committees may no longer be able to handle the long-term planning needed for such droughts, promptly resolve policy differences among federal agencies, or coordinate the federal response to drought. Water Resources: Highfield Water Company Should Not Receive Compensation From the U.S. Army. GAO/RCED-93-49. May 10, 1993. ABSTRACT: The Highfield Water Company has claimed that it should receive between $17.7 million and $52 million from the U.S. Army as compensation for lost property and damages. Highfield argues that Fort Ritchie, located in Maryland, excessively pumped the aquifer during periods of drought between 1974 and 1978, thereby depriving the company of water it needed to meet its customers’ needs. As a result, the Maryland Public Service Commission revoked the company’s right to exercise its franchise to sell water to its customers. Highfield is appealing for legislative relief, asserting that it has never received a fair hearing on the merits of its case since court actions were dismissed on technical grounds. After reviewing the case, GAO concludes that Highfield was not damaged by the Fort’s reasonable use of the groundwater and that Highfield neither owned nor had superior rights to the water. As a result, GAO does not believe that Highfield is entitled to any compensation from the Army. Water Resources: The Corps of Engineers’ Dredging Program for Small Business Firms. GAO/RCED-92-239BR. August 3, 1992. ABSTRACT: This briefing report looks at whether the U.S. Army Corps of Engineers program to set aside or restrict part of its dredging contracts for small businesses significantly boosts federal costs because there is less competition for restricted-bid contracts. GAO evaluated existing studies on program costs and competition (measured by the number of bids per contract) done on behalf of large and small dredging firms. GAO also did a separate analysis of dredging contracts the Corps awarded during a recent 31-month period. Water Resources: Future Needs for Confining Contaminated Sediment in the Great Lakes Region. GAO/RCED-92-89. July 17, 1992. ABSTRACT: The U.S. Army Corps of Engineers has built 26 confined disposal facilities since 1974 to hold bottom sediment dredged from harbors, channels, and other waterways in the Great Lakes area. This mud often contains contaminants, such as chemicals from industry or agricultural runoff, that require special handling. Six of the facilities are already filled to capacity, and 18 others are expected to be filled by 2006. Twelve more facilities are planned, and more sites will be needed in the foreseeable future. The Corps is now deciding whether it or state and local governments should pay the construction costs. Construction of more facilities is at a virtual standstill. 
Because of concerns from communities and environmental groups, finding suitable disposal sites for contaminated dredged material has been difficult and time-consuming. As a result, the Corps has deferred some dredging, and commercial and recreational navigation in some areas has been harmed. Bureau of Reclamation: Central Valley Project Cost Allocation Overdue and New Method Needed. GAO/RCED-92-74. March 31, 1992. ABSTRACT: This report examines how the Bureau of Reclamation allocates construction costs for the Central Valley Project. Located in California’s Central Valley Basin, the project is the Bureau’s largest water resource project, with authorized construction costs totaling more than $6.5 billion as of September 1990. While primarily devoted to irrigation, the project also provides flood control, hydroelectric power, and recreation uses. GAO (1) discusses the status of the Bureau’s effort to reallocate project costs in accordance with a 1986 congressional mandate, (2) describes the Bureau’s current cost allocation method, and (3) discusses alternative cost allocation methods. Water Resources: Corps’ Management of Ongoing Drought in the Missouri River Basin. GAO/RCED-92-4. January 27, 1992. ABSTRACT: The Missouri River basin, encompassing all of Nebraska and part of nine other North Central states, is experiencing its most severe drought since the 1930s. GAO reviewed the U.S. Army Corps of Engineers’ management of the Missouri River reservoir system under drought conditions in 1988, 1989, and 1990. Acting consistently with its drought contingency plan, the Corps reduced winter release rates, shortened navigation seasons on the Missouri River, and reduced water levels in the navigation channel. As a result, 17 percent less water was released during the three-year period than would have been released under normal operating conditions. The drought and the Corps’ response to it harmed all reservoir purposes save one—flood control. The Corps’ contingency plan, however, relies on assumptions, made in 1944, about the amount of water needed for navigation and irrigation that are no longer valid, and the plan does not reflect the current economic conditions in the Missouri River basin. The Corps’ ongoing study of its operation of the reservoir system is expected to address these issues. The Corps insists that, unless Congress approves changes to existing operating priorities, it must continue to give recreation a lower operating priority than other authorized purposes even if this lower priority results in decreased system benefits. GAO sees no appropriate basis for the Corps’ view. A lawsuit filed in federal court by three upper basin states questions the legality of the Corps’ position on recreation. Water Resources: Local Sponsors’ Views on Corps’ Implementation of Project Cost Sharing. GAO/RCED-92-11FS. November 15, 1991. ABSTRACT: The Water Resources Development Act of 1986 requires the U.S. Army Corps of Engineers to develop a cost-sharing partnership with local sponsors whose active participation and financial commitment are essential to accomplish water resource development projects. The sponsors generally are local or state governments or other public entities, like flood control districts or port authorities, that ask the Corps for help.
This fact sheet presents the views of local sponsors on the Corps’ implementation of cost sharing under the act, including the sponsors’ views on their relationship with the Corps and the impact of cost sharing on accomplishing proposed projects, such as flood control or navigation projects. Reclamation Law: Changes Needed Before Water Service Contracts Are Renewed. GAO/T-RCED-92-13. October 29, 1991. ABSTRACT: This testimony, which is based on an earlier report (GAO/RCED-91-175, Aug. 22, 1991), addresses changes needed before renewal of long-term water service contracts in the Bureau of Reclamation’s Central Valley Project in California. Significant environmental and water use problems are associated with irrigation practices carried out under existing water service contracts. These irrigation practices have contributed to selenium poisoning and increasing salinity in the San Joaquin Valley; some farmers use Central Valley Project water to produce crops that are also eligible for subsidies under Agriculture Department commodity programs; and with 85 percent of the Central Valley Project water dedicated to irrigation under the contracts, the water supply available for wildlife habitat is inadequate. GAO is concerned that renewing the Central Valley Project’s 238 contracts for the same quantities of water for up to 40 years could severely hamper efforts to address existing and future problems. GAO recommends that Congress place a moratorium on all Central Valley Project contract renewals, while temporarily extending existing contracts, and amend legislation to explicitly allow contract renewals for lesser quantities of water and shorter periods of time. GAO also recommends that the Department of the Interior fully analyze the impact of contract renewal and alternative contract provisions. Water Subsidies: Views on Proposed Reclamation Reform Legislation. GAO/T-RCED-91-90. September 12, 1991. BACKGROUND: GAO discussed four legislative proposals to amend the Reclamation Reform Act of 1982, which permits multiple landholdings to continue to be operated collectively as one large farm while individually qualifying for federally subsidized water. GAO noted that: (1) if the farm operations in the five case studies remain constant, each of the proposals could limit federally subsidized water to some or all of the operations; (2) under the House bill, three of the five large farm operations in the case studies could continue to receive subsidized water on land in excess of the 960-acre limit; (3) under the Senate bill, four of the five large farm operations would be able to continue to receive subsidized water on more than 960 acres; (4) under the Bureau of Reclamation’s draft bill, three of the large farm operations could continue to receive subsidized water on land in excess of the 960-acre limit; and (5) the draft bill of the Subcommittee on Water, Power, and Off-Shore Energy Resources could stop the flow of federally subsidized water to more than 960 acres in all five of the case studies. GAO believes that, since farmers have ample financial incentive to reorganize their operations in response to any new reclamation legislation enacted, some farmers are likely to reorganize again to be eligible to receive additional federally subsidized water. Reclamation Law: Changes Needed Before Water Service Contracts Are Renewed. GAO/RCED-91-175. August 22, 1991.
BACKGROUND: Pursuant to a congressional request, GAO: (1) identified environmental and water use problems associated with the irrigation practices carried out under the Bureau of Reclamation’s water service contracts in the Central Valley Project (CVP); and (2) determined whether contract renewals would allow such problems to continue. FINDINGS: GAO found that: (1) agricultural drainage has degraded the quality of the San Joaquin Valley’s water supply and soil, poisoning wildlife and threatening agricultural productivity with selenium accumulation and increasing salinity; (2) since most CVP water is dedicated to irrigation through water service contracts, the supply of water available for wildlife habitat is not adequate; (3) some farmers use CVP water to produce crops that are also eligible for subsidies under the U.S. Department of Agriculture’s (USDA) commodity programs, causing Congress to express concern over the apparent inconsistency between the Bureau’s programs for increasing agricultural production through inexpensive subsidized water and USDA programs for raising prices while limiting production; (4) increased irrigation efficiency and conservation could reduce environmental degradation caused by agricultural runoff and drainage, while freeing water currently diverted for irrigation for other uses, but the low cost of federal irrigation water is a disincentive to increased irrigation efficiency; (5) the Department of the Interior believes that, since long-term renewal of contracts for the same quantities of water is nondiscretionary, it is not required to change contract provisions as a result of environmental impact statements; and (6) continuing irrigation practices carried out under existing contract provisions compromise other national interests, such as environmental protection and wildlife conservation. Water Resources: Corps Lacks Authority for Water Supply Contracts. GAO/RCED-91-151. August 20, 1991. BACKGROUND: Pursuant to a legislative requirement, GAO examined whether the Army Corps of Engineers has the legislative authority to operate nine water reservoirs for the purposes for which they are being managed. FINDINGS: GAO found that: (1) with one exception, the Corps has the authority to operate the nine reservoirs for the purposes for which they are being managed; (2) in that exception, the Corps improperly cited the Water Supply Act of 1958 in reallocating storage capacity to municipal and industrial (M&I) water supply and entering into six long-term contracts to supply water to M&I users without expanding those reservoirs; (3) the authority under the Water Supply Act to supply water for M&I needs is limited to what may be accomplished through the construction or expansion of reservoirs, and the act does not provide authority to reallocate existing water storage capacity for M&I purposes at reservoirs previously constructed or modified; and (4) the Corps used the act to enter into 38 water supply contracts and was planning to enter into similar contracts in the future. Water Resources: Corps’ Management of 1990 Flooding in the Arkansas, Red, and White River Basins. GAO/RCED-91-172BR. August 1, 1991.
BACKGROUND: Pursuant to a congressional request, GAO examined the Army Corps of Engineers’ operation of its reservoirs in the Arkansas, Red, and White River basins during the May 1990 flooding, which caused severe damage in Arkansas, Texas, and Oklahoma. GAO sought to determine whether the Corps followed operating procedures in capturing and releasing water from nine reservoirs in the three basins before, during, and after the flood. FINDINGS: GAO found that: (1) the Corps generally operated the nine reservoirs in accordance with its operating procedures before, during, and after the May 1990 flooding; (2) there was no evidence that the Corps released water from six of the reservoirs contrary to its procedures; and (3) in two cases, the Corps released water contrary to its operating procedures and prolonged the flooding of rural lands, predominantly in Texas and Oklahoma. Water Resources: Bonneville’s Irrigation and Drainage System Is Not Economically Justified. GAO/RCED-91-73. January 31, 1991. BACKGROUND: Pursuant to a congressional request, GAO prepared a: (1) benefit-cost analysis of the Irrigation and Drainage (I&D) system of the Central Utah Project Bonneville Unit; and (2) financial impacts analysis measuring the federal cost of not completing the I&D system. FINDINGS: GAO found that: (1) the federal government spent or was contractually obligated for a total of about $320 million for the I&D system; (2) proposed legislation providing for the completion of the system, with some changes, would cost an additional $178 million in federal funds; (3) completion of the I&D system was not economically justified, since the U.S. economy would realize a benefit of only 28 cents for every dollar of project costs; and (4) the financial impacts on the federal government of not completing the I&D system ranged from savings of $133 million, if Congress decided to reallocate sunk costs, to an additional cost of $54 million, if Congress decided to forgive the repayment of sunk costs. Forest Service: Observations on the Emergency Salvage Sale Program. GAO/T-RCED-96-38. November 29, 1995. ABSTRACT: Salvage timber involves dead or dying trees, much of which would be marketable if harvested before it rots. In the past, many sales of salvage timber were delayed, altered, or withdrawn, and some of the timber deteriorated and became unsalable. In response to the millions of acres of salvage timber left by the devastating fires of 1994, Congress established an emergency salvage timber sale program, which was designed to increase the harvesting of salvage timber by easing environmental procedures and eliminating the administrative appeals process. GAO testified that it is too early to say to what extent the changes introduced by the program will boost sales because few sales have been made since the program became effective. Some salvage sale offerings have failed to receive bids mainly because the terms and conditions of the sales, such as the minimum bid, specific logging requirements, or the volume of timber being offered, were unacceptable to potential buyers. In addition, because of the short-term nature of the emergency salvage sale program, more comprehensive information on the universe of marketable salvage timber may help Congress as it assesses the program’s impact and whether additional resources are needed to support it. Forest Service: Distribution of Timber Sales Receipts Fiscal Years 1992-94. GAO/RCED-95-237FS. September 8, 1995.
ABSTRACT: Over the years, the Forest Service’s annual reports to Congress have indicated that receipts from the timber sales program exceeded the expense of preparing and administering the sales. However, these reports did not show the extent to which timber sales receipts were distributed to various Forest Service funds or accounts established for specific purposes, such as reforesting the land and making payments to the states in which the forests are located. GAO found that during fiscal years 1992-94, the Forest Service collected nearly $3 billion in timber sales receipts and distributed about $2.7 billion, or 90 percent, to various Forest Service funds or accounts for specific purposes. The Forest Service deposited the remaining receipts—about $300 million—in the General Fund of the Treasury. Outlays for preparing and administering timber sales totaled about $1.3 billion for the same period. Private Timberlands: Private Timber Harvests Not Likely to Replace Declining Federal Harvests. GAO/RCED-95-51. February 16, 1995. ABSTRACT: Timberlands in Washington state, Oregon, and California are owned by the federal government, state and local governments, and the forest products industry or other private parties. Timber harvest volumes from all these sources have decreased during the past five years. Most notable, however, is the drop on federal lands, mainly as a result of efforts to protect the habitats of threatened or endangered species. This report discusses (1) trend data on private timberland acreage and on volumes of timber harvested; (2) requirements for reforestation and the use of active timber management practices, such as fertilization or thinning, on private timberlands; (3) incentive programs to encourage private landowners to actively manage their timberlands and other factors that influence their land management decisions; and (4) federal tax provisions that affect timber management decisions, including the changes that occurred in the 1986 Tax Reform Act. Tongass Timber Reform Act: Implementation of the Act’s Contract Modification Requirements. GAO/RCED-95-2. January 31, 1995. ABSTRACT: In Alaska’s Tongass National Forest, two companies—the Ketchikan Pulp Company and the Alaska Pulp Corporation—have held 50-year contracts to cut timber. The Forest Service maintains that its existing policy provides consistent treatment of credits in contracts that timber harvesters are awarded for building harvest-related roads. GAO disagrees, believing that the policy gives Ketchikan Pulp a competitive advantage by allowing it to apply “ineffective” road credits for a much longer period than timber harvesters that must use short-term contracts. Through the end of fiscal year 1993, Ketchikan Pulp used road credits to pay for 73 percent of the timber harvested. Also, some streamside buffers did not meet the 100-foot minimum. The Forest Service has since taken steps to ensure that this requirement is met. GAO also found that the Forest Service was not following its policy of documenting the environmental effects of changes made to planned timber-harvest boundaries. Forest Service: Factors Affecting Timber Sales in Five National Forests. GAO/RCED-95-12. October 28, 1994. ABSTRACT: In recent years, debate about the future of the national forest system has focused on ensuring that timber harvests do not exceed the forests’ ability to replenish the available supply of timber.
An important component of managing forests on a sustained-yield basis is each forest’s “allowable sale quantity”—an estimate of the maximum volume of timber that can be sold from each forest over a 10-year period without impairing other uses of the forest, such as recreation or wildlife habitat. GAO reviewed the allowable sale quantities and the timber sales at five national forests—Deschutes and Mt. Hood in Oregon, Gifford Pinchot in Washington, Ouachita in Arkansas, and Chattahoochee-Oconee in Georgia. The Forest Service did not meet allowable sale quantities in the five forests for a variety of reasons, including (1) limitations in the data and estimating techniques on which the allowable sale quantities were originally based, (2) new forest management issues and changing priorities, and (3) rising or unanticipated costs associated with preparing timber sales and administering harvests. Although forest officials believed that the Service had used the best information available to develop the allowable sale quantities, the forests nevertheless failed to meet these levels. As a result, timber sales for each of the five forests between fiscal years 1991 and 1993 were significantly below the average annual allowable sale quantity. Forest Service: Management of Reforestation Program Has Improved, but Problems Continue. GAO/RCED-94-257. September 15, 1994. ABSTRACT: In the 1930 Knutson-Vandenberg Act, Congress attempted to sustain the nation’s forests by establishing a fund—today totaling more than $800 million—to reforest, improve timber stands, and improve other renewable resources in timber sale areas that have been harvested. The Forest Service annually collects about $230 million from timber purchasers for reforestation and other activities and deposits it in the fund. In response to congressional concerns over the adequacy of Forest Service control of these funds and their use for appropriate projects, GAO reviewed the Forest Service’s management of the fund. This report describes (1) how the Forest Service plans, implements, and manages Knutson-Vandenberg projects and (2) what changes the Forest Service has made since 1990 in response to previous internal and Office of Inspector General reviews of the program and what additional changes may be necessary. Forestry Functions: Unresolved Issues Affect Forest Service and BLM Organizations in Western Oregon. GAO/RCED-94-124. May 17, 1994. ABSTRACT: The Bureau of Land Management (BLM), part of the Interior Department, and the Forest Service, part of the Agriculture Department, together manage 7.2 million acres of land in western Oregon. Both agencies manage portions of these lands for timber production and have parallel forestry organizations in several locations. This report examines the possibility of the two agencies consolidating their forestry duties. GAO summarizes these agencies’ past and ongoing reorganization efforts and the potential legal and other constraints affecting any consolidation. Forest Service: Status of Efforts to Achieve Cost Efficiency. GAO/RCED-94-185FS. April 26, 1994. ABSTRACT: Congress requested that the Forest Service prepare a cost study for its timber program that would analyze how to achieve an annual cost reduction of at least five percent. The Forest Service’s April 1993 report on timber cost efficiency discussed such areas as the overall timber program, the program’s organization, the Timber Sale Program Information Reporting System, financial management, and attempts to monitor cost efficiency.
In the year since the study was issued, the Forest Service has made progress toward completing 21 of 23 action items targeted for completion by October 1993 or October 1994. The results of the regional offices’ cost efficiency efforts have been mixed. In addition, the Forest Service has undertaken other, nontimber initiatives, such as reorganizing and downsizing, that could improve the agency’s overall efficiency. Overall, from fiscal year 1992 to fiscal year 1993, the Forest Service reduced its timber program expenses nationally by about 7.2 percent. Total annual timber program expenses declined in six of the nine regions during this period. However, six of the nine regions’ timber sales programs showed a net loss when annual expenses were deducted from revenues for fiscal year 1993. Timber Sale Contract Defaults: Forest Service Needs to Strengthen Its Performance Bond and Contract Provisions. GAO/RCED-94-5. October 28, 1993. ABSTRACT: The Forest Service has assessed damages totaling about $302 million against purchasers who defaulted on timber sale contracts between January 1982 and March 1993. The Forest Service has collected about $42 million, or 14 percent, of this amount and has determined that about $136 million is uncollectible for a variety of reasons, such as the bankruptcy or the death of the purchaser. Continuing litigation has been the main reason for the delays in the final disposition of the remaining $124 million, most of which is owed by 14 timber purchasers. When many of these defaulted contracts were awarded, the Forest Service had few safeguards in place to protect the government against losses from defaults. Since then, the Forest Service has begun requiring purchasers to make down payments and has raised the dollar limit on the performance bond that purchasers must provide. In addition, the Forest Service is considering retaining the down payments until the contracts are substantially complete and clarifying the liability provisions in a new performance bond—measures that GAO strongly supports. Cancer Treatment: Actions Taken to More Fully Utilize the Bark of Pacific Yews on Federal Land. GAO/RCED-92-231. August 31, 1992. ABSTRACT: The Pacific yew, source of the anticancer drug Taxol, grows primarily in Pacific Northwest forests managed by the U.S. government. In fiscal year 1991, neither the Forest Service nor the Bureau of Land Management had effective timber sale administrative procedures or utilization standards. As a result, some usable yew bark went uncollected that year. In fiscal year 1992, both agencies, in conjunction with Bristol-Myers Squibb Co. and its yew bark collectors, have worked to ensure more complete utilization of yew bark. If properly implemented, the agencies’ fiscal year 1992 program plans and associated operational procedures should help ensure that more of this limited and valuable resource is recovered. Forest Service Timber Sales Program: Questionable Need for Contract Term Extensions and Status of Efforts to Reduce Costs. GAO/T-RCED-92-58. April 28, 1992. ABSTRACT: This testimony centers on two issues concerning the timber sales program run by the Forest Service. GAO discusses (1) a 1-year extension in the length of timber sales contracts in response to dramatic reductions in the prices for wood products and (2) the Forest Service’s response to a fiscal year 1991 directive to reduce costs in its timber sales program. Comments on Below-Cost Timber Bills. GAO/RCED-92-160R. April 1, 1992.
BACKGROUND: Pursuant to a congressional request, GAO commented on whether two bills regarding below-cost timber sales on national forests addressed three previous GAO recommendations regarding such sales. GAO noted that both of the bills addressed GAO recommendations that the Forest Service: (1) expand the proposed below-cost sales policy beyond forests as a whole to individual sales; and (2) define the minimum rate for timber sales bids as the cost of timber sale preparation and administration and ensure that the sale process recovers those costs. GAO also noted that neither of the bills addressed the recommendation to amend the timber sale process to include an initial below-cost determination during the sale preparation process in order to avoid unnecessary costs. Cancer Treatment: Efforts to More Fully Utilize the Pacific Yew’s Bark. GAO/T-RCED-92-36. March 4, 1992. ABSTRACT: The bark of the Pacific yew is the only approved source of Taxol, an anticancer drug that has been shown effective in treating ovarian cancer. The limited supply of Pacific yew bark coupled with existing and potential demand means that the bark needs to be as fully utilized as possible. For a variety of reasons, however, not all the bark that could have been collected on federal lands in 1991 was collected. Both responsible federal land-managing agencies and private industry are taking or planning actions to more fully use the bark, and increased utilization should be seen in 1992. These actions appear to be consistent with provisions of the Pacific Yew Act of 1991 intended to achieve full utilization of the bark. Forest Service: The Flathead National Forest Cannot Meet Its Timber Goal. GAO/RCED-91-124. May 10, 1991. BACKGROUND: Pursuant to a congressional request, GAO collected information on planned and actual amounts of timber offered for sale from the Flathead National Forest in northwestern Montana. FINDINGS: GAO found that: (1) the Forest Service fell short of its Flathead timber-offering goal for the last 5 years by about 37 percent; (2) the goal for timber sales in a forest plan may not exceed the allowable sale quantity (ASQ), the maximum amount that the forest can produce in perpetuity after giving balanced consideration to other multiple uses in accordance with environmental standards; (3) the Flathead forest plan specified an ASQ of 500 million board feet (MMBF) for the first 5 years; (4) the Forest Service experienced difficulty in offering many proposed sales due to environmental organizations’ concern over their effects on wildlife and water quality; (5) even if planned sales had met all environmental standards, the forest only had sufficient funding to prepare 443 MMBF; (6) the Flathead’s continued inability to meet its original, unattainable ASQ-based goal will contribute to production cutbacks and mill closures as early as fiscal year 1990; and (7) Flathead officials have no immediate plans to revise the present 10-year forest plan ASQ. First Audit of the Forest Service’s Financial Statements. GAO/T-AFMD-91-4. April 25, 1991. BACKGROUND: GAO discussed its audit of the Forest Service’s financial statements for fiscal year (FY) 1988, focusing on whether: (1) there were weaknesses in internal controls; (2) accounting systems adequately accounted for resources received and spent; (3) internal management adequately reported problems; and (4) financial reporting provided accurate and reliable information regarding the efficiency and effectiveness of operations and future resource needs.
GAO noted that: (1) the Service’s inaccurate financial information made it difficult to determine the true value of its property; (2) the Service reported two violations of the Antideficiency Act, involving its overobligation of National Forest System budgetary resources by $4,348,805 and its overobligation of its FY 1987 Job Corps allotment by $582,550; (3) the Service’s timber program accounting system included inaccurate values for timber and related facilities, but the Service subsequently initiated actions to ensure that the system accurately recognizes costs in accordance with generally accepted accounting principles; and (4) the Service’s external reports did not include information that accurately reflected the results of its operations or its financial position. Forest Service Needs to Improve Efforts to Reduce Below-Cost Timber Sales. GAO/T-RCED-91-43. April 25, 1991. BACKGROUND: GAO discussed the Forest Service’s below-cost timber sales, focusing on: (1) timber sales that did not recover their associated costs; and (2) Service efforts to reduce below-cost timber sales. GAO noted that: (1) fiscal year 1990 below-cost timber sales resulted in unrecovered timber-sale preparation and administration expenses of at least $35.6 million; (2) unrecovered costs ranged from $14.9 million for large sales and $20.7 million for small sales when only preparation and administration costs were considered, to $68.4 million for large sales and $43.8 million for small sales when all operating costs plus payments to states were calculated; (3) sale preparation and administration costs at the 122 national forests ranged from $15 to $348 per thousand board feet of harvested timber; and (4) the Service issued a draft policy to reduce losses from below-cost timber sales. In addition, GAO noted that the Service needed to take additional actions to reduce below-cost timber sales, such as: (1) extending consideration of below-cost sales to the individual sales level; (2) considering its costs when setting minimum rates for a timber sale; and (3) evaluating whether the benefits of a below-cost sale justify the unrecovered costs prior to incurring most preparation costs. Forest Service Needs to Improve Efforts to Protect the Government’s Financial Interests and Reduce Below-Cost Timber Sales. GAO/T-RCED-91-42. April 24, 1991. BACKGROUND: GAO discussed the Forest Service’s efforts to: (1) collect on defaulted timber sales contracts and reduce further defaults; and (2) reduce the number of below-cost timber sales. GAO noted that: (1) the Service collected about $35 million of the $302 million in damages that it assessed from defaulted contracts and was taking steps to improve its collection processes; (2) the Service’s key contracting measures were similar to other timber sellers’ measures, although the Service and one federal timber seller returned or credited down payments or deposits before contractors substantially completed the contracts; (3) such practices lessened the Service’s security in terms of access to funds in the event of a default; (4) in fiscal year 1990, the Service incurred timber sale preparation and administration expenses of $35.6 million that it could not recover as a result of below-cost timber sales; and (5) preparation and administration costs varied greatly by forest.
GAO also noted that the Service issued a draft policy aimed at reducing losses caused by below-cost timber sales, but the policy left gaps in a comprehensive approach, since the Service: (1) would not subject many below-cost sales to review; (2) did not consider costs when setting minimum prices for advertised timber sales; and (3) did not evaluate on a timely basis whether the benefits of a below-cost sale justified the unrecoverable cost. Better Reporting Needed on Reforestation and Timber Stand Improvement. GAO/T-RCED-91-31. April 16, 1991. BACKGROUND: GAO discussed the Forest Service’s reporting of its reforestation and timber stand improvement activities. GAO noted that: (1) the Service did not provide specific guidance to regional offices on identifying and reporting reforestation and timber stand improvement needs; (2) Service reports understated reforestation needs because the Service failed to report accurate information about areas requiring reforestation following forest fires or other natural disasters; (3) for fiscal year 1990, the Service reported that 1.2 million acres required reforestation or timber stand improvement; (4) the nine Service regions used several methods to identify and report reforestation needs resulting from forest fires or other natural disasters; (5) each Service region followed its own criteria for defining timber stand improvement needs; and (6) none of the regions certified and reported all reforestation and timber stand improvement achievements, making it difficult for Congress to accurately assess reforestation and timber stand improvement achievements. Tongass National Forest: Contractual Modification Requirements of the Tongass Timber Reform Act. GAO/RCED-91-133. March 28, 1991. BACKGROUND: Pursuant to a legislative requirement, GAO reviewed the Department of Agriculture’s compliance with the Tongass Timber Reform Act, focusing on the implementation of modifications to two long-term timber sale contracts to eliminate the contractors’ competitive advantage over independent short-term contractors. FINDINGS: GAO found that: (1) the Forest Service made extensive revisions to the two long-term timber sale contracts, generally by adopting and modifying provisions from independent short-term timber sale contracts to meet the act’s requirements; (2) all modifications to the long-term contracts, except purchaser road credits, complied with the act’s requirements; and (3) the modifications did not specify how the Service would perform environmental assessments or how large an area they would cover. GAO believes that: (1) although the contract modifications did not specify exactly how the Service would implement them, the modifications will require extensive additional effort on the part of the Service; and (2) the manner in which the Service implements the modifications will determine its compliance with the act’s requirements. Financial Audit: Forest Service’s Financial Statements for Fiscal Year 1988. GAO/AFMD-91-18. March 18, 1991. BACKGROUND: GAO examined the Forest Service’s financial statements for the fiscal year ended September 30, 1988.
FINDINGS: GAO found that: (1) the central accounting system did not integrate all separate accounting and reporting systems; (2) internal control policies and procedures within individual accounting and reporting systems failed to ensure that financial information was reliable and in compliance with prescribed accounting principles; (3) the general ledger was unable to produce accurate and timely financial reports, since the Service failed to integrate it with its accounting and reporting systems; (4) the Timber Sale Program Information Reporting System (TSPIRS) was not in accordance with generally accepted accounting principles; (5) the Service violated the Antideficiency Act by overobligating the National Forest System’s funds and the Job Corps account’s allotment; and (6) except as noted, the financial statements presented fairly the Service’s financial position and the results of its operations, in conformity with generally accepted accounting principles applied on a consistent basis. Forest Service: Better Reporting Needed on Reforestation and Timber Stand Improvement. GAO/RCED-91-71. March 15, 1991. BACKGROUND: Pursuant to a congressional request, GAO analyzed the reliability of Forest Service reporting on national forest land: (1) needing reforestation or timber stand improvement; and (2) where reforestation or timber stand improvement activities have been successful. FINDINGS: GAO found that: (1) Service reports understated reforestation needs and did not always identify all needs resulting from forest fires and other natural disasters; (2) from fiscal years 1985 to 1990, reported reforestation needs rose from about 822,000 acres to over 1.2 million acres, while reported timber stand improvement needs decreased from 1.5 million to 1.2 million acres; (3) the nine Service regions used several different methods to identify and report reforestation needs resulting from forest fires or other natural disasters; (4) each Service region followed its own criteria for defining timber stand improvement needs; and (5) none of the regions certified and reported all reforestation and timber stand improvement achievements, making it difficult for Congress to accurately assess the reforestation and timber stand improvement achievements. Vacant Positions in the Bureau of Indian Affairs. GAO/RCED-96-14R. October 6, 1995. BACKGROUND: Pursuant to a congressional request, GAO provided information on end-of-fiscal-year (FY) staffing at the Bureau of Indian Affairs (BIA), focusing on the: (1) occupations that have the highest number of vacant positions; and (2) number of vacant positions in law enforcement and social services. GAO noted that BIA: (1) had about 14,600 employees on board at the end of FY 1993 and 13,700 at the end of FY 1994; (2) had over 4,300 vacant positions as of June 1995; (3) had the most vacant positions in occupations that included laborer, secretary, teacher, forestry aid, and equipment operator; and (4) had 193 vacant positions in law enforcement and 76 vacant positions in social services. Indian Trust Fund Settlement Legislation. GAO/AIMD/OGC-95-237R. September 29, 1995. BACKGROUND: Pursuant to a congressional request, GAO provided draft legislation intended to help the Bureau of Indian Affairs reconcile Indian trust fund accounts. GAO noted that the draft legislation would require mediated negotiation and binding arbitration to resolve disputed account balances. Navajo-Hopi Relocation Program. GAO/RCED-95-155R. April 27, 1995.
BACKGROUND: Pursuant to a congressional request, GAO provided information on the relocation of the Navajo and Hopi Indian Tribes, focusing on: (1) whether the Navajo and Hopi Relocation Office certified more families for benefits than it relocated in 1994; and (2) the number of families that still had to be relocated or certified as of December 31, 1994. GAO noted that: (1) as of December 31, 1994, 4,507 families had applied for relocation assistance, 3,302 families were certified for relocation benefits, and 2,560 families had been relocated; (2) although the Relocation Office certified 160 families for benefits in 1994, it relocated only 102 of these families; (3) certifications outnumbered relocations mainly because previous ineligibility determinations were reversed; (4) 742 families certified for relocation had not been relocated as of December 31, 1994; (5) the 660 families originally found to be ineligible for assistance could have their original ineligibility determinations overturned during review; and (6) there may be as many as 100 additional Navajo families eligible for relocation assistance, but these families have never applied for such assistance. Indian Health Service: Improvements Needed in Credentialing Temporary Physicians. GAO/HEHS-95-46. April 21, 1995. ABSTRACT: Indian Health Service (IHS) facilities, which provide medical care to more than one million American Indians and Alaska Natives, supplement their staffs with temporary physicians. But weak policies have led IHS to unknowingly hire doctors who have been disciplined for such offenses as gross and repeated malpractice and unprofessional conduct. IHS does not explicitly require verifying all active and inactive state medical licenses that a temporary physician may have. Further, most IHS facilities that have contracts with companies that supply temporary physicians do not require the companies to inform IHS of the status of all medical licenses a physician may hold. In addition, IHS facilities do not have a formal system for sharing information on temporary physicians who have worked within the IHS medical system. This report also discusses what happens when requested medical services are delayed. Financial Management: Indian Trust Fund Accounts Cannot Be Fully Reconciled. GAO/T-AIMD-95-94. March 8, 1995. ABSTRACT: The Bureau of Indian Affairs (BIA) has spent four years and $16 million to reconcile Indian trust fund accounts. BIA is requesting $6.8 million for fiscal year 1996 to continue with the detailed reconciliation work. However, it is clear that even if more reconciliation work is done, BIA will not be able to guarantee that trust fund account balances are accurate. This is due to missing lease and accounting records; the inability to verify that all earned revenues were collected, posted to the correct account, and disbursed to the proper party; and the lack of accurate, up-to-date ownership information. Because the Indian trust fund accounts cannot be fully reconciled, Congress may want to consider legislating a settlement process in lieu of continuing to fund BIA’s reconciliation effort. Indian Trust Fund Testimony Q&As. GAO/AIMD-95-33R. December 2, 1994.
establish a G-Fund through the Department of the Treasury for Indian trust fund investments; (4) even if a G-Fund is established, Interior would still need to provide for both investment advisor and custodian services; (5) the American Indian Trust Fund Reform Act of 1994 establishes a mechanism for tribes to assume management and control of their trust funds; (6) the fractionated ownership group, the Individual Indian Money (IIM) Reconciliation working group, and the Land Records working group have been formed to resolve issues concerning IIM accounts; (7) the 6-Point Plan does not address a number of fundamental actions needed to resolve trust fund management problems; (8) the BIA streamlining plan lacks a mission statement and information on how BIA will transfer trust fund management to tribes; and (9) a spokesperson for Indian interests would ensure that Indian interests are fully articulated and considered before program and organizational changes. Indian Food Stamp Proposal. GAO/RCED-95-57R. November 30, 1994. BACKGROUND: Pursuant to a legislative requirement, GAO reviewed the feasibility of eliminating the conditions for tribal organizations to administer the Food Stamp Program on Indian reservations, focusing on: (1) whether Indian tribal organizations have expressed interest in administering the program; and (2) the barriers to and effects of tribal administration of the program. GAO noted that: (1) tribal officials are unaware of federal regulations governing the Food Stamp Program and have expressed little interest in assuming program administration; (2) the barriers that would prevent tribal administration include the statutory cost-sharing requirements and the potential penalties that could be imposed for administrative errors; (3) tribal officials believe that for them to assume program administration, they will need to revise the program’s infrastructure, obtain and train staff to administer the program, and modify certain program regulations to better meet the needs of Indian clients; (4) tribal administration of the Food Stamp Program will likely increase administrative costs, Indian enrollment, and benefit distribution; (5) the tribes and the states would incur additional costs for coordinating and sharing information on program participation in both tribally administered Food Stamp Programs and state-administered assistance programs; and (6) state officials believe that tribal administration of the Food Stamp Program would increase the burden on food stamp recipients participating in both tribally administered Food Stamp Programs and state-administered assistance programs. Financial Management: Focused Leadership and Comprehensive Planning Can Improve Interior’s Management of Indian Trust Funds. GAO/T-AIMD-94-195. September 26, 1994. ABSTRACT: The Interior Department has initiatives planned or under way to address some of the long-standing problems plaguing management of the Indian Trust Funds, and additional options exist that could help it make other needed improvements. However, Interior’s track record on past attempts at corrective action has not been good. Interior needs a comprehensive plan, focused leadership, and management commitment if it is to carry through on needed improvements. Financial Management: Focused Leadership and Comprehensive Planning Can Improve Interior’s Management of Indian Trust Funds. GAO/AIMD-94-185. September 22, 1994.
ABSTRACT: For years, the Interior Department has been unable to correct many serious financial management problems affecting the Indian trust funds, including (1) backlogs in land title and beneficial ownership determinations and recordkeeping, (2) inadequate management of natural resource assets to ensure that all earned revenues derived from natural resources are collected, (3) improper accounting practices, and (4) limited trust fund investment options. In addition to recent management initiatives to implement needed improvements, additional options would more fully address trust fund management problems. Further, more focused leadership, management commitment, and a comprehensive strategic plan would help Interior to effectively address all of its trust fund management responsibilities. Financial Management: Native American Trust Fund Management Reform Legislation. GAO/T-AIMD-94-174. August 11, 1994. programs and operations. Pending trust fund management reform legislation would enhance trust financial management reform initiatives under way at the Department of the Interior. Indian Health Service: Efforts to Recruit Health Care Professionals. GAO/HEHS-94-180FS. July 7, 1994. ABSTRACT: Indian Health Service (IHS) salary schedules for health care professionals are set on a national basis. Thus, the base pay these persons receive does not differ among IHS regions or areas. However, bonuses and allowances may be paid to doctors who agree to work in hard-to-fill locations, such as the Aberdeen Area. In many IHS areas, health care delivery has been hampered by problems in recruiting and retaining health care professionals, particularly doctors. The recruitment and retention of physicians in the Aberdeen Area have been affected by the relatively low pay; inadequate housing for medical personnel on the reservations; remoteness of the reservations; cultural differences between the doctors and their patients; and a general lack of amenities, such as shopping and dining. IHS’ Aberdeen Area has a higher vacancy rate for physicians than all but one other IHS area. The vacancy rate has been particularly high, more than 31 percent, at the Pine Ridge hospital. IHS is now looking at the benefits of using a physician pay structure similar to that used by the Department of Veterans Affairs. Indian Issues: Eastern Indian Land Claims and Their Resolution. GAO/RCED-94-157. June 22, 1994. ABSTRACT: In late 1992, the Golden Hill Paugussett Indian Tribe filed a lawsuit claiming damages and the right to have large tracts of land in Connecticut restored to the tribe. The lawsuit asserted that land historically belonging to the tribe had been transferred without the congressional approval required by the Indian Nonintercourse Act of 1790. In response to concern about Congress’ responsibilities under the act, the unpredictability of such claims, and the hardships they place on current landowners, this report (1) provides information on land claims made by eastern Indians during the past 20 years, (2) determines how these claims were resolved, and (3) identifies steps that Congress could take to mitigate the unpredictability and impact of these claims. BIA Reconciliation Recommendations. GAO/AIMD-94-138R. June 10, 1994. BACKGROUND: Pursuant to a Department of the Interior request, GAO answered questions on two recommendations concerning the reconciliation of Indian trust fund accounts.
GAO noted that: (1) its recommendation for an additional reconciliation procedure would ensure that earned revenues are billed and collected and would not delay the Bureau of Indian Affairs’ (BIA) current reconciliation process; (2) reconciliations can only be done in cases where BIA can locate the relevant lease documents; (3) the lack of complete documentation would affect projections of transaction error rates and reconciliation results; and (4) an accounts receivable system that indicates when payments are due would enhance BIA reconciliation efforts. BIA Trust Fund Reconciliations. GAO/AIMD-94-110R. April 25, 1994. BACKGROUND: Pursuant to a congressional request, GAO provided information on the status of the Bureau of Indian Affairs’ (BIA) efforts to correct long-standing trust fund management weaknesses. GAO noted that: (1) although BIA has made progress toward improving its Indian trust fund reconciliation and certification process, long-standing management problems have impeded BIA’s ability to maintain proper control and accountability over individual Indian trust accounts; (2) the Indian community has expressed concern over BIA trust fund accounting and the effectiveness of BIA investment practices; (3) BIA trust fund account balances lack credibility because BIA trust funds are not properly reconciled; (4) BIA continues to lack adequate strategic planning, staff and training, trust fund management policies and procedures, and accounting and reporting systems; and (5) BIA needs to develop a strategic Indian trust fund financial management plan and reconciliation procedures to ensure reliable accounting and reporting and to prevent and detect fund losses. Financial Management: Status of BIA’s Efforts to Reconcile Indian Trust Fund Accounts and Implement Management Improvements. GAO/T-AIMD-94-99. April 12, 1994. control and accountability over trust fund accounts. BIA has been criticized for erroneous allocations of receipts, erroneous payments to account holders, failure to consistently invest trust fund balances, and failure to pay interest. Tribes and individual Indians continue to express concern about the accuracy of BIA’s accounting for trust fund receipts and disbursements and the effectiveness of BIA’s investment practices. Past audits and GAO’s current work on BIA trust funds management continue to show (1) the lack of a strategic plan to guide trust fund management in the future, (2) inadequate staffing and training, (3) a lack of consistent, written trust fund management policies and procedures, and (4) inadequate systems for ensuring reliable accounting and reporting. GAO makes several recommendations aimed at ensuring better control and accountability over Indian trust funds. GAO continues to urge BIA to develop a strategic management plan for improving Indian trust fund operations. Juvenile Justice: Native American Pass-Through Grant Program. GAO/GGD-94-86FS. March 28, 1994. ABSTRACT: This fact sheet provides information on the Native American Pass-Through Grant Program, which provides federal grants to states and localities to help improve their juvenile justice systems. GAO (1) describes how the pass-through grant program works; (2) determines the funding amounts that the states and Indian tribes received under this program for fiscal years 1991 through 1993; and (3) provides examples of how some tribes used the funds. Job Training Partnership Act: Labor Title IV Initiatives Could Improve Relations With Native Americans. GAO/HEHS-94-67. March 4, 1994.
ABSTRACT: This report provides information on the Indian and Native American job training program authorized under title IV of the Job Training Partnership Act. The act targets a variety of economically disadvantaged groups, including Native Americans, to receive employment-seeking skills and job training services. GAO discusses (1) the history of the relationship between the Labor Department and the Native American community with respect to the program and (2) the extent to which the act’s funds are used to provide training services, one of four allowable cost categories under that program. GAO also examines disagreements between the Labor Department and Native Americans over proposed changes to program regulations and the reasonableness of such changes. BIA’s Trust Fund Loss Policy. GAO/AIMD-94-59R. January 14, 1994. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Bureau of Indian Affairs’ (BIA) draft policy and procedures to reconcile its Indian Trust Fund account losses. GAO found that: (1) the BIA draft policy does not address some of the weaknesses of the earlier draft; (2) the draft policy’s definition of losses does not include interest that is earned but not credited to the appropriate account; (3) the draft policy does not establish the steps necessary to detect, prevent, document, and resolve trust fund losses, since BIA trust fund systems do not have a mechanism to identify losses; (4) BIA needs to explain the appropriate notification and loss calculation documentation necessary for reporting losses and make its time frames for notification consistent and clear; (5) the draft policy incorrectly states that loss of interest on a certain type of account is not an obligation of the United States; (6) the draft policy lacks procedures for account holders to respond to and comment on BIA decisions; (7) BIA should clarify the draft policy’s language regarding the availability of appropriated funds for reimbursing losses and the reasons for transferring funds between accounts; and (8) BIA should change its quarterly and annual reporting of estimated losses to coincide with other significant accounting-cycle benchmarks and reports. Financial Management: BIA’s Management of the Indian Trust Funds. GAO/T-AIMD-93-4. September 27, 1993. ABSTRACT: Since April 1991, GAO has testified six times before Congress on the Bureau of Indian Affairs’ (BIA) management of the Indian trust funds and its efforts to reconcile and audit the trust fund accounts. BIA manages about $2 billion in tribal money that has accumulated from payments of claims, oil and gas royalties, land use agreements, and investment income. Over the years, countless audit reports and internal studies have cited a litany of serious problems in BIA’s oversight of these accounts. BIA’s record has been so poor, in fact, that the Office of Management and Budget has placed trust fund accounting on its high-risk list of government programs most vulnerable to waste, fraud, and abuse. This testimony discusses (1) the status of BIA’s efforts to overcome its past problems; (2) problems that still need to be addressed; and (3) provisions in H.R. 1846, the Native American Trust Fund Accounting and Management Reform Act of 1993, that can help BIA resolve some of these matters. Financial Management: Creation of Bureau of Indian Affairs’ Trust Fund Special Projects Team. GAO/AIMD-93-74. September 21, 1993. 
ABSTRACT: In November 1992, the Bureau of Indian Affairs (BIA) created a Special Projects Team to oversee trust fund management initiatives, including management of the ongoing trust fund account reconciliation project. The team was intended to be temporary, lasting only until the reconciliation project and other trust fund improvements were completed—possibly as long as eight years. This report examines whether BIA, in creating the team, (1) followed Interior Department guidelines, (2) notified Congress and received its approval before transferring money and staff to the team, and (3) submitted reorganization proposals to the relevant Advisory Task Force on BIA Reorganization for consideration. GAO also identifies the officials responsible for creating the team and their present jobs, as well as Interior Department and BIA efforts to investigate the circumstances surrounding the team’s creation. Financial Management: Status of BIA’s Efforts to Resolve Long-Standing Trust Fund Management Problems. GAO/T-AFMD-93-8. June 22, 1993. ABSTRACT: GAO has testified repeatedly on problems with the Bureau of Indian Affairs’ (BIA) management of the Indian trust fund, which includes billions of dollars earned from claims, oil and gas royalties, land use agreements, and investment income. Overall, the Bureau has failed to ensure that proper control and accountability are maintained over each trust fund account. The Bureau’s record has been so poor, in fact, that the Office of Management and Budget has placed trust fund accounting on its high-risk list. This testimony discusses Bureau actions to correct past problems; problems that still need to be addressed; and GAO’s views on S. 925, the Native American Trust Fund Accounting and Management Reform Act of 1993, which mandates many of the improvements spelled out in the Bureau’s own audits and contractor studies. BIA Appropriation Language. GAO/AFMD-93-84R. June 4, 1993. parties to examine and evaluate all pertinent account information and reconcile all claims arising from BIA management of accounts, few audits and reconciliations have been completed; (2) the government needs to fulfill its fiduciary responsibilities and provide account holders with a full accounting regardless of the alternative used to reconcile account balances; and (3) until Interior finds a mutually acceptable basis for determining account balances and associated losses, its proposal for deleting the provision should be rejected as premature. Indian Health Service: Basic Services Mostly Available; Substance Abuse Problems Need Attention. GAO/HRD-93-48. April 9, 1993. ABSTRACT: The five Indian Health Service area offices GAO visited—Aberdeen, Alaska, California, Navajo, and Portland—differed greatly in the way that they delivered health care services. Nonetheless, the areas reported generally similar levels in the availability of basic clinical services. The services most available were treatment services, such as routine prenatal care, and diagnostic services, such as biopsies for cancer diagnoses. Almost all patients seeking such services were able to receive them. Preventive care, such as diabetes education and dental care, was comparatively less available. Service unit officials generally named alcohol and substance abuse services as their greatest unmet health need. Despite recent increases in Indian Health Service funding for alcohol and substance abuse treatment services, the gap between the demand for and availability of services persists. 
In addition, the Indian Health Service lacks data on alcoholism rates among Native Americans and the effectiveness of current prevention and treatment programs. Tribal Management of Mission Valley Power. GAO/RCED-92-282R. September 18, 1992. BACKGROUND: Pursuant to a congressional request, GAO commented on two contracts between residents of the Flathead Reservation in Montana and the Bureau of Indian Affairs for the operation and management of the Mission Valley Power utility. GAO noted that: (1) to meet the contract requirements, the utility provided special personnel, made improvements in the system, gathered quantitative data on electrical power consumption, and developed a long-range plan for electrical consumption; (2) the utility took many steps to comply with federal environmental and safety standards; and (3) nine modifications were made to the 1988 contract and one modification was made to the 1991 contract. Financial Management: Status of BIA’s Efforts to Resolve Long-Standing Trust Fund Management Problems. GAO/T-AFMD-92-16. August 12, 1992. ABSTRACT: This testimony focuses on management of the Indian Trust Funds by the Bureau of Indian Affairs (BIA). GAO discusses (1) some of the long-standing weaknesses that have plagued BIA’s management of the trust funds; (2) the status of BIA efforts to reconcile the trust fund accounts, including the problems that have been identified and alternatives; and (3) the status of BIA efforts to develop a comprehensive strategic plan for trust fund financial management improvement, which includes implementing the Chief Financial Officers Act of 1990. Financial Management: Problems Affecting BIA Trust Fund Financial Management. GAO/T-AFMD-92-12. July 2, 1992. BACKGROUND: GAO discussed the Bureau of Indian Affairs’ (BIA) management of the Indian Trust Funds. GAO noted that: (1) BIA has experienced inadequate controls and accountability over many of its trust fund accounts, and the Office of Management and Budget has placed trust fund accounting on its high risk list; (2) although BIA is dependent on accurate and complete land ownership records to properly distribute revenues, audits have shown continued problems with those land records; (3) fractionated interests have affected BIA’s maintenance of land ownership records and trust fund accounting, primarily because BIA must account for numerous small transactions; (4) BIA’s ability to properly account for trust fund monies is affected by the processes and procedures used by the Minerals Management Service (MMS) to collect, report on, and distribute Indian oil and gas royalties; and (5) BIA has difficulty using the oil and gas revenue collection and distribution data it receives from MMS to ensure that revenue is credited to the proper accounts, and has developed a computer program to enable it to better analyze this information. GAO believes that, if BIA is to effectively manage the Indian trust funds, it will need to address the problems that impede accurate accounting, including factors outside of BIA’s control that affect account maintenance and that BIA cannot resolve by itself. Indian Issues: GAO’s Analysis of Land Ownership at 12 Reservations. GAO/T-RCED-92-75. July 2, 1992. reservations. GAO discusses (1) the ownership of Indian land; (2) the Bureau of Indian Affairs’ (BIA) work load in maintaining ownership records; and (3) the effect of the Indian Land Consolidation Act on multiple ownership of land tracts by small ownership interests, known as fractionation.
GAO also discusses how it used BIA’s computerized land records database to develop the information found in GAO’s report. Tribal Operation of Mission Valley Power. GAO/RCED-92-229R. June 30, 1992. BACKGROUND: Pursuant to a congressional request, GAO commented on two contracts between residents of the Flathead Reservation in Montana and the Bureau of Indian Affairs for the operation and management of the Mission Valley Power utility. GAO noted that: (1) to meet the contract requirements, the utility provided special personnel, made improvements in the system, gathered quantitative data on electrical power consumption, and developed a long-range plan for electrical consumption; (2) the utility took many steps to comply with federal environmental and safety standards; and (3) nine modifications were made to the 1988 contract and one modification was made to the 1991 contract. Financial Management: BIA Has Made Limited Progress in Reconciling Trust Accounts and Developing a Strategic Plan. GAO/AFMD-92-38. June 18, 1992. royalty income has been called into question. Although BIA recognizes the seriousness of the situation, little progress has been made in resolving the problems. GAO recommends that BIA develop a comprehensive strategic plan that will address interfaces between other systems and operations affecting trust fund accounting, such as the land records and reporting by the Minerals Management Service. GAO summarized this report in testimony before Congress; see: Financial Management: Problems Affecting BIA Trust Fund Financial Management, by Jeffrey C. Steinhoff, Director of Civil Audits, before the Senate Select Committee on Indian Affairs. GAO/T-AFMD-92-12, July 2, 1992 (11 pages). Bureau of Indian Affairs: Long-Standing Internal Control Weaknesses Warrant Congressional Attention. GAO/RCED-92-118. May 8, 1992. ABSTRACT: Through its social services program, the Bureau of Indian Affairs (BIA) offers assistance to individual Indians and tribes. GAO found that two of these services, involving payments for welfare and the burial of indigents, are plagued by unjustified, improper, and inconsistent payments and are ripe for fraud and waste. These problems stem from weak internal controls—some as basic as inadequate supervision, failure to separate employee duties, and poor computer security. Similar problems have been repeatedly identified in BIA’s social services program for more than a decade. The long-standing nature of internal control weaknesses and ineffective BIA efforts in the past to correct them indicate that an overall management commitment at all levels will be needed if an effective system of controls is to be established. Recent congressional initiatives to address persistent accounting and internal control weaknesses in BIA’s management of Indian trust funds and the Office of Audit and Evaluation will need management support at all levels if these initiatives are to succeed. To ensure full management support, increased congressional oversight may be warranted. Financial Management: BIA Has Made Limited Progress in Reconciling Indian Trust Fund Accounts and Developing a Strategic Plan. GAO/T-AFMD-92-6. April 2, 1992. reconciliation project, which began last summer, seeks to identify correct account balances for Indian accounts using source documents to reconstruct trust account transactions so that account holders are provided as accurate an accounting as possible. 
Because many accounts are between 50 and 100 years old, however, the lack of supporting documentation presents a major obstacle. This testimony examines BIA’s progress in reconciling the Indian trust fund accounts and developing a strategic plan for trust fund financial management improvement. Welfare To Work: Effectiveness of Tribal JOBS Programs Unknown. GAO/HRD-92-67BR. March 19, 1992. ABSTRACT: GAO could not assess the effectiveness of Job Opportunities and Basic Skills Training (JOBS) programs run by Indian tribes and Alaska Native groups or determine outcomes resulting from these programs because evaluation criteria, including well-defined program objectives, were lacking or insufficient and reliable program data were unavailable. The economic environment in which many Indian tribes and Alaska Native organizations must operate may hinder the success of their Tribal JOBS programs. These programs help participants prepare for and obtain employment at a time when few jobs are available and unemployment on many reservations is high. In addition to poor economic conditions, tribal organizations mentioned several implementation problems, including a lack of transportation and child care for program participants. Indian Programs: BIA and Indian Tribes Are Taking Action to Address Dam Safety Concerns. GAO/RCED-92-50. February 11, 1992. assessment is made. An effective record-keeping and reporting system to help monitor the situation at priority dams would help BIA assess progress. Land Exchange: Phoenix and Collier Reach Agreement on Indian School Property. GAO/GGD-92-42. February 10, 1992. ABSTRACT: Legislation passed in 1988 authorized the Interior Department to swap its former Indian School property in downtown Phoenix for more than 100,000 acres of land near the Florida Everglades owned by the Collier family, along with $34.9 million in cash to set up two Indian trust funds. While most of the exchange conditions set by the law have been met, the City of Phoenix placed limitations on the uses of the Indian School land, and the Barron Collier Co. had the right to match the highest bid. As a result, no competing bids for the property were received, and Congress’ intent to test the value of the land by exposing the school site to meaningful competitive bidding was not met. For several reasons, GAO cannot conclude that the Florida land, along with the $34.9 million, equals the value of the Colliers’ portion of the Indian School property. For instance, the Florida land, which was possibly overvalued in 1988, has not been reevaluated since then, and its value could have fallen during the recession. GAO does not question the right of the City of Phoenix to decide how privately owned property should be used. Yet the city’s action in this case raises questions about whether a locality should have the authority to use zoning as a way of acquiring land in federal disposition programs without compensation to the federal government. Conflict arose during the Phoenix exchange because of efforts by the various entities to meet the intent of the exchange. Such natural conflict raises the issue of how future exchanges can be designed to accommodate the demands of several parties and still meet a market demand test. Indian Programs: Profile of Land Ownership at 12 Reservations. GAO/RCED-92-96BR. February 10, 1992.
land administered by the Department of the Interior, (2) Bureau of Indian Affairs’ workload in maintaining ownership records, and (3) act’s impact on the degree of fractionated ownership. BIA Reconciliation Monitoring. GAO/AFMD-92-36R. January 13, 1992. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Bureau of Indian Affairs’ (BIA) efforts to improve its detection and handling of Indian trust fund losses. GAO found that: (1) BIA is liable for investing trust funds above the insured limit of $100,000; (2) the National Credit Union Administration will not cover losses in excess of the $100,000 insurance ceiling; (3) BIA has incurred losses on investments at non-accredited, uninsured credit unions as a result of fraud and criminal activity; (4) the Federal Deposit Insurance Corporation (FDIC) will not cover $121,500 in losses at FDIC-insured institutions; (5) BIA will request approximately $4 million in appropriations to cover credit union and bank losses in its next budget submission; and (6) BIA policies regarding notification and reimbursement to Indian account holders for losses due to BIA errors need to be strengthened because the policies do not address the need for loss prevention and detection systems, adequately instruct staff on how to resolve losses, address documentation requirements, or define whether losses should include interest that was earned but not credited to the appropriate account. Land Exchange: Phoenix Indian School Development Plan Adversely Affects Property Value. GAO/GGD-91-111. July 25, 1991. BACKGROUND: Pursuant to a legislative requirement, GAO analyzed the development proposals and rezoning process for the Department of the Interior’s Phoenix Indian School site in Arizona, focusing on: (1) alternative development plans considered; (2) the plan’s effect on the potential value of the property; and (3) how the plan affects the government’s interests. maximize the amount of city parkland; (4) the City Council’s plan adversely affected the Indian School property’s potential value, since it allows relatively less commercial space than had been granted in past zoning decisions; (5) the government could have realized more than the $80 million minimum price had Phoenix allowed as much commercial development as deemed reasonable by Interior’s contract appraiser and the GAO consultant; and (6) GAO did not estimate the property’s value due to the specific plan’s potentially costly requirements for reducing traffic impacts and improving open space. Indian Issues: Compensation Claims Analyses Overstate Economic Losses. GAO/RCED-91-77. May 21, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the economic analyses supporting the Garrison Unit Joint Tribal Advisory Committee’s (JTAC) recommendation that Indian tribes at the Fort Berthold Reservation and Standing Rock Reservation receive additional financial compensation for land the federal government acquired in 1949 and 1958 for a water resources project, focusing on: (1) the adequacy of the analyses conducted by tribal consultants; and (2) alternative methods of establishing a basis for financial compensation.
FINDINGS: GAO found that: (1) the consultants overestimated the tribes’ economic losses, since they made overly optimistic assumptions about the tribes’ economic condition prior to the loss of their land; (2) neither consultant reduced the estimate of additional compensation by the total amount that Congress previously appropriated for the acquired lands; and (3) an alternative approach for considering additional compensation would be to consider the difference between the amount of compensation the tribes believed was warranted at the time the land was taken and the compensation appropriated by Congress. Bureau of Indian Affairs’ Efforts to Reconcile, Audit, and Manage the Indian Trust Funds. GAO/T-AFMD-91-6. May 20, 1991. tribal and approximately 283,000 individual Indian money accounts; (5) BIA made progress in starting the project, but it needed to ensure effective management, accounting, and reporting; (6) BIA still had not finalized its phase I reconciliation management plan; (7) BIA will have to reconstruct old accounts before it can determine an accurate balance; (8) despite the significant potential for incomplete records and the resulting problems due to the outdated accounts, BIA believed that reconciliation work will adequately disclose overpayments and inconsistent investments that resulted in lost interest; and (9) BIA lacked an adequate long-term strategy for keeping the accounts balanced. Indian Programs: Lack of Internal Control at Two Special Law Enforcement Units. GAO/RCED-91-111. May 15, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed two Bureau of Indian Affairs (BIA) law enforcement operations, focusing on their management of: (1) a confidential fund BIA used to pay informants; (2) overtime pay; (3) travel advances; and (4) sensitive equipment. FINDINGS: GAO found that BIA: (1) did not comply with federal requirements regarding controls over appropriated funds and did not follow numerous management procedures; (2) improperly transferred funds to private bank accounts and did not return unobligated funds to the Department of the Treasury at the end of each fiscal year, as required; (3) did not adequately account for and control fund disbursements; (4) did not comply with federal regulations requiring periodic reviews of administratively uncontrollable overtime (AUO) it paid to units and employees; (5) issued excessive travel advances to unit investigators and did not adjust or liquidate the advances, as regulations required; and (6) did not properly control sensitive equipment, such as weapons and surveillance equipment. Indian Issues: GAO’s Assessment of Economic Analyses of Fort Berthold and Standing Rock Reservations’ Compensation Claims. GAO/T-RCED-91-30. April 12, 1991. $411.8 million and $181.2 million to $342.9 million for Standing Rock; (2) the tribes’ economic losses were overstated, because economic consultants made optimistic assumptions regarding the tribes’ economic situation prior to when their land was taken; and (3) the consultants did not reduce their estimates for additional compensation by the total amount that Congress previously appropriated for the land taken. 
GAO also noted that: (1) the three Fort Berthold tribes estimated that their land was worth $9.4 million more than the amount Congress appropriated, and the Standing Rock tribe estimated that its land was worth $14.2 million more than the amount Congress appropriated; and (2) GAO’s calculated dollar range was $51.8 million to $149.2 million for the three Fort Berthold tribes and $64.5 million to $170 million for the Standing Rock tribe. Bureau of Indian Affairs’ Efforts to Reconcile and Audit the Indian Trust Funds. GAO/T-AFMD-91-2. April 11, 1991. BACKGROUND: GAO discussed the Bureau of Indian Affairs’ (BIA) efforts to reconcile and audit the Indian trust funds. GAO noted that: (1) numerous audit reports have pointed out serious accounting and financial management problems and weak internal controls throughout BIA; (2) the lack of general ledger control over accounts, inaccurate data, the lack of accounting systems documentation, and inadequate management of the Indian trust funds caused numerous accounting errors; (3) the first phase of the BIA trust fund reconciliation and audit project would identify the correct account balances for over 500 tribal accounts and 17,000 individual Indian money trust accounts; and (4) BIA planned to use the first-phase results to develop plans for moving into a second phase that would cover the remaining 1,500 tribal and 283,000 individual Indian money accounts. GAO believes that: (1) legislation may be needed to provide appropriations for monies owed to account holders or relief for unrecoverable overpayments that go back many years; (2) BIA planned to implement the Department of the Interior’s six-part plan to help it control fund accounting transactions, reconcile all account balances, and implement a new Interior-wide accounting system; and (3) BIA must ensure that it carries out its financial responsibilities efficiently and effectively by developing a comprehensive financial management plan for both its appropriated funds and trust fund operations. Indian Programs: Use of Forest Development Funds Should Be Based on Current Priorities. GAO/RCED-91-53. March 7, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Bureau of Indian Affairs’ (BIA) forestry program, focusing on BIA’s: (1) achievement of its timber harvest goals on commercial Indian timberland; (2) accomplishment of needed forest development; (3) controls over funds disbursement; (4) forestry program staffing since 1977; and (5) efforts to attract Indian foresters.
FINDINGS: GAO found that: (1) tribes actively participated in developing multi-year forest management plans, and in planning and approving individual timber sales; (2) BIA experienced problems in keeping forest management plans current due to funding and staffing shortfalls and inability to obtain timely tribal involvement in developing plan components; (3) such factors as market conditions and compliance with relevant federal laws affected the achievement of timber harvest goals; (4) the 1977 backlog of forest development needs was incomplete and imprecise, and failed to include over 300,000 additional acres of needed timber stand improvement; (5) while BIA data indicated that needed forest development had been completed for about one-half of the backlog acreage, data on individual reservation accomplishments were uncertain; (6) dedicated funding failed to address changing development needs because it was still targeted at reducing the 1977 backlog; (7) BIA improved its controls over forest management deduction funds; (8) BIA forestry staff increased significantly since 1978; and (9) BIA had several headquarters and field-level programs to encourage Indians to study and train for the forestry profession. Indian Programs: Navajo-Hopi Resettlement Program. GAO/RCED-91-105BR. March 6, 1991. BACKGROUND: Pursuant to a congressional request, GAO reviewed the Navajo-Hopi Resettlement Program, focusing on: (1) program status; (2) problems faced by relocatees; and (3) Navajos resisting relocation. maintenance or repair problems; (5) less than half of the relocated families moved to off-reservation sites; (6) some of the families who moved to off-reservation sites experienced financial and adjustment problems; (7) 28 percent of the relocated families sold their off-reservation replacement homes primarily because they preferred life on the reservation; (8) to address problems encountered by families who had moved off the reservation, the Office of Navajo and Hopi Indian Relocation issued two program requirements to help families relocate successfully; and (9) the Office continued to work with the Navajo and Hopi tribes to avoid having to forcibly relocate Navajos resisting relocation. Indian Health Service: Funding Based on Historical Patterns, Not Need. GAO/HRD-91-5. February 21, 1991. BACKGROUND: Pursuant to a congressional request, GAO obtained information on: (1) Indian Health Service (IHS) funding distribution methods and the funds allocated for fiscal year (FY) 1980 through FY 1990; (2) per-capita funding for the Oklahoma IHS area; and (3) the effect of IHS funding constraints on health services delivery in Oklahoma, with special attention to the Contract Health Services (CHS) program. FINDINGS: GAO found that: (1) IHS distributed its funding among its 12 service delivery areas based primarily on previous-year funding and rarely used needs-based methods; (2) total IHS funding increased from approximately $517 million in FY 1980 to about $1 billion in FY 1990, and Oklahoma’s funding increased from $59.9 million to $131 million during that period; (3) increased needs-based funding for Oklahoma failed to increase its overall funding share; (4) per-capita funding for Oklahoma Indians was relatively low due to limited needs-based funding and the growing number of eligible Indians in the area; and (5) IHS service delivery was strained in Oklahoma due to substantial increases in demand for outpatient services and rationing of the CHS program. 
Indian Programs: Tribal Influence in Formulating Budget Priorities Is Limited. GAO/RCED-91-20. February 7, 1991. contracted with BIA to carry out programs; and (4) concerns tribes had regarding the IPS process. FINDINGS: GAO found that: (1) the BIA budget has averaged about $1 billion annually over the past 10 years, with the operation of Indian programs budget component averaging about $850 million a year and the IPS process averaging about $275 million annually; (2) BIA changed various IPS programs based on administrative decisions or legislative directives without notifying area offices or tribes; (3) BIA could not explain why its current guidance provided tribes with a lesser role than earlier guidance in setting IPS budget priorities and funding levels; (4) tribal involvement in the IPS process varied depending on the tribes’ relationship with BIA, changes in tribal leadership, and political situations at the tribes’ reservations; (5) although tribes exercised some control over budget formulation for contracted programs, they characterized their overall IPS involvement as inconsequential; (6) tribes were particularly concerned about the lack of adequate federal funding for their needs; and (7) BIA and tribal officials often cited federal trust responsibilities as a factor limiting tribal involvement in the IPS process.
GAO published a list and abstracts of its natural resources-related reports and testimony issued from January 1991 through December 1995, regarding such issues as: (1) land and natural resources management; (2) the National Park System; (3) national forests; (4) concessioners; (5) recreation activities; (6) financial management; (7) endangered species and wildlife conservation; (8) wetlands conservation; (9) fisheries; (10) wilderness areas; (11) grazing fees and rangeland management; (12) mine reclamation; (13) oil and mineral production and transport; (14) water resources, quality, and pollution control; (15) timberlands and timber sales; and (16) Native Americans and tribal lands.
Vomitoxin, a toxin associated with a fungal disease called scab, only occurs when scab is present. Since 1993, scab and vomitoxin have affected wheat and barley crops in the Northern Great Plains, which includes North Dakota; Minnesota; South Dakota; and Manitoba, Canada. Crops in the Red River Valley region (the eastern part of North Dakota, the western part of Minnesota, and a corner of northeast South Dakota) have been the most severely affected. The mold that produces vomitoxin grows primarily on grains, particularly on wheat and barley, and can cause vomiting in farm animals that ingest vomitoxin-contaminated feed grains. The Food and Drug Administration (FDA), which is responsible for ensuring food safety in certain foods—including grains—has not issued any guidance on vomitoxin in barley or barley products. However, it has issued advisory levels for vomitoxin in wheat and wheat products and feed grains for animals. The scab and vomitoxin epidemic has added to the financial stress of farmers in North Dakota and the rest of the Northern Great Plains. North Dakota suffered a drought in 1988 and floods in 1993 and 1997. The U.S. Department of Agriculture’s (USDA) Farm Service Agency (FSA) estimates that in the barley-producing regions of North Dakota most affected by scab and vomitoxin, 768 (or about 14 percent) of the farmers stopped farming between 1996 and 1998. Although this figure includes farms that failed because of flood, drought, and other reasons, FSA officials stated that scab and vomitoxin were the primary reasons for leaving farming. Barley is economically important to North Dakota agriculture. Traditionally, it is second only to wheat in acreage planted and total crop income. For example, in 1992, the last year before the scab and vomitoxin epidemic, North Dakota’s farm income from all crops totaled $2.2 billion, of which $1.2 billion (about 54 percent) was from wheat and $237 million (about 11 percent) was from barley. Furthermore, for the last 50 years, North Dakota has been the leading barley producer in the United States; in 1997, it accounted for 27 percent of the nation’s total barley production. Most farmers sell their barley to grain dealers, who then resell it to maltsters and brewers. To determine the price they will offer farmers for their barley, including the need for discounts, grain dealers have the barley tested for vomitoxin. Most testers of vomitoxin in North Dakota use a test kit called Veratox because it is relatively quick, inexpensive, and practical for commercial use. The high pressure liquid chromatography (HPLC) and gas chromatography (GC) tests, which are used by researchers for purposes such as advancing research on vomitoxin, are also used by maltsters and USDA’s Grain Inspection, Packers and Stockyards Administration (GIPSA) to check Veratox test results. These two reference methods are generally not used by commercial testing facilities and grain dealers because they are more costly, time-consuming, and complex to operate. GIPSA is the USDA agency that oversees federal grain inspections and has several key associated responsibilities. It authorizes certain commercial testing facilities to perform tests following its official procedures and standards. It also approves various testing methods, such as the Veratox kit, for use by these authorized facilities. Approved test methods, for which GIPSA provides training, must meet the agency’s performance criteria. GIPSA also monitors the consistency of test results across its authorized facilities. 
For example, GIPSA conducts quarterly reviews of the test results from its authorized testing facilities. For these reviews, GIPSA uses the HPLC test method as a reference for, or check on, test results from these facilities. GIPSA considers the scab and vomitoxin epidemic to be a serious problem and has taken actions to address vomitoxin testing issues, such as conducting a study in 1998 to assess the extent to which sampling methods can affect vomitoxin test results. However, GIPSA oversees only a portion of commercial grain testing nationwide. Commercial testing facilities unaffiliated with GIPSA and large grain elevators where in-house testing with the Veratox kit is cost-effective also perform vomitoxin testing. The North Dakota Barley Council estimates that 40 percent of commercial vomitoxin testing in North Dakota occurs at GIPSA's authorized facilities; the remaining 60 percent occurs at either the unaffiliated testing facilities or large grain elevators. GIPSA has no oversight responsibility for vomitoxin tests performed by these other entities. Currently, GIPSA has four authorized agents in North Dakota that operate six commercial testing facilities. In addition, North Dakota has about nine commercial testing facilities that are not affiliated with GIPSA and between 12 and 20 grain elevators that test for vomitoxin. From 1993 through 1997, we estimate that North Dakota barley farmers suffered cumulative revenue losses from scab and vomitoxin of about $200 million (in 1997 dollars)—equal to almost 17 percent of the $1.2 billion in total barley revenues farmers received during this period. The losses from these diseases varied significantly, both over the years and across the regions of the state, with the Red River Valley suffering the greatest losses. However, crop insurance payments for scab- and vomitoxin-damaged barley covered only a very small portion, less than 2 percent, of these cumulative losses. U.S. maltsters and brewers, the traditional buyers of North Dakota's malting barley, have reacted to scab and vomitoxin by expanding their imports of malting barley from Canada by about 380 percent. From 1993 through 1997, we estimate that North Dakota farmers lost about $200 million (in 1997 dollars) in revenues as a result of both production declines and price discounts. These losses were equal to almost 17 percent of the $1.2 billion in total revenues barley farmers received during these years. About 70 percent of these losses, or $139 million, were from reduced barley yields (in bushels per acre) and from farmers' leaving more barley unharvested. For example, between 1992 and 1997, average North Dakota barley yields dropped from a pre-disease level of 65 bushels an acre to 45 bushels an acre. Also, as shown in figure 1, from 1993 through 1997 (the years of the epidemic), the number of acres planted with barley fell from 2.9 million to 2.4 million and the number of harvested acres of barley fell from 2.4 million to 2.25 million. Differences between the number of acres planted and the number actually harvested were largest in 1993 and 1996. For instance, in 1993, North Dakota farmers harvested about 500,000 fewer barley acres than they had planted. Price discounts for barley contaminated with vomitoxin also played a key role in reducing farmers' revenues. From 1993 through 1997, price discounts because of vomitoxin accounted for about 30 percent, or $61 million, of total revenue losses. The relationship between vomitoxin and price discounts is complex.
Discounting in the marketplace stems from the U.S. brewing industry's desire to use little or no vomitoxin-contaminated barley. In general, U.S. brewers send price signals that reflect their specific quality and quantity requirements to merchandisers and maltsters. These price signals are subsequently incorporated into price discount schedules that reflect buyers' reluctance to purchase barley with vomitoxin unless they receive a highly discounted price. Grain elevators use these schedules, in conjunction with other quality premium or discount factors, to determine an overall price quote to farmers. Price discount schedules for barley vomitoxin can change over time, sometimes on a daily basis, depending on market conditions. Price signals for malting barley come largely from the four large firms that dominate the U.S. brewing industry. One of these firms represents nearly half of the market. A limited number of buyers in a given industry, such as the brewing industry, can influence the market price for a given commodity. According to industry experts, although vomitoxin can cause excessive foaming during the malting process and in finished beer products, brewers require discounts for malting barley primarily because they are concerned about the potential for a negative public perception of beer containing vomitoxin. The industry is concerned that consumers will switch brands or purchase other alcoholic beverages if it is reported that beer contains vomitoxin. As a result, brewers are willing to pay top prices for vomitoxin-free barley, but only highly discounted prices for barley contaminated with vomitoxin. Table 1 shows an example of a price schedule for barley, incorporating discounts for different levels of vomitoxin. Although discounting strategies vary, grain dealers generally begin discounting the price of vomitoxin-contaminated barley at 0.6 ppm. This first discount, usually the largest of several, ranges from about 40 cents to 60 cents a bushel. As shown in table 1, this first discount would result in a price of about $2 per bushel. Grain dealers apply subsequent discounts of about 5 cents to 15 cents for concentrations of vomitoxin that range from 1.1 ppm to 3.0 ppm. At vomitoxin concentrations above 3.0 ppm, dealers generally purchase barley as feed grain, which receives the lowest price, about $1.75 per bushel. The American Malting Barley Council reported that for 1997 only 9 percent of all midwestern malting barley had a vomitoxin level that fell into the premium price category of 0.5 ppm or less. Along with steep price discounts, vomitoxin has had the effect of shifting the amount of malting versus feed grain barley produced in North Dakota. In the years before the scab and vomitoxin epidemic, the largest part of the state's barley production, and hence barley revenues, came from premium-priced malting barley—60 to 70 percent of all North Dakota barley sales. However, since the scab and vomitoxin epidemic, this trend has changed. Specifically, in several years during 1993 through 1997, many regions of North Dakota sold over 50 percent of the barley produced to the lower-valued feed grain market.
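The stepped schedule that table 1 illustrates can be captured in a few lines of code. The sketch below is illustrative only: the breakpoints and prices are round numbers patterned on the figures quoted above (a premium price at 0.5 ppm or less, a large first discount at 0.6 ppm, smaller steps through 3.0 ppm, and a feed grain price of about $1.75 above that), and the function name is hypothetical; actual schedules vary by dealer and can change daily.

```python
# Illustrative vomitoxin discount schedule patterned on table 1.
# Breakpoints and prices are examples, not quoted market values.
DISCOUNT_SCHEDULE = [
    (0.5, 2.55),  # premium malting price: no discount
    (1.0, 2.05),  # first, and largest, discount (roughly 50 cents)
    (1.5, 1.95),  # smaller step discounts of 5 to 15 cents follow
    (2.0, 1.90),
    (2.5, 1.85),
    (3.0, 1.80),
]
FEED_GRAIN_PRICE = 1.75  # above 3.0 ppm, barley is bought as feed grain


def barley_price_per_bushel(vomitoxin_ppm: float) -> float:
    """Return an illustrative price quote, in dollars per bushel."""
    for upper_limit, price in DISCOUNT_SCHEDULE:
        if vomitoxin_ppm <= upper_limit:
            return price
    return FEED_GRAIN_PRICE


for ppm in (0.4, 0.8, 1.2, 3.5):
    print(f"{ppm:4.1f} ppm -> ${barley_price_per_bushel(ppm):.2f} per bushel")
```

Under a schedule like this one, two readings of 0.4 ppm and 1.1 ppm on the same load would produce quotes of $2.55 and $1.95 per bushel, which is why the test variability discussed later in this report matters most at low concentrations.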
While scab and vomitoxin have reduced North Dakota barley farmers' revenues, the amount of loss has varied by region and year. As seen in table 2, the most severely affected area in North Dakota in terms of total revenue losses has been in the upper Red River Valley—the East Central and Northeast regions of the state—while the Southeast region has been least affected. Barley farmers suffered their greatest losses overall from vomitoxin in 1993 and in 1997, with losses of $62 million and $68 million, respectively. However, as the table shows, some regions—because they were less affected by vomitoxin and thus had more premium quality malting barley to sell—had a small increase in revenues in certain years. Because the scab and vomitoxin outbreak has reduced the supply of high-quality malting barley in the Northern Great Plains, the traditional purchasers of North Dakota malting barley—U.S. brewers and maltsters—have increasingly turned to Canadian and other U.S. sources. Figure 2 illustrates the increase in Canadian barley production and the increase in exports of malting barley to the United States. During the years of the scab and vomitoxin epidemic, average annual Canadian exports of malting barley to the United States increased by about 380 percent. From 1993 through 1997, average annual barley exports from Canada reached 705,000 metric tons, compared to 147,125 metric tons from the pre-epidemic years of 1985 through 1992. In addition, to meet the increased U.S. demand for premium quality malting barley, Canadian production of malting barley grew from 1 million metric tons in 1993 to 2.2 million metric tons in 1997. Agriculture Canada reported in 1997 that the United States has been Canada's largest market for malting barley over the past 4 years because of the shortage of quality U.S. malting barley. And, in 1997, malting barley imports from Canada represented over 25 percent of all malting barley consumed by the U.S. brewing industry. In comparison, from 1988 through 1992, malting barley imports from Canada represented about 5 percent of all malting barley consumed by the industry. Although scab and vomitoxin have decreased North Dakota barley farmers' revenues, Canadian imports have somewhat moderated the blight's impact on U.S. brewers and maltsters. Shortages of malting barley in the United States as a result of these diseases would normally tend to increase U.S. malting premiums and prices, but these increases have been tempered by the large imports of Canadian malting barley. That is, even with a smaller domestic supply of malting quality barley, larger Canadian imports produce competitive pressures to keep prices below the levels they would be if imports were not part of the U.S. malting barley market. According to testing experts, while the Veratox test kit serves the market's need for a relatively fast and cost-effective method for measuring vomitoxin in barley, it can produce test results that vary, particularly at concentrations critical to pricing decisions. Testing experts state that this variability can be reduced to some extent through quality assurance measures and training. Testing experts believe the HPLC and GC tests produce more accurate and consistent results, in part because they are conducted under controlled laboratory conditions. However, because of their complexity and cost, these tests are not practical for commercial use. Testing experts state that all tests for vomitoxin, including Veratox, experience variability in test results, particularly at the upper and lower limits of the test's ability to measure vomitoxin. This variability at the Veratox kit's lower limits of measuring vomitoxin can affect whether barley farmers receive a price discount. According to the manufacturer of the Veratox kit, the kit's lower limit of measurement is 0.5 ppm.
At this concentration, Veratox test results can range from 0 ppm (where barley receives no discount) to 1.1 ppm (where barley would incur a substantial price discount). The market, therefore, is making crucial pricing decisions at concentration levels where the Veratox kit has substantial variability. Our analysis of selected test data supports expert opinion regarding Veratox's variability. To conduct this analysis, we compared 1,068 Veratox test results between (1) Neogen (the manufacturer of Veratox) and a North Dakota commercial grain testing facility and (2) these two facilities and GIPSA's in-house HPLC reference method. We found that Veratox test results on the same samples of barley varied between Neogen and the commercial grain testing facility. Specifically, assuming that the HPLC test results represent the true concentration of vomitoxin, we found that at concentrations between 0.7 and 4 ppm, Neogen's estimate of vomitoxin levels was, on average, higher than the testing facility's. Consequently, farmers could have received different prices from each testing location, had the test results been the basis for a commercial sale. For example, we found instances in which, at an HPLC vomitoxin concentration of 1 ppm, the manufacturer's test results measured 1.1 ppm vomitoxin or greater, while the testing facility's measured 0.4 ppm. Had these results been the basis for a sale, a producer would have received $1.85 per bushel or less from the manufacturer but $2.55 per bushel from the testing facility. (See table 1 for an example of a price schedule for barley.) Testing experts said that test methods have two types of variability—inherent and systemic. Inherent variability exists in all vomitoxin test methods and increases at the higher and lower limits of a given test method's ability to measure vomitoxin levels. Experts state that this variability cannot be controlled, which is why it is called inherent. The inherent variability of the Veratox test may affect barley buyers and farmers differently. According to GIPSA officials, grain elevators, which purchase barley from farmers and sell it to maltsters and brewers, may be less affected because they handle larger volumes of barley, with a correspondingly greater number of test results. Thus, the prices based on test results that were too high or too low—because of the inherent variability associated with the test kit—could counterbalance each other. As a result, grain elevators may be less affected by variable test results than barley farmers, who receive prices based on fewer test results. Because farmers may be more affected, some testing experts believe that if price discounts were started at 1 ppm, rather than at 0.5 ppm (the lower testing limit of the kit), farmers could receive more equitable prices. Some cereal scientists told us that no appreciable increases in beer production problems occur when brewing with barley having vomitoxin concentrations of 1 ppm versus 0.5 ppm. However, U.S. brewers and maltsters we talked to had varying opinions on whether beer production problems would increase at concentrations of 1 ppm. Systemic variability, which refers to differences in how testers obtain and process grain samples and conduct tests, can also affect test results—for Veratox as well as for other testing methods. For example, a Veratox test involves many actions—selecting and processing the grain sample, extracting the vomitoxin from the grain, and measuring the vomitoxin.
Furthermore, the test equipment must be maintained and cleaned in order to achieve optimal results. Experts said that, because the potential exists for mistakes at each stage of the process, the accuracy of the kit’s results is affected by the skill of the technician using it. For all testing methods, a number of actions—including training and quality assurance efforts—can be used to reduce systemic variability. First, test results from grain samples known to have vomitoxin can be compared across various testing facilities. This method, often referred to as a “check-sample program,” helps ensure that testing facilities will achieve consistent test results. GIPSA’s offices and its authorized testing facilities use this approach. Specifically, GIPSA sends samples of barley or wheat with known concentrations of vomitoxin to its authorized facilities for testing. GIPSA then compares the test results to determine if all the facilities are measuring about the same amount of vomitoxin. GIPSA officials believe that their check-sample program helps keep vomitoxin test results consistent among its testing facilities. Second, testing experts stress the importance of using “quality assurance (QA) pools” to reduce systemic variability. QA pools consist of samples of naturally contaminated barley that a testing facility has tested many times in order to identify the true amount of vomitoxin in the sample. Testing facilities that practice quality assurance using QA pools will run tests on a pool in conjunction with daily vomitoxin tests. If a test on the QA pool detects an amount of vomitoxin that differs significantly from the known amount of vomitoxin in the pool, technicians are alerted that the tests on other samples also may be incorrect. Finally, testing experts said that the training of the technicians who conduct the tests is critical for obtaining optimal test results. GIPSA, for instance, provides Veratox training to all personnel who work at GIPSA-authorized testing facilities in North Dakota. However, GIPSA does not oversee the training given to other commercial grain testing facilities. Neogen, the Veratox kit’s manufacturer, also provides training to new customers. According to testing experts, the HPLC and GC testing methods are widely accepted among analytical chemists for providing accurate and consistent results. For example, the Association of Official Analytical Chemists has approved a GC method and reviewed an HPLC method; and the American Society of Brewing Chemists has approved a GC method for industry use. In addition, the HPLC and GC methods are sometimes used to assess the performance of commercial test kits, including Veratox, because these chromatographic methods, according to testing experts, have less variability in their test results. For instance, GIPSA evaluates the performance of any new commercial test kit against its HPLC reference method before permitting its use by GIPSA employees and GIPSA-authorized testing facilities. Furthermore, GIPSA uses the HPLC method in its check-sample program. While these reference methods have less variability than Veratox, they are not practical for use at commercial testing facilities and grain elevators for several reasons, according to experts we spoke with. First, the procedures for preparing and testing the vomitoxin samples for these methods take several hours to complete. 
However, during the barley harvest, farmers typically deliver their barley to grain elevators by trucks that must unload and return to the fields for other loads. Because of the need for quick turnaround, the farmers, elevators, and truck drivers cannot wait several hours for a vomitoxin test to be conducted. In comparison, the Veratox test takes about 30 minutes to conduct. Second, the HPLC and GC methods require thousands of dollars in equipment investments. For example, HPLC and GC test equipment cost between $40,000 and $60,000 to purchase, while the Veratox test equipment costs about $3,200. In barley, scab and the vomitoxin resulting from scab can be reduced somewhat through the use of fungicides and certain farming practices, such as crop rotation and deeper tillage of the soil. However, costs and other factors limit the usefulness of these actions, and their impact is minimal when the infestation is severe. In addition, varieties of barley that are more resistant to scab and vomitoxin will not be commercially available for at least 6 years. According to cereal scientists, improved barley varieties combined with short-term actions may eventually help some farmers to better manage scab and vomitoxin infestations, thereby reducing farmers' financial losses. However, it is unlikely that vomitoxin will be completely eliminated in the foreseeable future. According to North Dakota extension agents and cereal scientists, a number of short-term actions can help farmers reduce scab, and thus vomitoxin concentrations, in barley. First, crop rotation—changing the type of crop planted each growing season—enriches the nutrients in the soil and decreases the incidence of crop disease. Although most farmers rotate crops routinely, the inclusion of more broadleaf crops in a rotation is likely to help decrease the levels of scab in the soil. Broadleaf crops, such as sunflowers, canola, and sugarbeets, are not as susceptible to scab as cereal grains, such as barley and wheat. However, even if rotation initially helps reduce scab levels, infestation could occur from airborne spores from other locations. Furthermore, other problems could discourage the use of crop rotations: (1) some broadleaf crops (such as sugarbeets) require costly equipment and costly contractual agreements and (2) many broadleaf crops cannot be grown in certain parts of North Dakota, thereby limiting the number of crops that can be included in rotations. For example, some farmers in north central North Dakota cannot easily grow beans because the climate is generally too cold and the growing season is too short. As a result, these farmers have shorter rotation cycles and are forced to more quickly return to crops (such as barley and wheat) that are highly susceptible to scab. Second, deep tilling to completely overturn the soil—which does not occur with conventional tilling—could reduce scab levels. Since scab stays through the winter in infected crop stubble, tilling deeper into the soil buries any infected residue and can help prevent scab from spreading to the next year's crop. However, deep-till practices result in less moisture in the soil, causing farmland to become more prone to wind and water erosion, and are therefore not practical for farmers in the drier portions of North Dakota (such as the western portion of the state). Deep tilling also requires farmers to purchase more expensive tilling equipment.
Furthermore, as with crop rotation, infestation can occur from airborne spores if even one scab-infected farm in an area does not use deep tilling. Thus, for optimal effectiveness, deep tilling has to be conducted across many farms. Third, applying fungicides can help reduce vomitoxin. However, fungicides are not always reliable because of weather conditions and the difficulties associated with applying them. North Dakota farmers primarily use two types of fungicides, protectant and systemic. Protectant fungicides (which cover the plant externally) have been used for a number of years and are easily washed off by rain and degraded by sunlight. Systemic fungicides, which are newer, get absorbed into the barley plant within 4 to 8 hours of application and are not affected by sunlight or water. However, the timing of the application of both systemic and protectant fungicides is critical. They must be applied immediately after the barley flower blossoms because a new flower can become infected with airborne scab spores within 3 to 4 days. Once the barley flower is infected with the scab fungus, the fungus has the potential to produce vomitoxin. In addition, a farmer can expect to spend between about $90,000 and $138,000 to spray a 3,000-acre barley crop with a fungicide. Thus, in deciding whether to use fungicides, farmers must compare the costs they will incur in applying them with the higher price they could receive if their barley is less contaminated with vomitoxin. North Dakota extension agents told us that using the deep-till and rotation farming practices with fungicides increases the overall effectiveness of these short-term actions in reducing scab and lowering vomitoxin levels. However, they also noted that if airborne scab spores are widespread and weather conditions are favorable to fungal growth, barley crops would still become contaminated. Thus, they believe that these short-term actions will be effective only in years of light infestation. North Dakota State University, the University of Minnesota, South Dakota State University, and Busch Agricultural Resource, Inc., began a cooperative breeding effort to develop more scab-resistant barley in 1994. The four institutions exchange and test potential new varieties of barley. They also share information about new barley varieties that show resistance to scab and vomitoxin. In March 1997, a U.S. Wheat and Barley Scab Initiative was formed by scientists, members of the wheat and barley industries, commodity groups, and others to call national attention to the scab problem and to set national priorities for scab research. In fiscal year 1998, the Congress appropriated $500,000 to USDA to fund the scab research plan established by the leaders of the initiative; in fiscal year 1999, an additional $3 million was appropriated for the effort. Several of the research areas focus on developing more resistant varieties and assessing the effectiveness of fungicides in combating scab. Although USDA’s Agricultural Research Service (ARS) is funding the initiative, scientists at state land grant universities, including North Dakota State University, will perform most of the research. According to barley breeders and farming experts, because of many scientific and commercial requirements, it takes about 8 to 10 years to breed, test, and release a new variety of barley. The breeding process includes several steps. First, a breeder must identify the genetic characteristics that could make the barley more resistant to vomitoxin. 
Second, these characteristics need to be combined and strengthened through successive new generations of barley varieties. Third, new varieties must be tested under multiple environmental conditions to ensure that they are truly resistant. During the breeding process, new varieties may sometimes appear to be resistant to scab when, in fact, they are not. For example, if a greenhouse containing a new variety being tested for resistance is kept cool and limited moisture is allowed to accumulate on the barley, little scab will grow. This may lead the breeder to believe that the variety is scab-resistant, while, in fact, the greenhouse environment suppressed scab growth. Fourth, after a breeder is confident that new varieties are truly resistant, they must be tested and screened for necessary malting and brewing qualities. For example, a new variety of barley must be uniform in size and have plump kernels (necessary for successful beer brewing) or maltsters and brewers will not be interested in buying it. According to scientists, while some more resistant barley varieties are currently undergoing commercial trials by maltsters and brewers, none contain all of the characteristics that the industry requires. Lastly, new barley varieties must be tested for commercial viability. Any new variety of barley that meets the malting and brewing industry's requirements would also have to be high-yielding in order for it to be commercially attractive to farmers. Scientists estimate that a commercially acceptable, more scab-resistant barley variety is at least 6 years away. Breeders expect that, over time, new, more resistant barley, combined with short-term actions, may help farmers to better manage scab and vomitoxin infestations and reduce their financial losses. However, these experts state that a more resistant barley variety will not completely eliminate the incidence of scab and vomitoxin, particularly during periods of moderate or severe infestation. We provided a draft of this report to USDA for its review and comment. We met with the Deputy Administrator, Grain Inspection, Packers and Stockyards Administration, and with other officials from that organization and USDA's Agricultural Research Service. The officials generally agreed with the information presented in the report and provided several technical changes and clarifications. We have incorporated these changes as appropriate. You asked us to (1) determine the financial impact of scab and vomitoxin on North Dakota barley farmers, (2) assess the performance of vomitoxin test methods, and (3) identify short- and long-term actions that could help reduce the impact of scab and vomitoxin on North Dakota barley farmers. To address the first question, we collected and developed historical data on North Dakota barley prices and production for 1959 through 1992—the period before the scab and vomitoxin epidemic—and on key weather factors affecting production for both that period and the blighted years. We used these data to estimate (1) what barley prices and production would have been in 1993 through 1997 in the absence of scab and vomitoxin and (2) what revenues would have been in the absence of scab and vomitoxin. We then compared this estimate of revenues with actual barley revenues to determine farmers' losses by year and by crop reporting district.
We also developed information on how prices are transmitted from the maltsters and brewers down to the farmers, and collected data on Canadian production and exports of malting barley to the United States during this time period. To conduct these tasks, we used data from the North Dakota State University, GIPSA, the North Dakota Department of Agriculture, USDA’s National Agricultural Statistics Service and its Economic Research Service, the North Dakota Barley Council, and Agriculture Canada. We also conducted interviews with officials from these organizations and with North Dakota grain dealers. (See app. I for a detailed description of our data sources, methodology and the results of our analysis.) To address the second question, we reviewed GIPSA, industry, and academic studies on the test methods; interviewed testing experts; and analyzed Veratox test data on vomitoxin from GIPSA’s 1998 Sampling Variability Study. Using data from the study, we assessed the performance of vomitoxin test results on the basis of the variability of test results between testing facilities. Testing experts we spoke with included officials at GIPSA, FDA, and major U.S. malting and brewing companies; academic researchers; and representatives of the Association of Official Analytical Chemists, the American Society of Brewing Chemists, the American Malting Barley Association, the North Dakota Barley Council, and the North Dakota Grain Dealers Association. (See app. II for a detailed description of our methodology and the results of our analysis.) To address the third question, we (1) obtained information on academic, public, and private research on actions to reduce the impact of scab and vomitoxin and on progress in developing more scab-resistant barley and (2) interviewed scientists at North Dakota State University and the University of Minnesota and officials at USDA’s Economic Research Service and Agricultural Research Service. Finally, we had a draft of this report reviewed for accuracy and objectivity by several economists and agricultural experts from academia. We did not independently verify the data obtained from our sources. Our work was conducted from April 1998 through February 1999 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will provide copies of this report to Chairman Richard Lugar and Ranking Minority Member Tom Harkin of the Senate Committee on Agriculture, Nutrition, and Forestry; Chairman Larry Combest and Ranking Minority Member Charles Stenholm of the House Committee on Agriculture; other interested congressional committees; and the Honorable Dan Glickman, the Secretary of Agriculture. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix III. This appendix explains the methods and data we used to estimate the revenue losses for North Dakota barley as a result of the scab and vomitoxin epidemic for 1993 through 1997. To develop this estimate, we first estimated what barley revenues would have been in the absence of the vomitoxin epidemic in North Dakota. This required estimating what production levels and prices would have been in each district in each year. 
In turn, estimating production levels required estimating both yields and the ratios of harvested-to-planted acres in the absence of the disease. We then compared estimated barley revenues without the disease to actual barley revenues received, which we calculated from price and production data, to obtain estimated losses. We then totaled all the crop reporting districts and all the years to obtain an estimate of total losses during this period. To estimate losses resulting from scab and vomitoxin, we first estimated what barley revenues would have been during this time period if the epidemic had not occurred, but all other relevant factors (such as weather) had been unchanged. We estimated both production levels and prices and multiplied them to obtain estimated revenues. As a first step in estimating production, we used a regression analysis to estimate barley yields from 1959 through 1992 (before the scab and vomitoxin epidemic) for region i in time period t as a function of weather events and a time trend:

(1) $y_{it} = \beta_{0i} + \beta_{1i}\,P_{it} + \beta_{2i}\,P_{it}^{2} + \beta_{3i}\,T_{it} + \beta_{4i}\,t + \varepsilon_{it}$

where

$y_{it}$ = harvested yield in region i in year t;
$P_{it}$ = total precipitation during the growing season minus its historical average, divided by the standard deviation of total rainfall, for region i and year t;
$P_{it}^{2}$ = the squared value of $P_{it}$, the precipitation deviation variable;
$T_{it}$ = average temperature during the growing season minus its historical average, divided by the standard deviation of average temperature, for region i and year t;
$t$ = a time trend variable, t = 1, ..., 34.

In this regression, we transformed both average growing season temperature and total rainfall to measures of deviations by subtracting their historical average levels from their actual levels and dividing by their standard deviations. As a result, these variables measure how close a particular year's average temperature or total rainfall is to its historical average. For example, values greater than +1 are associated with hot weather or wet months; values less than –1 are associated with dry or cool months; and values between +1 and –1 are near the average. We used these transformed weather variables in the regression rather than the actual values because they were more significantly related to yield and contained less multicollinearity. In addition, because there is an optimum level of precipitation, beyond which yields may decrease, we included a squared precipitation term in our equation. Other agricultural economists analyzing yield have also used squared precipitation terms. Finally, we inserted an annual time trend to represent yield changes because of changes in such things as technology, input use, or farm size. Table I.1 displays our estimates of the parameters of these regression equations for each CRD analyzed. Except for $P_{it}$ in CRDs 3 and 5, all independent variables were significant at the 0.05 level and above and displayed the expected signs.

Table I.1: Barley Yield Equation Parameter Estimates by Crop Reporting District (t-statistics in parentheses; the district column headings and some coefficient values could not be recovered from the source text, so the row alignment shown here is approximate)

Intercept: 24.43 (8.48); 21.28 (7.73); 27.20 (10.01); 26.43 (9.66)
Precipitation deviation, $P_{it}$: (2.72); 5.37 (4.25); 2.96 (2.33); 5.49 (3.56)
Squared precipitation, $P_{it}^{2}$: (–1.65); (–1.34); –2.27 (–2.14); –3.60 (–3.83)
Temperature deviation, $T_{it}$: –3.41 (–2.47); –4.56 (–3.44); –3.57 (–2.67); –2.87 (–2.07)
Time trend, $t$: 1.15 (9.00); 0.93 (7.02); 1.17 (9.09); 0.92 (7.01)

Note: The error structure was corrected for first-order autocorrelation where indicated in the original table.
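As a rough illustration of how equation (1) could be estimated, the sketch below fits an ordinary least squares version of the yield regression for a single crop reporting district. The CSV file and its column names are assumptions for illustration, and plain OLS is used; the first-order autocorrelation correction noted for some districts in table I.1 is not reproduced.

```python
# A minimal OLS sketch of the yield regression in equation (1) for one
# crop reporting district; input file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("crd_yields_1959_1992.csv")  # one district, 34 annual rows

# Standardized deviations from historical averages, so that values above +1
# indicate unusually wet or hot growing seasons.
df["P"] = (df["precip"] - df["precip"].mean()) / df["precip"].std()
df["T"] = (df["temp"] - df["temp"].mean()) / df["temp"].std()
df["P2"] = df["P"] ** 2
df["trend"] = np.arange(1, len(df) + 1)  # t = 1, ..., 34

X = sm.add_constant(df[["P", "P2", "T", "trend"]])
fit = sm.OLS(df["yield"], X).fit()
print(fit.params)   # coefficient estimates
print(fit.tvalues)  # t-statistics, the values shown in table I.1
```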
We also performed a Chow test to determine whether barley yields were homogeneous across CRDs and, thus, if we could pool all of our data into one regression equation. This hypothesis, however, was rejected at the 0.05 level, and we therefore used our yield estimates from the regressions of the separate CRDs in our analysis. In equation 2, we calculated yield in the absence of scab and vomitoxin as a weighted average of predicted yield from equation 1 and actual yield:

(2) $y^{n}_{it} = \alpha_{it}\,y^{f}_{it} + (1 - \alpha_{it})\,y^{s}_{it}$

In equation 2, $y^{n}_{it}$ denotes yield in the absence of scab and vomitoxin, $y^{f}_{it}$ the predicted yield from equation 1, and $y^{s}_{it}$ the actual yield in a scab-infected year. The fraction of yield shortfall attributable to scab and vomitoxin is denoted $\alpha_{it}$. If vomitoxin were the only factor accounting for a shortfall during the scab-infected years, then $\alpha_{it} = 1$ and $y^{n}_{it} = y^{f}_{it}$; that is, the yield that would have occurred in the absence of the disease equals the predicted yield from equation 1. In addition to estimating yield in the absence of scab and vomitoxin, we needed to calculate the ratio of harvested-to-planted acres to estimate barley production. During the years of the epidemic, many acres that were planted to barley actually went unharvested. Because the ratio of actual harvested-to-planted acreage during the scab-infected years might have differed from the predicted ratio for reasons other than scab and vomitoxin, we again used a weighted average of the predicted and actual ratios to estimate the ratio in the absence of the disease. We used past values of the ratio of harvested-to-planted acreage as the predicted values, but we used the same $\alpha_{it}$ values to measure the fraction of yield shortfall resulting from scab and vomitoxin. Specifically, in equation 3, we calculated the ratio of harvested-to-planted acres to account for acreage that was left abandoned because of scab and vomitoxin for each region and each time period as:

(3) $R^{n}_{it} = \alpha_{it}\,\bar{R}_{i} + (1 - \alpha_{it})\,\dfrac{ah_{it}}{ap_{it}}$

where

$ah_{it}$ = actual harvested acres in time period t in CRD i;
$ap_{it}$ = actual planted acres in time period t in CRD i;
$\bar{R}_{i}$ = the average of the ratio of harvested-to-planted acres, 1983-92;
$R^{n}_{it}$ = the ratio of harvested-to-planted acres in the absence of vomitoxin;
$\alpha_{it}$ = the same adjustment factor used to calculate yield without vomitoxin.

Finally, we combined our estimates of yield and the ratio of harvested-to-planted acreage in the absence of vomitoxin to estimate production in the absence of vomitoxin, $q^{n}_{it}$:

(4) $q^{n}_{it} = \max\left(y^{n}_{it},\, y^{s}_{it}\right) \times \max\left(R^{n}_{it},\, \dfrac{ah_{it}}{ap_{it}}\right) \times ap_{it}$

In order to estimate production in the absence of scab and vomitoxin without overestimating losses, we used the maximum of estimated yield in the absence of vomitoxin and actual yield, and the maximum of the calculated ratio of harvested-to-planted acres without vomitoxin and the actual ratio. For example, if the estimated yield falls below actual yield in a scab year, actual yield would be used instead of the estimated yield (without scab/vomitoxin) to estimate production. The product of the second term and acres planted, $ap_{it}$, equals harvested acres in a year without the presence of scab and vomitoxin.
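Equations (2) through (4) reduce to a few lines of arithmetic once $\alpha_{it}$ and the inputs are in hand. The sketch below combines them for a single district-year; the function name and all input values are hypothetical.

```python
# A sketch of equations (2)-(4): adjust yield and the harvested-to-planted
# ratio for the scab/vomitoxin shortfall, then estimate production in the
# disease's absence. All inputs are hypothetical.

def production_without_disease(alpha: float,
                               predicted_yield: float,  # y_f from eq. (1)
                               actual_yield: float,     # y_s in a scab year
                               harvested_acres: float,
                               planted_acres: float,
                               avg_ratio_1983_92: float) -> float:
    # Equation (2): yield without the disease as a weighted average.
    y_n = alpha * predicted_yield + (1.0 - alpha) * actual_yield
    # Equation (3): harvested-to-planted ratio without the disease.
    actual_ratio = harvested_acres / planted_acres
    r_n = alpha * avg_ratio_1983_92 + (1.0 - alpha) * actual_ratio
    # Equation (4): take maxima so that losses are not overestimated.
    return max(y_n, actual_yield) * max(r_n, actual_ratio) * planted_acres

# Example: alpha = 1 attributes the entire shortfall to the disease.
q_n = production_without_disease(alpha=1.0, predicted_yield=65.0,
                                 actual_yield=45.0,
                                 harvested_acres=450_000,
                                 planted_acres=500_000,
                                 avg_ratio_1983_92=0.95)
print(f"estimated production without the disease: {q_n:,.0f} bushels")
```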
As the next step in determining barley revenue in the absence of vomitoxin, we estimated both malting barley premiums and feed grain prices for 1993 through 1997 had there been no disease. To do this, we used regression analysis and historical data on price and production from 1959 through 1992 to estimate price equations for both malting barley premiums and feed grain prices. First, we explain malting premium price movements by total barley production, or its relationship to the larger national barley market. Since the proportion of malting barley in the entire crop was fairly stable in the years prior to the vomitoxin epidemic, increases in total barley production translate into increases in the quantities of malting barley. Moreover, while there are differences in premiums from region to region, prices are generally transmitted from the malting and brewing industries at a more aggregate market level. Therefore, in equation 5 we specify the historical association between malting premiums, $P^{prem}_{it}$, and total U.S. barley production, $Q^{T}_{t}$, for each CRD analyzed, i:

(5) $P^{prem}_{it} = \beta_{0i} + \beta_{1i}\,Q^{T}_{t} + \varepsilon_{it}$

Table I.3 shows the results of this analysis.

Table I.3: Malting Barley Premium Parameter Estimates by Crop Reporting District (t-statistics in parentheses; the district column headings could not be recovered from the source text)

Intercept: 0.88 (3.68); 1.42 (6.16); 1.07 (4.48); 2.05 (6.85); 1.07 (4.23)
Total U.S. barley production, $Q^{T}_{t}$: –0.0015 (–2.78); –0.0026 (–5.29); –0.0018 (–3.54); –0.0039 (–6.07); –0.0018 (–3.18)

As table I.3 shows, we found a negative and highly significant association between malting premiums and total barley production at the national level for all CRDs. We also tried other variations of this regression model, including ones using combinations of stocks as well as barley yields for independent variables. However, these variables did not perform as well as the total barley production variable. Because of the presence of positive serial correlation in all CRDs, we used the Yule-Walker regression technique to derive our estimates. In general, serial correlation causes standard errors to be biased downward, thus indicating that parameter estimates are more precise than they actually are. Therefore, correcting for this problem leads to more efficient parameter estimates. In the feed grain market, corn is the primary feed grain product, accounting for more than 80 percent of total feed grain consumption. Because barley feed grain prices, $P^{F}_{it}$, are driven primarily by corn prices, in equation 6 we specify the historical association between feed grain barley prices, the price of corn, $P^{C}_{t}$, and total U.S. barley production, $Q^{T}_{t}$, as:

(6) $P^{F}_{it} = \gamma_{0i} + \gamma_{1i}\,P^{C}_{t} + \gamma_{2i}\,Q^{T}_{t} + \varepsilon_{it}$

To correct for first-order serial correlation, as in the malting premium regression models, we used the Yule-Walker regression technique for the feed grain regressions. As table I.4 indicates, the total barley production variable displayed a negative sign and was significant at the 0.10 level and above in all CRDs except 6. In all CRDs, the price of corn was positively related to barley feed grain prices and highly statistically significant.

Table I.4: Feed Grain Barley Parameter Estimates by Crop Reporting District (t-statistics in parentheses; the district column headings and some coefficient values could not be recovered from the source text)

Intercept: (1.19); (1.48); (1.19); (1.13); (1.04)
Price of corn, $P^{C}_{t}$: 0.78 (17.75); 0.75 (18.18); 0.77 (19.81); 0.75 (17.42); 0.78 (17.49)
Total U.S. barley production, $Q^{T}_{t}$: –0.0009 (–2.07); –0.0008 (–2.10); –0.0007 (–2.00); (–1.39); –0.0007 (–1.76)

Substituting in actual values of barley production and corn prices for years 1993 through 1997, we used these regression parameters to predict what malting barley and feed grain barley prices would have been in the absence of the vomitoxin epidemic for these years. We assume that malting barley prices are the sum of estimated feed grain prices plus estimated malting premiums.
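To make the price step concrete, the sketch below plugs hypothetical values into equations (5) and (6). The premium coefficients are taken from one recovered column of table I.3, but the feed equation intercept and the units of the production variable could not be recovered and are invented here; the Yule-Walker estimation itself is not reproduced.

```python
# A sketch of the price predictions from equations (5) and (6); several
# values are hypothetical, as noted, and serve only to show the mechanics.

def predicted_malting_premium(q_total: float) -> float:
    # Equation (5): the premium falls as total U.S. barley production rises
    # (coefficients patterned on one recovered column of table I.3).
    return 1.07 - 0.0018 * q_total

def predicted_feed_price(corn_price: float, q_total: float) -> float:
    # Equation (6): feed barley price rises with corn prices and falls with
    # total production (the 0.30 intercept here is purely hypothetical).
    return 0.30 + 0.78 * corn_price - 0.0008 * q_total

# Malting price = estimated feed price + estimated premium, as the text assumes.
q_total, corn = 360.0, 2.40  # hypothetical production measure and corn $/bushel
feed_price = predicted_feed_price(corn, q_total)
malt_price = feed_price + predicted_malting_premium(q_total)
print(f"feed ${feed_price:.2f}/bu, malting ${malt_price:.2f}/bu")
```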
As the final step in estimating barley revenue in the absence of vomitoxin, we combined our previously obtained estimates of production and prices without the disease to obtain revenue as the product of production and price. However, since barley production data are only for total production, and are not separated out for the malting and the feed grain markets, we first needed to allocate total production to these markets. We derived the proportion of the crop sold as malting barley and feed grain barley by using actual data on the prices of malting barley, $P^{M}$, feed grain barley, $P^{F}$, and the total average barley price, $P^{B}$. Because the overall price of barley, $P^{B}$, is a weighted average of the malting and feed grain prices, using equations 7 and 8 we can obtain the proportion of barley sold to the malting market, nbar, and the proportion sold to the feed grain market, (1 – nbar). Equation 7 shows the weighted-average relationship:

(7) $P^{B} = nbar\,P^{M} + (1 - nbar)\,P^{F}$

Rearranging terms, we can express the proportion of barley sold to the malting market as a function of observed prices, as equation 8 shows:

(8) $nbar = \dfrac{P^{B} - P^{F}}{P^{M} - P^{F}}$

To estimate the amount of production that would have gone to the malting barley and feed grain markets for each district in each year from 1993 through 1997 in the absence of vomitoxin, we multiplied these weights by our estimate of total barley production (without vomitoxin). For instance, in order to account for the amount of barley that typically went into the malting side of the market for CRD i in year t, we multiplied $\overline{nbar}^{M}_{i}$, the average proportion of malting barley for that CRD, by the estimated production in that district in year t in the absence of the disease, $q^{n}_{it}$ (from equation 4). Finally, to estimate malting barley revenue for years 1993 through 1997 in the absence of vomitoxin, we multiplied the estimated malting barley production for each district and year by the predicted malting barley price (in the absence of vomitoxin), $nP^{M}_{it}$, for that district and year. We used the same procedure to estimate the revenue for all CRDs for the feed barley market. Equation 9 summarizes how we estimated total barley revenue in a particular district and year in the absence of vomitoxin, $NREV_{it}$:

(9) $NREV_{it} = \overline{nbar}^{M}_{i}\,q^{n}_{it}\,nP^{M}_{it} + \overline{nbar}^{F}_{i}\,q^{n}_{it}\,nP^{F}_{it}$

where

$\overline{nbar}^{M}_{i}$ = proportion of malting barley production without vomitoxin for CRD i;
$\overline{nbar}^{F}_{i}$ = proportion of feed barley production without vomitoxin for CRD i;
$q^{n}_{it}$ = total quantity of barley production for CRD i in year t (from equation 4);
$nP^{M}_{it}$ = predicted malting barley price, without vomitoxin, for CRD i, time t;
$nP^{F}_{it}$ = predicted feed grain barley price, without vomitoxin, for CRD i, time t.

We used the chain-type price index for gross domestic product to express all revenues in 1997 dollars and then totaled over the years 1993 through 1997 to obtain an estimate for each district of what barley revenues would have been during this period in the absence of vomitoxin. Using equation 10, we calculated the actual amount of revenue from barley production, $AREV_{it}$, for CRD i in time period t as:

(10) $AREV_{it} = abar^{M}_{it}\,qa_{it}\,aP^{M}_{it} + abar^{F}_{it}\,qa_{it}\,aP^{F}_{it}$

The actual amount of production in each CRD in each year is denoted $qa_{it}$, while the actual market prices of malting barley and feed barley are represented by $aP^{M}_{it}$ and $aP^{F}_{it}$, respectively. For each year between 1993 and 1997 and for each CRD, we calculated the proportion of barley sold to the malting market, $abar^{M}_{it}$, and the proportion sold as feed, $abar^{F}_{it}$, using the same method as we did in equation 8. Table I.6 displays these weights for each CRD. Using equation 11, we calculated the total change in revenue from barley production for North Dakota due to scab and vomitoxin, $\Delta REV$:

(11) $\Delta REV = \sum_{i}\sum_{t}\left(AREV_{it} - NREV_{it}\right)$

This total represents the sum of the differences between the actual revenue, $AREV_{it}$, and the predicted revenue in the absence of vomitoxin, $NREV_{it}$, for each CRD i in each year, 1993 through 1997.
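The revenue accounting in equations (7) through (11) is likewise simple arithmetic once the shares, quantities, and prices are known. The following sketch strings them together; the record fields and values are hypothetical, and revenues are assumed to be already expressed in constant dollars.

```python
# A sketch of equations (7)-(11): recover the malting share from observed
# prices, compute predicted and actual revenues, and total the change.

def malting_share(p_barley: float, p_malt: float, p_feed: float) -> float:
    # Equation (8), derived from the weighted average in equation (7):
    # P_B = nbar * P_M + (1 - nbar) * P_F.
    return (p_barley - p_feed) / (p_malt - p_feed)

def district_revenue(share_malt: float, quantity: float,
                     p_malt: float, p_feed: float) -> float:
    # Equations (9) and (10) share this form: malting plus feed revenue.
    return share_malt * quantity * p_malt + (1.0 - share_malt) * quantity * p_feed

def total_revenue_change(district_years) -> float:
    # Equation (11): actual minus predicted revenue, summed over districts
    # and years; a negative total indicates a loss.
    return sum(district_revenue(r["abar"], r["q_a"], r["aP_M"], r["aP_F"])
               - district_revenue(r["nbar"], r["q_n"], r["nP_M"], r["nP_F"])
               for r in district_years)

# One hypothetical district-year in which actual revenue falls short.
records = [{"q_a": 40_000_000, "abar": 0.45, "aP_M": 2.40, "aP_F": 1.70,
            "q_n": 55_000_000, "nbar": 0.65, "nP_M": 2.30, "nP_F": 1.80}]
print(f"revenue change: ${total_revenue_change(records):,.0f}")
```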
We gathered data from several sources for our calculation of the revenue losses for North Dakota barley as a result of scab and vomitoxin. Our main source of data, the North Dakota Agricultural Statistics Service, provided information by CRD on planted and harvested barley acres, total barley production, malting barley prices, feed grain barley prices, and average barley yields for 1959 through 1997. We used weather data (average temperature and total precipitation by CRD from 1950 to 1997) supplied by USDA's Economic Research Service as well as by North Dakota State University. North Dakota State University area crop extensionists and plant pathologists familiar with vomitoxin provided estimates of the fraction of yield shortfall attributed to vomitoxin for 1993 through 1997. Finally, we used data on U.S. barley production and corn prices from 1959 to 1997 from USDA's National Agricultural Statistics Service in our estimation of malting barley premiums and barley feed prices. According to our analysis of the data from the Grain Inspection, Packers and Stockyards Administration's (GIPSA) 1998 Sampling Variability Study, the Veratox test results from Neogen (the kit's manufacturer) and Grand Forks Grain Inspection, Inc. (a GIPSA-authorized testing facility) differed significantly from each other and from GIPSA's high pressure liquid chromatography (HPLC) results. Our analysis is not projectable to all Veratox test results in North Dakota because the data from GIPSA's sampling study are not representative of all Veratox testing and barley sampling throughout the state. GIPSA's sampling study was designed to determine how sampling size and method affect variability in vomitoxin test results. In this study, GIPSA (1) obtained six bulk barley samples from various elevators to study the effect of sampling size on variability and (2) sampled 10 trucks using different sampling methods to determine the effect of sampling method on variability. All samples were cleaned, ground, and subdivided into portions for testing by the Grand Forks Grain Inspection's laboratory. Additional portions were provided to Neogen and Romer (another test kit manufacturer) for testing in their respective laboratories, and portions of the truck samples were tested by GIPSA in its Kansas City, Missouri, laboratory using the HPLC method. Neogen and Grand Forks Grain Inspection tested each subsample using Neogen's Veratox test kit. Neogen performed two tests on each subsample it received, and Grand Forks Grain Inspection performed one test on each subsample it received. GIPSA did not intend to have the results from its barley sampling study represent the variability that exists with all barley sampling in North Dakota. It selected its test lots to ensure that vomitoxin concentration levels in the samples would fall within the Veratox test kit's range of measurement ability—that is, from 0.5 parts per million (ppm) to 5 ppm. In addition, test data were from samples that differed in size and method of collection because GIPSA's purpose was to assess the effect of these variables (size and sampling method) on vomitoxin test results. However, because GIPSA found that sample size and sampling method did not significantly alter the variability of test results, we concluded that the lack of uniformity in sample size and sampling method is not a significant limitation to our analysis. We analyzed 376 Veratox tests performed by Grand Forks Grain Inspection and 692 tests performed by Neogen. According to GIPSA officials, greater variability occurs when results from multiple test facilities are analyzed.
Thus, since our analysis is based on data from only two testing facilities, our results may not be representative of the true amount of variability in vomitoxin test results conducted in North Dakota. According to GIPSA officials, the variability of test results differs depending on the concentration of vomitoxin in the barley sample. At their recommendation, we used GIPSA's HPLC test result to represent the true concentration of vomitoxin in a sample and grouped the Veratox test results into four ranges. The first range contains results from barley samples with relatively low concentrations of vomitoxin—those with HPLC results of 0.7 ppm to 1 ppm. The last category contains results from samples with the highest concentrations of vomitoxin—those with HPLC results of 3.1 ppm to 4 ppm. Our analysis of Veratox test results from Neogen and the Grand Forks testing facility showed differences in the amount of vomitoxin measured at each location (see table II.1). That is, testing identical samples of barley at the testing facility and at the manufacturer resulted in different measurements of vomitoxin. Specifically, using the HPLC test results to represent the true concentration of vomitoxin, we found that at concentrations between 0.7 and 4 ppm, Neogen's estimation of vomitoxin was, on average, higher than the testing facility's. Given these differences, and the fact that small differences in the amount of vomitoxin measured can affect barley prices, we concluded that producers could have received different prices from each testing location if the test results had been the basis for a commercial sale. We also found in some cases that the results from Neogen and Grand Forks Grain Inspection differed, on average, from GIPSA's HPLC reference method (see table II.2). Specifically, test results from the manufacturer were higher than test results from the HPLC reference method at three of the four concentration ranges we reviewed. For example, when HPLC results ranged from 0.7 to 1.0 ppm, we estimated that the average of Neogen's results would be between 1.3 and 1.5 ppm, which is higher than the average HPLC results. In addition, average Veratox results from the Grand Forks facility were lower than the reference method at two of four concentration ranges. For instance, when HPLC results ranged from 2.1 to 3.0 ppm, we estimated that the average test result from the testing facility would be between 1.7 and 2.0 ppm, which is lower than the average for the reference method. The fact that in one case the manufacturer's test results were higher, on average, than the reference method's results, while the testing facility's results were lower, further demonstrates that variability can occur among testing facilities using the Veratox test kit. Related GAO products: Wheat Pricing: Information on Transition to New Tests for Protein (GAO/RCED-95-28, Dec. 8, 1994); Midwest Grain Quality (GAO/RCED-94-66R, Nov. 1, 1993).
Pursuant to a congressional request, GAO reviewed the effect of scab and vomitoxin on North Dakota barley crops, focusing on: (1) the financial impact from scab and vomitoxin on barley farmers; (2) the performance of vomitoxin test methods; and (3) short- and long-term actions that could help reduce the impact of scab and vomitoxin on North Dakota barley farmers. GAO noted that: (1) North Dakota barley farmers have experienced extensive revenue losses from scab and vomitoxin damage; (2) from 1993 through 1997, these farmers suffered estimated cumulative losses of about $200 million from scab and vomitoxin--equal to about 17 percent of the $1.2 billion in total barley revenues they received during this period; (3) while most of the revenue losses resulted from decreases in barley production, losses also resulted from severe price discounts; (4) maltsters and brewers, the traditional buyers of North Dakota's malting barley, have reacted to the scab and vomitoxin damage by purchasing less barley from North Dakota farmers and more from Canadian and other western U.S. sources; (5) three tests are generally used to measure vomitoxin concentrations in barley produced in North Dakota; (6) one is a field kit, called Veratox, which is commonly used by grain elevators and commercial testing facilities and is the test that most directly affects the prices farmers receive for their barley; (7) the Veratox test can produce results that vary at concentrations critical to pricing decisions; (8) testing experts attribute variations in test results to several sources, including the skill of the technician conducting the test; (9) they stress the importance of quality assurance measures and training to help reduce this variation; (10) the other two tests--high-pressure liquid chromatography and gas chromatography--are reference methods that are used primarily in research laboratories for such purposes as checking the performance of the Veratox kit; (11) according to analytical chemists and other testing experts, these tests provide accurate and consistent test results; (12) however, because of the complexity and the cost of the equipment for these two tests, they are not practical for use at commercial testing facilities and other locations that serve barley farmers; (13) short-term actions, such as rotating crops and spraying with fungicides, may help reduce scab and vomitoxin's impact under conditions of light infestation; (14) however, according to North Dakota agriculture experts, the benefits of these actions are negligible during periods of moderate to severe infestation; (15) from 1993 through 1997, several counties in the Red River Valley of North Dakota experienced moderate or severe scab and vomitoxin infestation; and (16) the longer-term action of developing more scab-resistant barley may also help reduce the disease's impact under conditions of light infestation.
Overall federal support for education includes not only federal funds but also nonfederal funds associated with federal legislation. More than 30 departments or agencies administer federal education dollars, although the Department of Education administers the most, accounting for about 43 percent. Of the roughly $73 billion appropriated to support education, half supports elementary and secondary education. Overall, six program areas account for almost two-thirds of all budgeted education funding. Many departments and programs may direct funds to the same groups, such as poor children. Although some coordination takes place and some programs have been consolidated, much more needs to be done to coordinate the multiple education programs scattered throughout the federal government.

NCES estimates federal support for education, excluding tax expenditures, at approximately $100.5 billion in fiscal year 1997. This figure is an estimate of the value of the assistance to the recipients—not the cost to the government. NCES describes this support as falling into two main categories: funds appropriated by the Congress (on-budget) and a combination of what NCES calls "off-budget" funds and nonfederal funds generated by federal legislation. Appropriated funds include items such as grants, federal matching funds, and the administration and subsidy costs for direct and guaranteed student loans. Off-budget funds are the portion of direct federal loans anticipated to be repaid. Nonfederal funds generated by federal legislation include nonfederal (generally state or local) funds provided to obtain federal matching funds and capital provided by private lenders for education loans. According to NCES, in fiscal year 1997, appropriated funds constituted approximately three-quarters of the total: $73.1 billion.

To ensure that all Americans have equal access to educational opportunities, the federal government often targets its education funds to groups, such as poor children, that for various reasons have not had equal access to educational opportunities. The government may also target funds to ensure that all children have access to vital resources—such as well-trained teachers and technology. These concerns have helped disperse federal education programs to over 30 departments or agencies. The Department of Education spends the most, accounting for about 43 percent of appropriations, or an estimated $31 billion in fiscal year 1997. (See fig. 2.) The Department of Health and Human Services (HHS) spends the next largest amount, with about 18 percent, or an estimated $13 billion. Over half of this amount ($7.1 billion) funded research; another $4 billion funded the Head Start program. Other departments with federal education dollars include the Departments of Agriculture, Labor, and Defense, with 13, 6, and 5 percent, respectively. The remaining 15 percent is spent by more than 30 additional departments or agencies.

[Figure legend garbled in source; the recoverable labels include Title I (Education), Pell Grants (Education), and programs at Labor and HHS.]

Elementary and secondary education programs account for half of all budgeted federal education dollars. (See fig. 5.) In addition, the federal government provides funds for postsecondary education (generally as grants and loan guarantees), research (through such departments as HHS, Energy, and Defense, along with the National Science Foundation), and other activities such as rehabilitative services. Federal funds are generally targeted to specific groups.
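The fiscal year 1997 budget composition described above can be cross-checked with simple arithmetic. The sketch below uses only the dollar figures and percentages cited in the text and is illustrative only; it is not part of NCES's methodology.

```python
# Cross-checking the fiscal year 1997 figures cited above (dollars in billions).
total_support = 100.5  # NCES estimate of total federal support for education
appropriated = 73.1    # on-budget (appropriated) portion

# Appropriated funds as a share of the total -- roughly three-quarters.
print(f"Appropriated share: {appropriated / total_support:.0%}")  # ~73%

# Departmental shares of appropriations cited in the text.
shares = {"Education": 0.43, "HHS": 0.18, "Agriculture": 0.13,
          "Labor": 0.06, "Defense": 0.05}
for dept, share in shares.items():
    print(f"{dept}: ~${appropriated * share:.0f} billion")

# Everything else: more than 30 additional departments or agencies.
print(f"All other agencies: {1 - sum(shares.values()):.0%}")  # ~15%
```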
However, many education programs administered by separate agencies may target any single group. Although we have no comprehensive figures on the number of programs targeted to different groups, figure 6 shows the number of programs in various agencies targeted to three specific groups—young children, at-risk and delinquent youth, and teachers.

[Figure 6 labels garbled in source: teachers (FY1993), young children (FY1992 & 1993), at-risk and delinquent youth (FY1996).]

Many of the programs serving at-risk and delinquent youth, for example, provide similar services. In 1996, 47 federal programs provided substance abuse prevention, 20 provided substance abuse treatment, and 57 provided violence prevention. Thirteen federal departments and agencies administered these programs and received about $2.3 billion. In addition, the same department or agency administered many programs providing similar services. Justice, for example, had nine programs providing substance abuse prevention services to youth in 1996. Furthermore, many individual programs funded multiple services: about 63 percent of the programs funded four or more services each in 1996, according to our review.

We also examined programs that provide teacher training. For this target group, multiple federal programs exist in a number of federal agencies. For example, the federal government funded at least 86 teacher training programs in fiscal year 1993 in nine federal agencies and offices. For the 42 programs for which data were available, agency officials reported that over $280 million was obligated in fiscal year 1993. Similarly, in fiscal years 1992 and 1993, the government funded over 90 early childhood programs in 11 federal agencies and 20 offices, according to our review. Our analysis showed that one disadvantaged child could have been eligible for as many as 13 programs. Many programs, however, reported serving only a portion of their target population and maintained long waiting lists.

Secretary of Education Riley testified recently before this Task Force that the Department of Education has made progress in both eliminating low-priority programs and consolidating similar programs. He noted, for example, that the reauthorization of the Individuals With Disabilities Education Act reduced the number of programs from 14 to 6. In addition, the Department has proposed eliminating or consolidating over 40 programs as part of the reauthorization of the Higher Education Act. Even so, the sheer number of programs creates difficulty for those trying to access the most appropriate services and funding sources. Federal programs that contribute to similar results should be closely coordinated to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. Uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort.

The large numbers of programs and agencies supporting education activities and target groups make management and evaluation information critical to the Congress and agency officials. Information about the federal education effort is needed by many different decisionmakers, for different reasons, at different times, and at different levels of detail. Much of that information, however, is not currently available. To efficiently and effectively operate, manage, and oversee programs and activities, agencies need reliable, timely program performance and cost information and the analytic capacity to use that information.
For example, agencies need to have reliable data during their planning efforts to set realistic goals and later, as programs are being implemented, to gauge their progress toward reaching those goals. In addition, in combination with an agency's performance measurement system, a strong program evaluation capacity is needed to provide feedback on how well an agency's activities and programs contributed to reaching agency goals. Systematically evaluating a program's implementation can also provide important information about the program's success or lack thereof and suggest ways to improve it. The Congress, for its part, needs to know whether individual programs are working nationwide. Finally, the Congress needs the ability to look across all programs designed to help a given target group to assess how the programs are working together and whether the overall federal effort is accomplishing a mission such as preventing substance abuse among youths.

In addition, for specific oversight purposes, congressional decisionmakers sometimes want specific kinds of information. For example, this Task Force has indicated that two types of information would be particularly useful to its mission: knowing which federal education programs target which groups and knowing what characterizes successful programs. Some information is available about preK-12 programs that do not appear to be achieving the desired results and others that appear to be successful. Secretary Riley, for example, has testified that the Department will be doing more to disseminate the latest information on what works in education.

Our clearest evidence about a lack of positive effect from federal expenditures comes from one of the largest programs: title I. Title I of the Elementary and Secondary Education Act is the largest federal elementary and secondary education grant program. It has received much attention recently because of an Education Department report showing that, overall, title I programs do not ultimately reduce the effect of poverty on a student's achievement. For example, children in high-poverty schools began school academically behind their peers in low-poverty schools and could not close this gap as they progressed through school. In addition, when assessed according to high academic standards, most title I students failed to exhibit the reading and mathematics skills expected for their respective grade levels. The study concluded that students in high-poverty schools were the least able to demonstrate the expected levels of academic proficiency.

Our own work has identified successful strategies in several areas: school violence, substance abuse prevention, and school-to-work transition. In 1995, we also prepared an overview of successful and unsuccessful practices in schools and workplaces. Our reviews identified several important program characteristics: strong program leadership, linkages between the program and the community, and a clear and comprehensive approach. The Department of Education also has contracts for evaluating what works. For example, the Prospects study—in addition to providing the data on the overall limited effect of title I—analyzed the five high-performing, high-poverty schools in its sample of 400 schools. Although the number of schools is too small for conclusive generalizations, the study described the characteristics of these schools as "food for thought" for future research on successful programs.
These schools had an experienced principal; low teacher and pupil turnover; an emphasis on schoolwide efforts that seek to raise the achievement of every student; a greater use of tracking by student ability; a balanced emphasis on remedial and higher order thinking in classroom involvement; and higher parent support and expectations than low-performing, high-poverty schools. (See Urban and Suburban/Rural Special Strategies for Educating Disadvantaged Children: Findings and Policy Implications of a Longitudinal Study, Department of Education (Washington, D.C.: Apr. 1997).)

Significant information gaps exist, however, about both programs and their outcomes. Currently, no central source of information exists about all the programs providing services to the same target groups among different agencies or about those providing a similar service to several target groups. Instead, we have had to conduct the specific analyses previously described for at-risk and delinquent youth, young children, and teachers—as well as others—to obtain this information. Moreover, in our evaluations of specific programs—some of which get billions of federal dollars each year—the most basic information is lacking. For example, our study of the Safe and Drug-Free Schools Program revealed that the program has no centralized information about what specific services the funds pay for—much less whether the money is being spent effectively. In our ongoing work on Head Start, we found that no list of Head Start classrooms and their locations existed. We have also found that existing Head Start research provides little information on the impact of the current program, given subsequent program changes and changes in the population served. We have recommended that HHS include in its research plan an assessment of the impact of regular Head Start programs. Although the Department believes that clear evidence exists of the positive impacts of Head Start services, it does have plans to evaluate the feasibility of conducting such studies.

More promising, but still incomplete, is the information available for Safe and Drug-Free Schools programs. Some information on effectiveness and impact has been collected, although overall evaluations of the Safe and Drug-Free Schools program have not been completed. However, Education's evaluative activities focus on broader aspects of program implementation and not the effectiveness of all Safe and Drug-Free Schools programs nationwide. Moreover, the lack of uniform information requirements on program activities and effectiveness may create a problem for federal oversight.

If (1) process information is critical for program, agency, and interagency management of federal elementary and secondary programs, and (2) outcome and impact information is needed to assess results and focus efforts on what works, why is information not readily available? The challenges to collecting that information include competing priorities—such as reducing paperwork and regulatory burden and promoting flexibility in program implementation—that restrict data collection and evaluation activities; the cost of data collection; the secondary role of education in many programs; the difficulty of obtaining impact evaluation information (under any circumstances); the special challenge of assessing the overall effects of federal efforts involving multiple federal programs in multiple agencies; and, until recently, a lack of focus on results and accountability. In addition, under the Paperwork Reduction Act, OMB is responsible for approving collections of information done by the federal government, whether through questions, surveys, or studies.
This can limit the burden on state and local governments and others; however, it can also limit the amount of information collected by the Department of Education. Similarly, the challenge of balancing flexibility and accountability is apparent in efforts to provide certain federal education funds as block grants. Agencies face the challenge of balancing the flexibility block grants afford states to set priorities on the basis of local need with their own need to hold states accountable for achieving federal goals. For example, the Safe and Drug-Free Schools program allows a wide range of activities and permits states to define the information they collect on program activities and effectiveness. With no requirement that states use consistent measures, the Department faces a difficult challenge in assembling the triennial state reports to develop a nationwide picture of the program's effectiveness. One promising alternative to traditional block grants is the use of Performance Partnership Grants (PPG). Under PPGs, the states and the federal government negotiate an arrangement that identifies specific objectives and performance measures regarding outcomes and processes. This approach gives the states more control over their funding decisions, while encouraging them to accept greater accountability for results.

Obtaining and analyzing information to manage and evaluate programs requires significant resources. For example, the Department of Education's strategic plan cites the need to improve the quality of performance data on programs and operations and to promote the integration of federal programs with one another as well as with state and local programs. Toward this end, in fiscal year 1997, the Department of Education was appropriated about $400 million for educational research and improvement. Education estimates that an additional $367 million was obligated by the Department for information technology for its operations. In addition, evaluation research is costly. For example, in fiscal year 1993, the Department awarded 38 contracts totaling more than $20 million for evaluating elementary and secondary education programs. Contract amounts ranged from $38,000 to fund a program improvement conference to $6.8 million for implementing the chapter 1 longitudinal study (Prospects). But that amount accounted for only 1 year of the multiyear Prospects study: the longitudinal study to assess the impact of significant participation in title I programs on student and young adult outcomes cost about $25 million over a 4-year period. The median cost for an evaluation contract was about $180,000 in fiscal year 1993.

In our testimony last spring on challenges facing the Department of Education, we noted that the Department needed more information to determine how its programs are working and that additional departmental resources may be needed to manage funds and provide information and technical assistance. For example, title I is intended to promote access to and equity in education for low-income students. The Congress modified the program in 1994, strengthening its accountability provisions and encouraging the concentration of funds to serve more disadvantaged children. At this time, however, the Department does not have the information it needs to determine whether the funding is being targeted as intended.
Although the Department has asked for $10 million in its fiscal year 1998 budget request to evaluate the impact of title I, it has only just begun a small study of selected school districts to examine targeting and identify any necessary mid-course modifications. The ultimate impact of the 1994 program modifications could be diminished if the funding changes are not implemented as intended.

Many federal programs involving education have other primary purposes. For example, the Department of Agriculture's child nutrition program provides school breakfast and school lunch programs. The Head Start program also emphasizes health and nutrition as well as parenting skills; cognitive development is only one of six program goals. In addition, Safe and Drug-Free Schools Act money can be used to provide comprehensive health education, whose major goals and objectives are broader than just drug and violence prevention.

Good evaluative information about program effects is difficult to obtain. Each of the tasks involved—measuring outcomes, ensuring the consistency and quality of data collected at various sites, establishing the causal connection between outcomes and program activities, and distinguishing the influence of extraneous factors—raises formidable technical or logistical problems. Thus, evaluating program impact generally requires a planned study and, often, considerable time and expense. Program features affect the relative difficulty of getting reliable impact information. The more varied the program activities and the less direct the connection between the provider and the federal agency, the greater the difficulty of getting comparable, reliable data on clients and services. For example, a federal agency whose own employees deliver a specified service can probably obtain impact data more easily than one that administers grants that states then pass on to several local entities to be used in different ways. Also, because of the absence of contrasting comparison groups, it is extremely difficult to estimate the impact of a long-standing program that covers all eligible participants.

The sheer number of departments and agencies that spend federal education dollars makes it hard to aggregate existing information among federal programs for certain issues or target groups. Each program may have its own measures on the federal, state, and local levels. Even for a single program, each state may use different measures (as mentioned earlier regarding the Safe and Drug-Free Schools and Communities Act programs), creating difficult challenges to developing a nationwide picture of the program's effectiveness. Yet this is just 1 of the 127 programs administered by 15 agencies that target at-risk and delinquent youth. If the Congress wanted to know the overall effectiveness of the federal effort in helping at-risk and delinquent youth, the task would be even more daunting than the one the Department of Education faces in developing a nationwide picture of one flexibly administered program.

Federally funded programs have historically placed a low priority on results and accountability. Therefore, until recently, the statutory framework has not been in place to bring a more disciplined approach to federal management and to provide the Congress and agency decisionmakers with vital information for assessing the performance and costs of federal programs.
In recent years, however, governments around the world, including ours, have faced a citizenry that is demanding that governments become more effective and less costly. These two demands are driving the move to a performance-based approach to managing public-sector organizations. GPRA is the centerpiece of a statutory framework provided by recent legislation to bring needed discipline to federal agencies' management activities. Other elements are the expanded Chief Financial Officers Act, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996. These laws each responded to a need for accurate, reliable information for executive branch and congressional decisionmaking. In combination, they provide a framework for developing (1) fully integrated information about an agency's mission and strategic priorities, (2) performance data for evaluating the achievement of these goals, (3) the relationship of information technology investments to meeting performance goals, and (4) accurate and audited financial information about the costs of meeting the goals.

GPRA requires that agencies clearly define their missions, establish long-term strategic goals as well as annual goals linked to them, measure their performance according to the goals they have set, and report on their progress. In addition to ongoing performance monitoring, agencies are also expected to perform discrete evaluations of their programs and to use information obtained from these evaluations to improve their programs. Agencies are also expected to closely coordinate with other federal agencies whose programs contribute to similar results to ensure that goals are consistent and, as appropriate, that program efforts are mutually reinforcing. Each agency was required to submit to OMB and the Congress a strategic plan explaining its mission, long-term goals, and strategies for meeting these goals by September 30, 1997, and the Department of Education did so. Agencies must also report annually on their performance, including the actions needed to meet any unmet goals. In addition, by early 1998, OMB must submit to the Congress a governmentwide performance plan based on agencies' plans as part of the president's fiscal 1999 budget.

For federal education programs, this shift to a focus on results can help inform decisionmakers about effective program models and the actual activities and characteristics of individual federal programs. GPRA provides an incentive for agency and program personnel to systematically assess their programs and identify and adapt successful practices of similar programs. The act also provides an early warning system for identifying goals and objectives that are not being met so that agency and program staff can replace ineffective practices with effective ones. The act's emphasis on coordination among similar programs and linking results to funding also provides a way to better understand the overall effect of federal activities and to identify programs that might be abolished, expanded, or consolidated with others. If agencies and OMB use the annual planning process to highlight crosscutting program issues, the individual agency performance plans and the governmentwide performance plan should provide the Congress with the information needed to identify agencies and programs addressing similar missions. Once these programs are identified, the Congress can consider the associated policy, management, and performance implications of crosscutting program issues.
This information should also help identify the performance and cost consequences of program fragmentation and the implications of alternative policy and service delivery options. These options, in turn, can lead to decisions about department and agency missions and allocating resources among those missions. Achieving the full potential of GPRA is a particularly difficult challenge because of the multiple programs and many departments involved in the federal effort to improve public K-12 education. Yet this challenge—combined with the limited data currently available about the programs and their effectiveness—is precisely why GPRA is needed. It is also why we believe it holds promise to help improve the information available to decisionmakers and, thus, the federal effort in this important area.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Task Force may have.

Managing for Results: Building on Agencies' Strategic Plans to Improve Federal Management (GAO/T-GGD/AIMD-98-29, Oct. 30, 1997). Safe and Drug-Free Schools: Balancing Accountability With State and Local Flexibility (GAO/HEHS-98-3, Oct. 10, 1997). Education Programs: Information on Major Preschool, Elementary, and Secondary Education Programs (GAO/HEHS-97-210R, Sept. 15, 1997). Education Programs: Information on Major Postsecondary Education, School-to-Work, and Youth Employment Programs (GAO/HEHS-97-212R, Sept. 15, 1997). At-Risk and Delinquent Youth: Fiscal Year 1996 Programs (GAO/HEHS-97-211R, Sept. 2, 1997). Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap (GAO/AIMD-97-146, Aug. 29, 1997). Substance Abuse and Violence Prevention: Multiple Youth Programs Raise Questions of Efficiency and Effectiveness (GAO/T-HEHS-97-166, June 24, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997). Head Start: Research Provides Little Information on Impact of Current Program (GAO/HEHS-97-59, Apr. 15, 1997). Department of Education: Challenges in Promoting Access and Excellence in Education (GAO/T-HEHS-97-99, Mar. 20, 1997). Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, Sept. 1, 1995). Schools and Workplaces: An Overview of Successful and Unsuccessful Practices (GAO/PEMD-95-28, Aug. 31, 1995). Multiple Teacher Training Programs: Information on Budgets, Services, and Target Groups (GAO/HEHS-95-71FS, Feb. 22, 1995).
GAO discussed: (1) the amount and complexity of federal support for education; (2) additional planning, implementation, and evaluative information needed by agencies and the Congress on federal education programs; and (3) some of the challenges of obtaining more and better information. GAO noted that: (1) billions in federal education dollars are distributed through hundreds of programs and more than 30 agencies; (2) agencies and the Congress need information to plan, implement, and evaluate these programs; (3) to gauge and ensure the success of these programs, the Congress and agencies need several kinds of information; (4) they need to know which specific program approaches or models are most effective, the circumstances in which they are effective, and if the individual programs are working nationwide; (5) they also need to be able to look across all programs that are designed to help a given target group to see if individual programs are working efficiently together and whether the federal effort is working effectively overall; (6) GAO believes a close examination of these multiple education programs is needed; (7) the current situation has created the potential for inefficient service and reduced overall effectiveness; (8) basic information about programs and program results is lacking and there are many challenges in obtaining this important information; and (9) the Government Performance and Results Act of 1993 (GPRA) holds promise as a tool to help agencies manage for results, coordinate their efforts with other agencies, and obtain the information they need to plan and implement programs and evaluate program results.
According to 2013 Census estimates, more than 640,000 American Indians and Alaska Natives reside on tribal lands. The federal government has recognized many American Indian tribes and Alaska Native Villages as distinct, independent political communities with inherent sovereignty. Tribal lands vary in size, demographics, and location. The smallest are less than one square mile, and the largest, the Navajo Nation, is more than 24,000 square miles. Most tribal lands are in remote, rural locations, but some are located near urban areas. There are more than 300 Indian tribes in the continental United States and more than 200 Alaska Native Villages that are federally recognized. Tribal governments have the option of forming entities that manage tribal affairs, including schools, housing, health, and economic enterprises. Additionally, the Alaska Native Claims Settlement Act of 1971 directed the establishment of 12 regional corporations representing geographic regions of the entire state to, among other things, resolve long-standing aboriginal land claims and foster economic development in Alaska. These corporations distribute land and monetary benefits to Alaska Natives to provide a fair and just settlement of aboriginal land claims in Alaska. The regional corporations have corresponding nonprofit organizations that provide social services to the villages. Figure 1 shows tribal lands in the United States according to the 2010 Census, and the Alaska Native regions.

Native Americans are among the most economically distressed groups in the United States. According to the Census' 2014 American Community Survey (ACS), about 28.3 percent of Native Americans live in households with incomes below the federal poverty level—compared to 15.5 percent for the U.S. population as a whole. In addition, ACS data show that residents of tribal lands often lack basic infrastructure, such as water and sewer systems, and telecommunications services. We reported in 2006 that, according to tribal officials and government agencies, conditions on tribal lands have made successful economic development more difficult than in other parts of the country because the high costs and small markets associated with investment on tribal lands deter business investment. We found that this was particularly true for businesses, such as Internet providers, that must build out infrastructure to serve tribal lands.

Customers generally subscribe to Internet service through a fixed or mobile device. In-home fixed Internet plans are often sold as a monthly subscription by cable television or telephone companies. Consumers can connect a variety of devices to in-home fixed networks through a wired or wireless connection. Service is provided via different types of technology. Service from cable television companies is generally provided through the same coaxial cables that deliver television programming. Service from telephone companies is generally provided through traditional copper telephone lines—commonly referred to as digital subscriber line (DSL) service—or fiber-optic lines, which convert electrical signals carrying data into light and send the light through glass fibers. In areas where none of these wired connections exist, some carriers offer fixed wireless devices for home use. Advances in technology, such as the use of fiber optics and new wireless technologies, have allowed providers to offer increasingly fast high-speed Internet that supports new services and applications such as streaming video.
Only the faster speeds attained through fiber and other new technologies are considered high-speed Internet. In 2010, FCC stated that every household and business in America should have access to affordable advanced telecommunication service with a speed of at least 4 Mbps download and at least 1 Mbps upload and that this target should be reset every four years. In January 2015, FCC adopted a speed benchmark of download speeds of at least 25 Mbps and upload speeds of at least 3 Mbps. Generally, only cable or fiber can deliver this level of broadband service to consumers' homes. Mobile service is provided through cell tower coverage, with data transmitted over the radio spectrum. Traditionally, mobile service providers sold access to the Internet as an option to mobile telephone service plans. A number of devices may connect to mobile high-speed networks, such as smart phones, tablets, and mobile devices that enable laptops to connect to a wireless service.

The federal government has recognized the difficulties of providing services on tribal lands and has maintained several ongoing programs to increase Internet availability and access in unserved areas. The U.S. Department of Agriculture's (USDA) Rural Utilities Service (RUS) and FCC are responsible for several programs designed to improve the nation's telecommunications infrastructure. RUS's programs focus on rural telecommunications development, while FCC's programs under the Universal Service Fund (USF) focus on providing support for areas where the cost of providing services is high, as well as for low-income consumers, schools, libraries, and rural health care facilities. All of these programs, which are discussed in more detail later in this report, seek to expand high-speed Internet access and can benefit tribal lands and their populations. The American Recovery and Reinvestment Act of 2009 (Recovery Act) authorized other, one-time federal programs, such as the Broadband Initiatives Program and the Broadband Technologies Opportunities Program, to expand high-speed Internet access in unserved areas, including on tribal lands.

The Recovery Act also directed FCC to develop a national broadband plan to ensure every American had access to high-speed Internet service. In March 2010, FCC issued the National Broadband Plan, which included a centralized vision for achieving affordability and maximizing use of high-speed Internet to advance community development, health care delivery, education, job creation, and other national purposes. With regard to tribal lands, the Plan recommended that the Commission increase its commitment to government-to-government consultation with tribal leaders and consider increasing tribal representation in telecommunications planning. In July 2010, FCC announced the creation of the Office of Native Affairs and Policy. The office was tasked with promoting the deployment and adoption of communication services and technologies throughout tribal lands and native communities by, among other things, ensuring the recommended consultation with tribal governments and native organizations. Officials from the Office of Native Affairs and Policy said that the office has helped to facilitate, draft, analyze, and advise on policy issues affecting Native communities as part of FCC's decision-making process.

Tribal officials we interviewed said they place a high priority on institutional and personal Internet access because of numerous benefits, including the following.
Economic Development: Officials from most tribes said high-speed Internet is essential for economic development, such as finding employment or establishing online businesses. FCC also found that community access to Internet services is critical in facilitating job placement, career advancement, and other uses that help to stimulate economic activity. For example, a resident of an Alaska Native Village operates a tour company and stated that the booking, communication, and advertising of the business are completely reliant on a satellite Internet connection. However, the unreliable quality of the Internet service made booking customers and working with online tourism companies challenging.

Education: Officials from many tribes stated that high-speed Internet access at schools supports educational success. For example, access can allow students to conduct online testing or to watch online lectures, according to officials from two tribes we interviewed. In addition, officials from some tribes said that students who had access at school, but not at home, were disadvantaged compared to their peers who had access at home.

Health: About half of the tribes said that high-speed Internet access to support telemedicine was important to the tribe, particularly in rural or remote areas.

Officials from all of the tribes we interviewed also said that Internet service existed on at least some of their lands at varying connection speeds, ranging from less than 1 Mbps to over 25 Mbps. Some of the tribes we interviewed had at least some fiber-optic high-speed Internet connections, while others had slower copper lines, only mobile service, or only satellite service. Moreover, while many of the tribal lands where we held interviews had some level of mobile Internet service, only a few had 4G mobile high-speed Internet service, and a few others had no mobile service. Further, officials from about half of the tribes we interviewed described important limitations to their Internet services, including higher than usual costs, small data allocations, slow download speeds, and unreliable connections. For example, officials from the Quileute tribe said that connection problems caused by heavily congested networks forced them to upload required reports to federal grant websites after regular business hours.

The interrelated barriers of rugged terrain and rural location characteristic of many tribal lands, as well as tribal members' limited ability to pay for high-speed Internet service, were tribes' and private providers' most commonly cited impediments to improvements in high-speed Internet service. FCC's Office of Native Affairs and Policy reported in 2012 that rural, remote, and rugged terrain increase the cost of installing, maintaining, and upgrading Internet infrastructure. It also reported that affordability of these services among tribal members is affected by often endemic levels of poverty, as discussed later in this report. Internet providers said that these barriers can deter private investment in the infrastructure needed to connect remote towns and villages to a service provider's core network—known as the middle mile. Middle-mile infrastructure may include burying fiber-optic or copper cables, stringing cable on existing poles, or erecting towers for wireless microwave links, which relay wireless Internet connections from tower to tower through radio spectrum. Figure 2, below, illustrates some of the options for deploying middle-mile Internet infrastructure.
Many tribal officials and all six providers we interviewed listed rugged terrain and the rural location of many tribal lands as challenges to deploying this infrastructure. Tribal lands located far from urban areas may not have the middle-mile infrastructure necessary for high-speed Internet deployment. More specifically, interviewees discussed the remoteness or distance from existing high-speed Internet networks in urban and suburban centers; the vastness of reservation lands; low population density; rugged terrain characteristics such as hills, forests, mesas, and rocks; and, in some places, a lack of basic services such as roads, addresses, and commercial power. Figure 3, from the remote village of Beaver, Alaska, which is not connected to a road network and is only accessible by plane, illustrates some of these characteristics. The building shown is connected only via satellite, because there is no fixed or wireless Internet service in Beaver. Residents of Beaver told us that satellite Internet is a poor substitute for land-based middle-mile infrastructure because it is slower and less reliable, includes restrictive caps on data usage, and suffers from regular blackout periods.

The terrain and lack of basic services tend to increase the cost of building and maintaining middle-mile infrastructure, compared with costs in urban settings. For example, the Lac du Flambeau and Menominee tribes in Wisconsin live on reservations with dense, tall forests, and microwave towers must be tall enough—sometimes as high as 250 feet—to transmit the high-speed Internet signal above the tree canopy, according to tribal officials. Additionally, Alaska's permafrost and seasonal thaw make it difficult to lay fiber-optic cables, according to service provider officials. Finally, one provider in the Southwest United States said it has been able to deploy only limited service on Navajo Nation land because the reservation spans more than 24,000 square miles and many of the remote areas are not served by commercial power.

The limited financial resources available to tribal households were also cited by tribal officials and providers we interviewed as a barrier to high-speed Internet access. Of the 21 tribes we interviewed, many reported poverty and affordability as drivers of low subscribership to existing Internet services or as a barrier to broadening the availability of services. Poverty rates among the tribes we interviewed varied, but many were well above the 2014 national average of 15.5 percent, as is common for tribal lands. Figure 4, below, shows the poverty rates for the 21 tribes we interviewed. For example, the Menominee reservation and Pueblo of Laguna each have poverty rates of 35 to 36 percent, according to the Census' 2013 American Community Survey, which collects demographic, social, economic, and housing data. For the Rosebud Sioux, the poverty rate is 47 percent. Officials from the Menominee tribe said tribal households still cannot afford Internet service. For the Pueblo of Laguna, tribal officials reported that residents often choose mobile Internet options because they cannot afford separate phone and Internet service. Officials from the Confederated Tribes of Salish and Kootenai said that when tribal households can afford Internet service, they can afford only the slowest download speeds available.
Some tribes we interviewed said they are served by a single provider, and officials from five of those tribes reported that their provider charges what they described as high prices for limited service. In Bethel, Napaskiak, and Oscarville, Alaska, residents reported that while they had Internet access through a regional service provider, the provider's services had low data allocations that subscribers routinely exceeded, incurring penalties as a result. Moreover, officials from Bethel said that applicants for tribal housing assistance with outstanding debt from unpaid mobile Internet bills of more than 5 percent of their income were ineligible for this assistance. Also according to these officials, when an Internet customer had an outstanding bill, the local provider would shut off the customer's phone; the customer had to pay back the outstanding balance before getting the phone turned back on and qualifying for housing assistance. In the housing application round for Bethel that occurred just before our June 2015 visit, 13 of 38 applicants were rejected because of delinquent Internet bills, according to data provided by the tribe. Tribal officials said that this was typical and that it can take up to a year to pay off these bills because of the limited income opportunities in the region.

Two of the providers we interviewed discussed non-payment among tribal households as a disincentive to providing Internet service. One provider said that the customers it serves on tribal lands had non-payment rates double those of other customer groups and that these rates often follow seasonal employment patterns. Officials from another Internet provider said that high poverty had led tribal customers' accounts to fall into delinquency and be subsequently disconnected from service. According to some of the tribes we interviewed, limited finances led many tribal households to forgo purchasing service or left them unable to keep up payments for service they did purchase.

About half of the tribes we interviewed told us that a lack of tribal members with sufficient bureaucratic and technical expertise is a barrier to increasing high-speed Internet access on tribal lands. Tribal officials said that tribal members do not always have the bureaucratic expertise required to apply for federal funds, which can lead to mistakes or the need to hire consultants. Officials of the Ute tribe, for example, described submitting application paperwork for federal funding several times before it was accepted because multiple federal officials asked for different edits. Some tribes reported spending resources on outside consultants to handle the application process. For example, the Mississippi Choctaw told us they hired a full-time grant writer to manage their E-rate application when they had difficulty applying for E-rate on their own. The consultant confirmed that the process involves a steep learning curve and that not all tribes would have the money or time for a member to overcome that curve while fulfilling other tribal responsibilities. Further, according to officials, Unalakleet's regional school district contracts out the E-rate application process to a consultant for $22,000 annually; the district receives about $5 million in E-rate funding annually to subsidize its schools' high-speed Internet connection. Additionally, Lac du Flambeau officials said they spent funds on the lawyers, consultants, and engineers they hired to assist them in applying for federal funding.
Lack of technical expertise also affects tribes' ability to interact with private-sector Internet providers. Of the seven tribes we interviewed that either had a tribally owned provider or were in the process of establishing one, three said that the lack of expertise in the tribe was a challenge to establishing a tribally owned telecommunications provider for high-speed Internet deployment. In addition, Salish and Kootenai officials recounted a meeting with several providers as part of a federal assistance application requirement; they said that none of the tribal officials understood the providers' plans and, as a result, they were not able to represent the tribe's best interests. Further, officials from the Pueblo of Laguna highlighted that they will need ongoing investment in employee training to ensure that their knowledge keeps pace with technological developments and infrastructure upgrades.

The National Broadband Plan recognized the challenges of administrative and technical capacity and recommended that FCC and Congress support technical training and capacity development on tribal lands, such as by considering additional funding for tribal leaders to participate in FCC training at no cost. In the early 2000s, FCC held a number of Indian Telecommunications Initiatives Regional Workshops and Roundtables. In fiscal year 2012, the Office of Native Affairs and Policy consulted with about 200 tribal nations, many during six separate one- to three-day telecommunications training and consultation sessions on tribal lands. These included the Native Learning Labs, where attendees could learn about data FCC has available on spectrum licensing and USF programs, among other things. Recently, the Office held seven training workshops in fiscal years 2014 and 2015 and plans to offer more in fiscal year 2016. The goal of this new series of sessions is to provide tribal officials with information about funding opportunities and policy changes with respect to high-speed Internet, USF programs, and spectrum issues.

FCC and USDA implement mutually supportive, interrelated high-speed Internet access programs that offer assistance to tribes and the providers that serve tribal lands. FCC's and USDA's programs have similar goals to increase access to Internet service on tribal lands, and both offer funding to either tribal entities or service providers to achieve this goal of increased access. Further, both FCC and USDA programs have eligibility requirements based on the need of an area as well as deployment requirements. Tribes sometimes qualify for benefits from more than one of these programs, either directly or through private-sector Internet providers. Tribal officials we interviewed said that both FCC's and USDA's programs were important for the expansion of high-speed Internet service on their lands.

FCC has programs that provide subsidies or discounts to improve telecommunications services, including services on tribal lands. These programs have a longstanding goal of making communications services available "so far as possible to all the people of the United States." The Telecommunications Act of 1996 extended the scope of federal universal service to support and make advanced telecommunications services available to eligible public and nonprofit elementary and secondary schools, libraries, and nonprofit rural health care providers at discounted rates.
Today, the goals of these programs include increasing access to Internet service for all consumers at reasonable and affordable rates. Three universal service programs subsidize telecommunications carriers that provide high-speed Internet and other telecommunications services to areas that include tribal lands:

The Connect America Fund (CAF)—formerly the High Cost Program—was established to extend high-speed Internet service to areas that lack it, while preserving voice service. CAF provides subsidies to Internet providers to supplement their operating costs for providing high-speed Internet in unserved or high-cost areas. In total, the High Cost Program and Connect America Fund distributed about $20 billion in subsidies to providers between 2010 and 2014, portions of which went to providers that serve tribal lands.

The USF Schools and Libraries Support Program, also known as E-rate, provides discounts to eligible schools and libraries on telecommunications services, Internet access, and internal connections. In total, the E-rate program provided about $13 billion in discounts to schools and libraries between 2010 and 2014, portions of which went to schools and libraries on tribal lands.

The Healthcare Connect Fund provides assistance to ensure that eligible rural health care providers have access to high-speed Internet services and supports the formation of regional health care provider networks. Although the Healthcare Connect Fund does not specifically target tribal institutions, assistance may be provided to a service provider (or group of providers) that serves tribal lands. The Healthcare Connect Fund started in 2014 and provided about $52 million to health care facilities in fiscal year 2014, a portion of which went to tribal lands. For example, tribal officials said that the Healthcare Connect Fund helped fund telemedicine carts that use high-speed Internet connections to send patient data, including pictures and X-rays, to regional hospitals to reduce costs (see figure 5).

In addition to general programs that include tribal beneficiaries, FCC has also implemented efforts designed specifically to address the concerns of tribal and native communities. For example, in 2000 FCC began its Tribal Lands Bidding Credit Program to provide incentives to wireless providers to deploy wireless services on tribal lands. FCC is authorized to auction radiofrequency spectrum to be used for wireless services in the United States. Under the Tribal Lands Bidding Credit Program, FCC grants bidding credits to a winning bidder in a spectrum auction if the bidder deploys facilities and provides telecommunications services to qualifying tribal lands. In total, the program has awarded credits to 53 licensees that have pledged to deploy facilities and provide telecommunications services on 13 tribal lands. More recently, in 2012, when FCC made reforms to universal service, it created the Mobility Fund under the Connect America Fund. Phase I of the Mobility Fund, which began in fiscal year 2012, provided $300 million of one-time support to extend the availability of wireless voice and high-speed Internet networks in areas where they were not available, including tribal lands. It also established a separate, one-time Tribal Mobility Fund, which awarded $50 million in fiscal year 2014. Phase II of the Mobility Fund will have a budget of $500 million, of which $100 million is designated as support for tribal lands. FCC has not set a date for awarding these funds.
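The relative scale of the USF amounts just cited can be tabulated with a short, illustrative calculation; the figures below are the rounded amounts from the text, and because the programs cover different periods, the tabulation is descriptive only, not a like-for-like comparison.

```python
# Tabulating the USF program amounts cited above (dollars in millions,
# rounded as in the text; periods differ by program).
usf_programs = [
    ("High Cost / Connect America Fund, 2010-2014", 20_000),
    ("E-rate (Schools and Libraries), 2010-2014", 13_000),
    ("Healthcare Connect Fund, FY2014", 52),
    ("Mobility Fund Phase I, FY2012", 300),
    ("Tribal Mobility Fund, FY2014", 50),
]
for name, millions in usf_programs:
    print(f"{name}: ${millions:,}M")

# Phase II of the Mobility Fund designates $100M of its $500M budget
# as support for tribal lands -- a 20 percent set-aside.
print(f"Mobility Fund Phase II tribal set-aside: {100 / 500:.0%}")
```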
According to some tribes and five of the six service providers we interviewed, FCC's USF subsidies have helped expand high-speed Internet throughout tribal lands for tribal institutions such as schools, libraries, and clinics. Further, building out the Internet service delivery infrastructure for schools and clinics with USF support allows service providers to begin offering household access in remote areas as well, according to two providers we interviewed. For example, one service provider said that Internet service would not exist in the majority of Alaska without USF's E-rate and Healthcare Connect Fund programs. FCC's programs made Internet service possible in the remote villages of Napaskiak and Oscarville, Alaska. These villages were accessible only by boat or plane and did not have roads or running water, but they did have Internet. According to officials, the best connections in both villages were in the USF-supported schools and clinics, and officials from the regional school district serving the two villages said students rely on the high-speed Internet networks; the schools hope to use e-books, since flying textbooks to rural Alaska is expensive. Figure 6 depicts the microwave tower in Oscarville, Alaska, which completes the middle-mile wireless link and conveys the signal to the school, clinic, and households in the village.

RUS programs also provide support to improve rural telecommunications infrastructure—including high-speed Internet—through grants, loans, and loan guarantees. RUS programs seek to extend high-speed Internet access in rural communities, where it is least likely to be commercially available but where it can improve the quality of life, education, health care, and community development. Eligible participants in RUS programs can include federally recognized tribes. Assistance from RUS can be used to build new or improve existing telecommunications infrastructure in rural areas, which include many tribal lands, through two programs:

The Distance Learning and Telemedicine program provides grants to rural communities to acquire technologies that use the Internet to link educational and medical professionals with people living in rural areas. In total, the Distance Learning and Telemedicine program provided about $128 million in grants and loans between 2010 and 2014, almost $3 million of which went to tribal lands.

The Community Connect Program provides grants to rural communities to provide high-speed Internet service to unserved areas. In total, the Community Connect Program provided about $53 million in grants between 2010 and 2014, almost $3 million of which went to tribal lands.

Officials from some tribes, three of which operate tribally owned service providers, said that USDA RUS grant and loan programs or RUS stimulus funding efforts through the Recovery Act were important in the expansion of Internet service throughout tribal lands for tribal institutions. In addition, officials from one Internet provider in Alaska said that RUS funding was important in allowing them to build high-speed Internet infrastructure in rural areas, including Native Villages.

FCC's and USDA's programs that promote high-speed Internet access on tribal lands are interrelated in that they all seek to increase this access in areas that include tribal lands. For example, FCC's Healthcare Connect and USDA's Distance Learning and Telemedicine programs both seek to help clinics connect to the Internet, including clinics on tribal lands. These programs, however, are not always well coordinated.
Our body of work has shown that interagency coordination can help agencies with interrelated programs ensure efficient use of resources and effective programs. Agencies can enhance and sustain their coordinated efforts by engaging in key practices, such as establishing compatible policies and procedures through official agreements. Agencies can also develop means to operate across agency boundaries, including leveraging resources for joint activities such as training and outreach.

One area in which FCC and USDA lack coordination is outreach and technical assistance, such as planning visits to tribes or conference attendance. Synchronizing these activities could save resources. However, both FCC and USDA independently conduct outreach and training efforts for related programs promoting Internet access. For example, FCC was authorized to spend up to $300,000 on tribal consultation and training in fiscal year 2015. While FCC officials said they invite USDA officials to FCC training workshops and are sometimes invited to USDA training workshops, they said that they do not coordinate to develop joint outreach or training events. This could result in an inefficient use of limited federal resources and missed opportunities for resource leveraging between the two agencies and for cost savings to the tribes attending training events. For example, while USDA held a training event in Washington State in fiscal year 2015, FCC hosted a training event in Oregon the same year. The two agencies could have planned a joint training event in the Pacific Northwest region, with each contributing toward the costs of the event, while reducing the cost burden for tribes, which would not have had to travel twice or choose between the two events given limited budgets. Officials from one tribe said that multiple federal programs offering similar grants were confusing and that a federal one-stop shop for outreach and training would help them better target the right programs for their situation. Officials from a different tribe said that the tribe benefits from FCC programs but not USDA programs, in part because tribal officials did not have a strong understanding of the USDA programs that might benefit their community's Internet access. Better coordination on conferences, as feasible, could help FCC and USDA reach a broader audience and increase the value of their outreach to tribes.

In 2006, we found that the rate of Internet access on tribal lands was unknown because no federal survey had been designed to capture this information. We recommended that additional data be identified to help assess progress toward providing access to telecommunications, including high-speed Internet, for Native Americans living on tribal lands. Since then, the federal government has started collecting data on Internet availability and access on tribal lands. FCC has made an important distinction between Internet availability and Internet adoption: availability relates to the presence of Internet service in an area, and adoption relates to people in the area subscribing to that service.
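The availability/adoption distinction lends itself to a simple illustration. The sketch below uses invented household records, together with FCC's January 2015 benchmark of 25 Mbps download and 3 Mbps upload discussed earlier, to show how the two measures can diverge; none of the records reflect actual map or survey data.

```python
# Illustrative sketch of FCC's availability/adoption distinction using
# invented household records. A household counts toward availability if
# benchmark-level service is offered where it lives, and toward adoption
# only if it also subscribes to that service.

BENCHMARK_DOWN, BENCHMARK_UP = 25, 3  # Mbps, FCC's January 2015 benchmark

households = [
    # (max offered download Mbps, max offered upload Mbps, subscribes?)
    (50, 5, True),    # high-speed service available and adopted
    (50, 5, False),   # available but not adopted
    (10, 1, True),    # subscribes, but service is below the benchmark
    (0, 0, False),    # no service offered at all
]

available = sum(1 for d, u, _ in households
                if d >= BENCHMARK_DOWN and u >= BENCHMARK_UP)
adopted = sum(1 for d, u, s in households
              if s and d >= BENCHMARK_DOWN and u >= BENCHMARK_UP)

print(f"Availability rate: {available / len(households):.0%}")  # 50%
print(f"Adoption rate:     {adopted / len(households):.0%}")    # 25%
```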
FCC's strategic plan for fiscal years 2015-2018 includes a strategic goal related to Internet availability and ensuring that "all Americans can take advantage of the service…without artificial impediments." This goal has a strategic objective to "maximize the availability of broadband Internet to all—including low income Americans, those in rural areas and tribal lands, and individuals with disabilities." As we reported in June 2015, this represented a change from the previous strategic plan, which included a strategic objective to "maximize" broadband adoption with a related performance goal to "support and facilitate" broadband adoption. Noting that the change in the strategic plan from adoption to availability made it unclear which was the priority, we recommended that FCC revise its strategic plan to state more clearly whether addressing adoption is a major function of the Commission and, if so, specify what outcomes it intends to achieve. In response, FCC commented that broadband adoption remains a significant focus. However, as of December 2015, FCC had not identified the performance goals and measures it intends to achieve for broadband availability or adoption.

Agency performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly progress toward preestablished goals. Performance measurement allows organizations to track progress in achieving their goals and provides information to identify gaps in program performance and plan any needed improvements. The GPRA Modernization Act of 2010 requires annual performance plans to include performance measures to show the progress the agency is making in achieving its goals. Further, we have identified best practices in articulating goals that include, among others, showing baseline and trend data for past performance and identifying projected target levels of performance for multiyear goals.

The National Broadband Map is the most detailed source of Internet availability on tribal lands, and the reliability of the data is improving. Providers are updating information and incorporating GPS information to correct inaccuracies, and FCC has a formal process for the public to report complaints. FCC currently makes wide use of the map's data to describe the availability of broadband nationwide. For example, FCC uses data gathered for the National Broadband Map in its annual Broadband Progress Report provided to Congress as required by the Telecommunications Act of 1996. Data supporting the National Broadband Map could be used, for example, to establish a baseline of high-speed Internet availability nationwide and on tribal lands. Making high-speed Internet, including broadband Internet, available to all Americans is FCC's stated goal, but FCC has not set goals to demonstrate or measure progress toward achieving it. While the National Broadband Map does have some weaknesses, it provides the best current tool for setting goals and measuring progress toward increasing the availability of high-speed Internet on tribal lands.

Although Census is gathering baseline information on household Internet adoption, and the National Broadband Map provides data on high-speed Internet availability across the country, FCC lacks information to measure the outcomes of its E-rate program at tribal schools and libraries. FCC's E-rate program provides assistance to schools, school districts, and libraries to obtain telecommunications technology, including high-speed Internet.
E-rate does not specifically target tribal schools and libraries, although some are eligible and receive benefits. Since 2010, E-rate has committed more than $13 billion in service provider customer fees to schools and libraries, and according to data provided by FCC, at least $1 billion of that amount supports tribal institutions. FCC's strategic plan sets forth an objective for the E-rate program to ensure that all schools and libraries have affordable access to modern broadband technologies.

Communicating what an agency intends to achieve and its programs for doing so are fundamental aims of performance management. Under the GPRA Modernization Act of 2010, an agency is expected to communicate the outcomes of its efforts. Specifically, the act requires the agency to have outcome-oriented goals for major functions and operations and an annual performance plan, consistent with its strategic plan, with measurable, quantifiable performance goals. Similarly, federal internal control standards state that operational and financial data are needed to determine whether an agency is meeting its strategic and annual performance goals. However, FCC has not set any quantifiable goals and performance measures for its E-rate efforts to extend high-speed Internet in schools and libraries nationwide, or more specific performance measures for the same institutions on tribal lands. FCC has noted the additional difficulties that tribal entities have in securing high-speed Internet on their lands and directed efforts to address these difficulties in the E-rate Modernization Orders in 2014.

According to federal internal control standards, management should ensure there are adequate means of obtaining information from external stakeholders that may have a significant impact on the agency meeting its goals. FCC collects information on E-rate recipients nationwide through questions on its application for E-rate assistance, including the type of organization requesting funding and the types of institutions served, such as public, private, tribal, or Head Start, among others. Several different types of institutions on tribal lands can qualify for E-rate funding, including schools operated by the tribe or the Bureau of Indian Education, private schools operating on a reservation, and public school districts that serve the reservation. FCC's E-rate application provides for applicants to self-identify whether recipients of service on the application are tribal but provides no definition of "tribal." We found that not all schools and libraries on tribal lands identify themselves as such during the application process. FCC provided us with information on E-rate recipients between 2010 and 2014 that self-identified as tribal, and the amounts committed to those recipients. These data may understate the amount of funds supporting schools on tribal lands. Specifically, we identified more than 60 additional school districts, private schools, and public libraries on the lands of the 21 tribes we studied that received E-rate assistance but were not included in FCC's information on tribal recipients. FCC officials stated that they do not provide a definition because the increased formality might give applicants the incorrect impression that being a "tribal" institution has an effect on funding decisions.
However, because FCC does not provide a definition for "tribal" in its E-rate application, it is unclear what level of tribal involvement or participation in an institution would cause it to be considered "tribal" on an application. For example, applicants may be unsure whether a public school district, a private school, or a public library that serves the general public on a reservation should indicate it is a tribal recipient on an application, even if most students or patrons are tribal members. According to FCC officials, it would be appropriate for such institutions to identify as tribal. Consequently, FCC does not have accurate information on the number of federally recognized tribes or Alaska Native Villages receiving E-rate support, or the amount being provided to them. Without more precise information and direction from FCC, the extent to which E-rate assistance is provided to tribal institutions cannot be reliably determined, nor can FCC rely on the information to develop quantifiable goals and performance measures for improving high-speed Internet access in tribal schools or libraries. It is important to understand how these programs affect tribal institutions because FCC has made improving high-speed Internet access in tribal institutions a priority, as reflected in the National Broadband Plan, the establishment of the Office of Native Affairs and Policy in 2010, and FCC's current strategic plan.

Access to the Internet on tribal lands varies, but challenges to access and adoption remain. The high costs of infrastructure buildout on tribal lands, which tend to be remote and have rugged terrain, work in tandem with tribal member poverty to create a barrier to high-speed Internet expansion on tribal lands. In addition, about half of the tribes we interviewed told us that the lack of tribal members with sufficient administrative and technical expertise is a barrier to increasing high-speed access on tribal lands. FCC's USF subsidy program and USDA's RUS grant and loan programs seek to increase high-speed Internet access in underserved areas, including tribal lands, by assisting in building infrastructure and purchasing equipment as well as by paying for the ongoing operation of this infrastructure and equipment. While these programs have been important to improving high-speed Internet access on tribal lands, their efforts to further increase high-speed Internet on tribal lands could be limited by a lack of interagency coordination on training and outreach. Officials from one tribe said that multiple federal programs offering similar grants were confusing, and officials from another tribe said that they accessed FCC programs but lacked a strong understanding of the USDA programs designed to increase Internet access. Through better coordination, where feasible, on joint training efforts to build tribal administrative and technical capacity, FCC and USDA could better ensure that their programs are efficient and remain mutually supportive and accessible to tribal governments.

Despite the importance of FCC and USDA programs for expanding high-speed Internet on tribal lands, FCC has not established performance goals and measures related to improving Internet availability. However, data on broadband availability are readily available through the National Broadband Map to measure progress on efforts to improve broadband availability.
Further, FCC's subsidy programs also seek to increase high-speed Internet access on tribal lands, but the E-rate program lacks reliable data specific to institutions on tribal lands as well as goals and performance measures to track the outcomes of efforts on tribal lands. Not defining "tribal" in the E-rate application makes it difficult to measure the program's impact on tribal lands, as not all E-rate recipients serving these areas self-identify as tribal. Gathering such data is important for FCC because the National Broadband Plan has placed a special emphasis on improving access on tribal lands, and internal control standards call for management to be provided with data to determine whether or not it is meeting goals. Without such information, it will be difficult for FCC to determine the extent to which it is achieving its goals.

To help improve and measure the availability and adoption of high-speed Internet on tribal lands, we recommend that the Chairman of the Federal Communications Commission take the following four actions:

Develop joint outreach and training efforts with USDA whenever feasible to help improve Internet availability and adoption on tribal lands;

Develop performance goals and measures using, for example, data supporting the National Broadband Map, to track progress on achieving its strategic objective of making broadband Internet available to households on tribal lands;

Improve the reliability of FCC data related to institutions that receive E-rate funding by defining "tribal" on the program application; and

Develop performance goals and measures to track progress on achieving its strategic objective of ensuring that all tribal schools and libraries have affordable access to modern broadband technologies.

We provided copies of the draft report to the Federal Communications Commission, the U.S. Department of Agriculture, the U.S. Department of the Interior, and the U.S. Department of Commerce for comment prior to finalizing the report. We received technical comments that we incorporated as appropriate. We received written comments from FCC, which are reproduced in appendix III. FCC concurred with our recommendations and noted that it has efforts under way to address them. Regarding our recommendation for greater coordination on training and outreach, FCC summarized the areas in which it coordinates with USDA and said that it will continue to work with USDA to ensure more strategic and routine coordination. Regarding our recommendation to develop performance goals and measures for making broadband Internet available to households on tribal lands, FCC summarized its efforts to track broadband deployment on tribal lands. Regarding our recommendation to improve data reliability by defining "tribal" on the E-rate funding application, FCC said that it plans to include guidance for E-rate applicants to self-report as tribal if they serve tribal populations beginning in fiscal year 2017. Regarding our recommendation to develop performance goals and measures to track tribal schools' and libraries' access to broadband, FCC said that its goal is to provide all schools and libraries with broadband Internet, including tribal schools and libraries, and that its efforts will substantially improve the accessibility of modern broadband technologies for tribal schools and libraries.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date.
At that time, we will send copies to FCC, USDA, and other interested parties. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6670 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

You asked us to review the availability of broadband access on tribal lands. This report examines (1) perspectives of selected tribes and providers on the importance of Internet access for tribes and any barriers to increasing access to Internet on tribal lands; (2) the level of interrelation and coordination between federal programs at FCC and USDA that promote high-speed Internet access on tribal lands; and (3) existing data and FCC performance goals and measures related to access to Internet service on tribal lands and for tribal institutions.

To determine perspectives of selected tribes and providers on the importance of high-speed Internet and any barriers to increasing access to high-speed Internet on tribal lands, we reviewed relevant literature and interviewed officials from 18 tribal governments in the continental United States and 3 Alaska Native regions. For the three Alaska Native regions, we visited villages within each region and spoke with officials from the Regional Corporation, regional nonprofit, Village Corporation, tribal government, and city government. Five of the 21 total tribes we interviewed operate their own Internet providers, and two were considering forming a tribally owned provider. We selected tribes to interview using FCC and USDA data from fiscal years 2010 through 2014 and Bureau of the Census (Census) 2013 demographic data, such as population and poverty rates. We selected tribes to include a range of populations, poverty rates, and locations. We used the same semi-structured interview questions for all tribes. While we used the same questions, tribal officials may not have answered them in the same way. Additionally, we interviewed officials from six service providers operating on tribal lands. We selected service providers to interview using FCC High Cost Support data for fiscal years 2010 through 2014 and initial tribal interviews to identify providers that serve tribal lands and receive federal subsidies to do so. Furthermore, we identified and interviewed industry stakeholders, such as research groups and telecommunications associations, on their views regarding the barriers to increasing access to broadband on tribal lands. These stakeholders were selected based on their exposure to issues on tribal lands, such as representing tribally owned service providers. These interviews are not generalizable to all tribes, all service providers, or all industry stakeholders.

To analyze the information we collected on barriers and potential solutions in our interviews, we identified themes and trends based on a literature review of recent FCC and research organization publications and preliminary interviews and developed a set of codes. After agreeing on the coding strategy and rules for the appropriate use of each code, one reviewer coded each carrier, tribal, and stakeholder interview using the agreed codes. Another team member then reviewed the coding for reasonable adherence to the strategy and rules.
We then tallied coded responses and analyzed the themes identified through our interviews to determine the most prevalent challenges and solutions identified by our interviewees. For reporting purposes, we developed a series of indefinite quantifiers to describe the tribal responses from the 21 total tribal entities we interviewed that agreed with statements made in the report: fewer than 5 of the 21 is "a few," 5 to 9 is "some," 10 to 12 is "about half," 13 to 16 is "many," and 17 or more is "most" (see the illustrative sketch below).

To determine the level of interrelation and coordination between federal programs at FCC and USDA that promote high-speed Internet access on tribal lands, we reviewed FCC and USDA program guidance materials and program funding for fiscal years 2010 through 2014, interviewed FCC and USDA officials, and interviewed tribal officials from the selected 21 tribal governments or Alaska Native regions and 6 service providers about the federal government programs in which they participated. We evaluated USF and RUS program coordination based on criteria developed in previous GAO work. First, we identified programs to examine: we selected FCC's Universal Service Fund (USF) and USDA's Rural Utilities Service (RUS) because of the number of programs and the substantial appropriations involved. Second, we gathered background information on these programs and identified relationships among the programs. Third, we identified areas of coordination and possible gaps in coordination. Finally, we communicated these options to FCC and USDA officials to determine the feasibility of our proposed recommendations.

To determine what data and FCC performance goals and measures exist related to access to high-speed Internet service on tribal lands and for tribal institutions, we analyzed fiscal year 2010 through fiscal year 2014 USF data from FCC for tribal grantees or use on tribal lands; reviewed USF program applications and guidance materials; reviewed Census 5-year data on telecommunications access from the American Community Survey; and interviewed FCC and Census officials. We determined that FCC and Census data were sufficiently reliable for our purposes by interviewing FCC and Census officials about their data collection and validation efforts. Finally, we reviewed performance goals and measures for USF programs against criteria established in the Government Performance and Results Act of 1993, as amended, and in federal standards for internal control.

We conducted this performance audit from February 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Keith Cunningham, Assistant Director; Christopher Jones; Sarah Jones; Jeffery Malcolm; Josh Ormond; Cheryl Peterson; Carl Ramirez; Cynthia Saunders; Michelle Weathers; and Sarah Williamson made key contributions to this report.
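The following is a minimal sketch, in Python, of the quantifier mapping described in the methodology above. The thresholds come from the report; the tallies shown are hypothetical and included only to illustrate how coded interview counts translate into the report's terms.

```python
def quantifier(count: int) -> str:
    """Map a tally of agreeing tribal entities (out of 21) to the
    report's indefinite quantifiers."""
    if count >= 17:
        return "most"
    if count >= 13:
        return "many"
    if count >= 10:
        return "about half"
    if count >= 5:
        return "some"
    return "a few"  # fewer than 5

# Hypothetical tallies of coded responses by theme (not actual study data).
tallies = {
    "middle-mile costs cited as a barrier": 11,
    "expertise gaps cited as a barrier": 10,
    "federal programs described as confusing": 6,
}

for theme, n in tallies.items():
    print(f"{theme}: {n} of 21 -> {quantifier(n)!r}")
```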
High-speed Internet service is viewed as a critical component of the nation's infrastructure and an economic driver, particularly for remote tribal communities. However, in 2015, FCC reported that the lack of service in tribal areas presents impediments for these communities. GAO was asked to review the status of high-speed Internet on tribal lands. The report examines (1) perspectives of tribes and providers on high-speed Internet access and barriers to increasing this access; (2) the level of interrelation and coordination between federal programs that promote high-speed Internet access on tribal lands; and (3) existing data and performance measures related to high-speed Internet on tribal lands. GAO visited or interviewed officials from a non-generalizable sample of 21 tribal entities and 6 service providers selected to provide diversity in size, location, and poverty levels. GAO also reviewed FCC and USDA fiscal year 2010 through 2014 program data, funding, and materials and interviewed federal officials.

Although all 21 tribes GAO interviewed have some access to high-speed Internet, tribes and providers GAO interviewed cited barriers to increasing access. For example, high poverty rates and the high costs of connecting remote tribal villages to core Internet networks—called middle-mile infrastructure—limit high-speed Internet availability and adoption on tribal lands. About half of the tribes GAO interviewed also said that the lack of sufficient administrative and technical expertise among tribal members limits their efforts to increase high-speed Internet access.

The Federal Communications Commission's (FCC) Universal Service Fund subsidy programs and the U.S. Department of Agriculture's (USDA) Rural Utilities Service grant programs are interrelated in that they seek to increase high-speed Internet access in underserved areas, including tribal lands. GAO's previous work on overlap, duplication, and fragmentation has shown that interagency coordination on interrelated programs can help ensure efficient use of resources and effective programs. However, FCC and USDA do not coordinate to develop joint outreach and training. This could result in an inefficient use of federal resources and missed opportunities for resource leveraging between FCC and USDA.

FCC has placed special emphasis on improving Internet access on tribal lands following the issuance of the National Broadband Plan, which called for greater efforts to make broadband available on tribal lands. However, FCC has not developed performance goals and measures for improving high-speed Internet availability to households on tribal lands. Without these goals and measures, FCC cannot assess the impact of its efforts. The National Broadband Map includes data on Internet availability on tribal lands that could allow FCC to establish baseline measures for Internet availability on tribal lands. Further, FCC also lacks performance goals and measures for tribal institutions, such as schools and libraries. Specifically, FCC's E-rate program provides funds to ensure that schools and libraries have affordable access to modern broadband technologies, but FCC has not set any performance goals for the program's impact on tribal institutions. Nor has FCC defined "tribal" on the E-rate application. Without such information, it will be difficult to accurately track progress in making broadband available in tribal institutions.
GAO recommends that FCC (1) develop joint training and outreach with USDA; (2) develop performance goals and measures for improving broadband availability to households on tribal lands; (3) develop performance goals and measures for improving broadband availability to tribal schools and libraries; and (4) improve the reliability of FCC data related to institutions that receive E-rate funding by defining "tribal" on the program application. FCC agreed with the recommendations.
Among other things, FSA is responsible for implementing USDA's direct and guaranteed loan programs. FSA's county office staff administers the direct loan program and has primary decision-making authority for approving loans. As of September 30, 2001, there were about 95,000 borrowers with direct loans outstanding, with an unpaid principal balance of about $8.5 billion. FSA farm loan managers are responsible for approving and servicing these loans. The factors FSA staff consider in approving or denying a loan include the applicant's eligibility (i.e., whether the applicant operates a family-size farm in the area), credit rating, cash flow, collateral, and farming experience. Once a farm loan application is complete, FSA officials have 60 days to approve or deny the application and notify the applicant in writing of the decision.

Once FSA approves a direct loan, it helps borrowers develop financial plans; collects loan payments; and, when necessary, restructures delinquent debt. Direct loans are considered delinquent when a payment is 30 days past due. When a borrower's account is 90 days past due, FSA county staff formally notify him or her of the delinquency and provide an application for restructuring the loan. To be considered for loan restructuring, borrowers must complete and return an application within 60 days. FSA staff process the completed application and notify the borrowers as to whether they are eligible for loan restructuring. If a borrower does not apply or is not eligible for loan restructuring, and the loan continues to be delinquent, FSA notifies the borrower that it will take legal action to collect all the money owed on the loan (called loan acceleration). If the borrower does not take action to settle the account within a certain period of time, FSA can start foreclosure proceedings.

When farmers believe that FSA has discriminated against them, they may file a discrimination complaint with USDA's OCR. For the complaint to be accepted, it must be filed in writing and signed by the complainant; be filed within 180 days of the discriminatory event; and describe the discriminatory conduct of an employee of a USDA agency or the discriminatory effect of a policy, procedure, or regulation. Farmers may also seek compensation for violations of their civil rights by filing individual or class action lawsuits. In 1997, African-American farmers filed a class action against USDA (Pigford v. Glickman). In 1999, this suit resulted in a multimillion-dollar settlement agreement for the farmers. Since then, women and other minority farmers have also filed class actions against USDA. To elevate attention to civil rights matters at USDA, the Congress created the position of Assistant Secretary of Agriculture for Civil Rights in the 2002 Farm Bill.

Although the average direct loan application processing time was longer for Hispanic farmers than for non-Hispanic farmers during fiscal years 2000 and 2001, over 90 percent of loan applications from Hispanic farmers (and 94 percent from non-Hispanic farmers) were processed within the agency's 60-day requirement. We also found that the direct loan approval rate for Hispanic farmers was slightly lower than for non-Hispanic farmers, 83 and 90 percent, respectively. FSA officials maintain that the approval rate differences were not significant and attribute them to differences in the applicants' ability to repay the loans they requested.
During fiscal years 2000 and 2001, the national average processing time for direct loan applications from Hispanic farmers was 20 days—4 days longer than for non-Hispanic farmers—but well within FSA's 60-day requirement. At the state level, loan processing time differences were more distinct. For example, in the four states that account for over half of all Hispanic applications, processing times for Hispanic farmers were faster than for non-Hispanic farmers in three states and slower in the fourth state. However, all times fell well within FSA's 60-day requirement. Table 1 shows the average processing times for non-Hispanic and Hispanic applications nationwide and for the four states, both fiscal years combined. The vast majority—91 percent—of all direct loan applications from Hispanic farmers were processed within FSA's 60-day requirement. However, the loan approval rate for Hispanic farmers was lower than for non-Hispanic farmers during this 2-year period—83 and 90 percent, respectively. Nonetheless, as shown in table 2, in three of the four states that received the largest number of Hispanic applications in fiscal year 2001, direct loan approval rates were similar.

As part of FSA's assessment of its civil rights performance, the agency monitors differences between minority and non-minority loan processing times and approval rates at both the national and state levels. In addition, FSA sends teams to state offices to conduct civil rights reviews. The teams review loan files to verify compliance with FSA policies and procedures and, if warranted, provide written recommendations to remedy problems identified. Through fiscal year 2001, each state was reviewed once every 3 years; beginning in fiscal year 2002, state offices will be reviewed every other year. As shown in tables 1 and 2, Washington was the only state in our review that had both slower processing times and lower approval rates for Hispanic farmers. This disparity also surfaced during a 2001 FSA field review. Specifically, the final report noted that the time period from the completion of loan applications to the applications' approval was significantly longer for minorities in three of the four FSA service centers reviewed. Although the review found that the state properly documented its reasons for rejecting loan applications from minority farmers, FSA recommended that the office director emphasize to staff the importance of treating prospective borrowers equally and of properly documenting reasons for denying loan requests when there may be the appearance of disparate treatment.

While FSA monitors variations in loan processing times and approval rates between minorities and non-minorities, it does not have established criteria for determining when observed variations are significant enough to warrant further inquiry. In addition, while FSA conducts periodic field reviews of state offices' performance in civil rights matters and suggests improvements, it does not require the offices to implement the recommendations and does not monitor state follow-up efforts. FSA is currently considering requiring state offices to provide information on how they addressed weaknesses noted during reviews.

USDA has a policy for issuing stays of foreclosure in cases where discrimination has been alleged in individual complaints filed with OCR, but not in response to individual or class action lawsuits with similar allegations.
In cases where individuals file an administrative discrimination complaint with USDA's OCR, agency policy is to automatically issue a stay of adverse action—including foreclosures—until the complaint has been resolved. During fiscal years 2000 and 2001, this policy was followed in 24 of the 26 applicable cases involving Hispanic borrowers. The policy was not followed in the remaining two cases because of miscommunication between OCR and FSA in reconciling their respective lists of complainants. When FSA learned that complaints had been filed with OCR, it stayed its foreclosure actions, and, as of August 2002, no further collection actions had been taken against the two farmers. Although future data system improvements should alleviate this problem, OCR and FSA officials acknowledge that improvements could be made in the interim.

USDA does not have a similar policy for issuing stays related to discrimination claims raised in an individual or class action lawsuit. Instead, FSA makes decisions on whether to issue stays on a case-by-case basis based on the advice of USDA's General Counsel and the Department of Justice. Since 1997, USDA has issued stays of foreclosure related to African-American and Native American farmers' class action discrimination lawsuits involving FSA loan programs. In contrast, USDA did not issue stays of foreclosure for other class action discrimination lawsuits involving FSA loan programs because the agency believes that the circumstances did not warrant a stay. These class action lawsuits and how USDA handled stays of foreclosure are discussed in greater detail below.

In October 1997, African-American farmers filed a class action lawsuit against the Secretary of Agriculture (Pigford v. Glickman) alleging racial discrimination by USDA in its administration of federal farm programs. On October 9, 1998, the court certified the class, establishing the criteria for class eligibility. On January 5, 1999, USDA entered into a 5-year consent decree with the claimants of the suit to settle it. The federal district court approved the consent decree and a framework for the settlement of individual claims in April of the same year. As of July 31, 2002, almost 23,000 claims had been filed under the consent decree. Of those, 21,539 were accepted for processing, and 1,146 claims were rejected based on a determination that the claimant was not a member of the class. As part of the consent decree, USDA agreed to refrain from foreclosing on real property owned by claimants or accelerating their loan accounts.

In November 1999, Native American farmers filed a class action lawsuit against the Secretary of Agriculture (Keepseagle v. Glickman) alleging that USDA willfully discriminated against Native American farmers and ranchers when processing applications for farm credit and farm programs. Further, the claimants alleged that class members had previously filed discrimination complaints with USDA and that the department failed to thoroughly investigate the complaints. In December 1999, USDA issued a notice to FSA offices informing them that they were not to accelerate or foreclose on any direct loans held by Native American borrowers before the end of 2000, unless the national office, with the concurrence of the Office of General Counsel, specifically authorized such action against an individual. As scheduled, this directive expired at the end of 2000.

In October 2000, Hispanic farmers (Garcia v. Glickman) and women farmers (Love v.
Glickman) each filed class action lawsuits against USDA alleging similar claims that USDA willfully discriminated against them in processing applications for farm credit and farm programs. Specifically, they alleged that loans were denied, provided late, or provided with less money than needed to adequately farm. In addition, the plaintiffs alleged that when they filed discrimination complaints about the handling of their loan applications, USDA failed to investigate them. The department has not issued stays of foreclosure in either of these lawsuits.

In June 2001, USDA's Acting General Counsel wrote a memo that explained the department's reasoning for issuing stays of foreclosure in response to some class action lawsuits, but not others. The memo stated that the stay of foreclosure agreement included in the Pigford consent decree was reached only in the context of litigation and only to settle a lawsuit in which a class action had already been certified by the district court. The memo went on to say that the stay of foreclosure policy issued in response to the Keepseagle lawsuit was implemented during the infancy of the lawsuit while USDA and the Department of Justice evaluated how to proceed in defending it. In addition, the memo stated that USDA did not intend to continue a stay of foreclosure beyond the evaluation. Further, the Acting General Counsel wrote that in all three of the pending lawsuits—Keepseagle, Garcia, and Love—no adequate factual bases had been alleged to support the claims of discrimination made even by most of the named plaintiffs. As a result, the department saw no reason to implement a policy to halt foreclosures and other similar actions affecting borrowers potentially involved in these lawsuits. As of August 2002, a class had been certified for the Keepseagle lawsuit, but not for the Garcia suit. USDA has not issued any further stays of adverse action for participants in any of these lawsuits.

Although USDA has not issued a stay of foreclosure for potential class members in Garcia, relatively few Hispanic farmers have been affected by this decision. According to our survey results, FSA accelerated the direct loans of almost 1,500 borrowers during fiscal years 2000 and 2001; only 41 of these borrowers were Hispanic. Six of these 41 farmers also had their loans foreclosed on by FSA during this period. In addition to these 41 borrowers, 10 other Hispanic borrowers who had their loans accelerated in prior years were foreclosed on during fiscal years 2000 and 2001. To put these figures into context, during this period, FSA foreclosed on approximately 600 borrowers, 16 (or 3 percent) of whom were Hispanic. During this period, Hispanic farmers made up about 4 percent of the agency's direct loan portfolio. FSA does not maintain historic information on accelerations or foreclosures in a manner that allows this information to be retrieved or analyzed readily. FSA officials acknowledged that such information is needed in light of the frequent charges of discrimination the agency faces.

Despite implementing many improvements recommended by USDA's Inspector General and task forces, OCR has made only modest progress in processing complaints in a timely manner. Additional progress has been hindered because OCR has yet to address underlying, severe human capital problems. In addition, USDA's criterion for timely processing covers only a portion of the three major stages of complaint processing.
OCR officials acknowledge that without time requirements that address all phases of processing, the office lacks a meaningful way to measure timeliness or to identify and address problem areas and staffing needs.

OCR has adopted many recommendations made in the past by USDA's Inspector General and agency task forces. For example, in 2000, a USDA task force identified 54 tasks to help address problems with OCR's organization and staffing, database management, and complaint processing. As of July 2002, the office has fully implemented 42, or nearly 80 percent, of these tasks and plans to complete actions on most of the others by October 2002. In addition, OCR has made some organizational modifications, such as creating separate employment and program directorates that report under separate lines of supervision and adding three new divisions to the current structure: Program Adjudication, Program Compliance, and Resource Management Staff. Further, from the beginning of fiscal year 2000 to the end of fiscal year 2001, OCR made significant progress in reducing its inventory of complaints from 1,525 to 594.

Despite these actions, however, OCR continues to fail to meet USDA's requirement that program complaints be processed in a timely manner. Specifically, USDA's internal requirements direct OCR to complete its investigative reports within 180 days after accepting a discrimination complaint. However, during fiscal years 2000 and 2001, it took OCR on average 365 days and 315 days, respectively, to complete its investigative reports. Furthermore, as shown in figure 1, the 180-day requirement covers only a portion of the three major stages of the entire processing cycle. Accordingly, even if the 180-day requirement were met, it could still take OCR 2 years or more to complete the processing of a complaint. In fact, when all phases of complaint resolution are accounted for, it took OCR an average of 772 and 676 days in fiscal years 2000 and 2001, respectively, to process complaints through the entire complaint cycle and issue the final agency decision.

OCR has made only modest progress in improving its timely processing of complaints because it has yet to address severe, underlying human capital problems. According to USDA officials, the office has had long-standing problems in obtaining and retaining staff with the right mix of skills. The retention problem is evidenced by the fact that only about two-thirds of the staff engaged in complaint processing in fiscal year 2000 were still on board 2 years later. OCR officials also pointed out that this staffing problem has been compounded because management and staff have been intermittently diverted from their day-to-day activities by such things as responding to requests for information from the courts. OCR officials stated that this pattern of disruption has been continuous since 1997. Furthermore, severe morale problems have exacerbated staff retention problems and have adversely affected the productivity of the remaining staff. Management officials told us that they spend an inordinate amount of time and resources addressing internal staff complaints. In fact, during fiscal years 2000 and 2001, OCR had one of the highest rates within USDA of administrative complaints filed by employees. This atmosphere has led to frequent reassignments or resignations of OCR managers and staff.
According to OCR's Deputy Director of Programs, the problem has reached the point where some staff have even threatened fellow employees or sabotaged their work. Although OCR's Director believes that the situation has improved over the past few years, he acknowledges that some of the more serious morale problems have not been resolved.

The purpose of USDA's direct loan program is to provide loans to farmers who are unable to obtain private commercial credit. Over the past decade, USDA has continually faced allegations of discrimination in making direct loans to farmers. To help guard against such charges, FSA needs to improve its monitoring and accountability mechanisms and make its systems and decision processes more consistent and transparent. Although FSA monitors variations in loan processing times and approval rates, it lacks criteria for determining when discrepancies warrant further inquiry. Similarly, while FSA conducts periodic reviews of its state offices' civil rights conduct and makes suggestions for improvement, it cannot ensure that these suggestions have been effective, or even adopted, without a requirement that state offices implement its recommendations or, if not, explain their reasons for not doing so.

In addition, USDA has also been criticized for its handling of the allegations themselves, whether they were handled through litigation or the agency's complaint processes. In the case of class action lawsuits, the agency has been charged with treating different minority groups inequitably because it grants stays of foreclosure to some groups but not to others. Without a standard, transparent policy that lays out the factors USDA considers in deciding whether or not to issue stays, the agency faces the continued problem of having its decisions viewed as unfair. Furthermore, if USDA does not improve its process of reconciling its lists of complainants, it runs the risk of violating its policy of not taking foreclosure actions against farmers with pending discrimination complaints. In addition, without maintaining historical information on foreclosures, USDA lacks an important tool to help it understand its equal opportunity performance.

In the case of USDA's processing of complaints, its Office of Civil Rights continues to process complaints in an untimely manner. Also, without a time requirement that covers all stages of complaint processing, USDA lacks a meaningful way to measure performance or to identify and remedy problem areas and staffing needs. Furthermore, until USDA addresses long-standing human capital problems within OCR, it is unlikely that the timeliness of complaint processing will significantly improve.

To help resolve issues surrounding charges of discrimination in FSA's direct loan program, we recommend that the Secretary of Agriculture establish criteria for determining when discrepancies between minority and non-minority loan processing times and approval rates warrant further inquiry; and require state offices to implement recommendations made as a result of FSA field reviews or explain in writing their rationale for not doing so.
To help address problems related to FSA foreclosures, we recommend that the Secretary of Agriculture develop and promulgate a policy statement that lays out the factors USDA considers in issuing stays of foreclosure in class action lawsuits; maintain historic information, by race, on foreclosures completed; and direct FSA and OCR to improve communications to ensure that foreclosure actions are not taken against borrowers with pending complaints.

To help address long-standing problems related to OCR's untimely processing of complaints, we recommend that the Secretary of Agriculture establish time requirements for all stages of the complaint process and monitor OCR's progress in meeting these requirements; and develop an action plan to address ongoing problems with obtaining and retaining staff with needed skills, establish performance measures to ensure accountability, and monitor OCR's progress in implementing the plan.

We provided a copy of a draft of this report to USDA's Farm Service Agency, Office of General Counsel, and Office of Civil Rights for their review and comment. FSA and OGC generally agreed with the information in the report and provided technical and clarifying comments. We have incorporated these comments as appropriate. OCR commented that it was in general agreement with our recommendations but wanted us to give more prominence to the progress it has made in notifying FSA about filed complaints, improving complaint processing, and addressing morale problems. We have revised the report to more clearly reflect OCR's progress in certain areas. These comments and our response are presented in appendix II.

To compare the processing times for direct loans for Hispanic farmers with those for non-Hispanic farmers, we analyzed FSA data and obtained FSA officials' explanations for differences we observed. To analyze USDA's policies for staying foreclosures and how they have been implemented, we obtained relevant USDA policies and memoranda and, through file reviews (in California, Texas, New Mexico, and Washington), determined the extent to which these policies were followed. To assess USDA's progress in addressing previously identified problems associated with slow processing of discrimination complaints and resolution of human capital issues within USDA's Office of Civil Rights, we reviewed USDA status reports and obtained senior managers' views on why previously identified problems persist. (App. I contains a more detailed discussion of our scope and methodology.) We performed our review from October 2001 through August 2002 in accordance with generally accepted government auditing standards.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to congressional committees with jurisdiction over farm programs, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IV.
To compare the processing times for direct loans for Hispanic farmers with those for non-Hispanic farmers, we interviewed FSA officials at the national, state, and county levels about the types of direct loans that FSA provides as well as the steps that are followed in the loan-making process. We also reviewed FSA regulations and procedures related to direct loan processing. Because of completeness and reliability issues with FSA's direct loan data, we were not able to perform detailed analyses of loan processing times for Hispanic and non-Hispanic farmers using a download of FSA loan data. Instead, we analyzed direct loan processing times using FSA reports based on historical data for fiscal years 2000 and 2001. We calculated loan processing times from the date the farm loan application was complete to the date of the agency decision to approve or reject the loan application. We compared the average processing times for all complete applications from Hispanic farmers to those from non-Hispanic farmers (a minimal sketch of this calculation appears below). We also calculated loan approval rates using FSA historical loan data. We were unable to provide information about the loan amounts requested and received by borrowers for comparison purposes because these data have not been tested by FSA for completeness and reliability.

To identify USDA's policies for staying foreclosures and to determine how they have been implemented, we interviewed officials from USDA's Office of Civil Rights, Office of General Counsel, FSA's Civil Rights staff, and FSA state offices. We reviewed policies and procedures for implementing stays of foreclosure, where available. In those instances where written guidance was not available, we relied on interviews with officials from USDA's Office of General Counsel and written correspondence regarding the department's actions. In reviewing FSA's implementation of its stay of foreclosure policy in response to administrative complaints, we limited our work to the four states that received the largest number of Hispanic loan applications during fiscal year 2001—California, New Mexico, Texas, and Washington. To identify Hispanic farmers who had filed discrimination complaints against FSA and whose complaints were processed during fiscal years 2000 and 2001, we obtained a list of Hispanic farmers from OCR and reviewed available FSA state office direct loan and complaint files to determine whether the FSA farm loan chiefs had been notified when a farmer had filed a complaint and whether FSA had implemented a stay of adverse action. In addition, we followed up with FSA's Civil Rights staff, with regard to those complainants who did not have a state loan file or a stay of adverse action notice in the state complaint file, to determine whether the office had sent out notices to stay adverse actions.

To obtain previously unavailable national data for fiscal years 2000 and 2001 about the number of FSA accelerations and foreclosures of direct loans made to Hispanic and non-Hispanic farmers, we surveyed FSA farm loan chiefs in all 50 states, as well as Guam, the Virgin Islands, and Puerto Rico. The response rate to our survey was 100 percent.

To assess USDA's progress in addressing previously identified problems with its civil rights office's organizational structure, staff turnover, and complaint processing times, we reviewed reports from USDA's Office of Inspector General, internal agency task forces, the U.S. Commission on Civil Rights, and the Congress.
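As an illustration of the processing-time measure described above, the following is a minimal sketch in Python, using hypothetical application records rather than actual FSA data, that computes an average processing time and the share of applications decided within the 60-day requirement.

```python
from datetime import date

# Hypothetical records: (date application complete, date of agency decision).
applications = [
    (date(2000, 3, 1), date(2000, 3, 21)),
    (date(2000, 6, 5), date(2000, 6, 30)),
    (date(2001, 1, 10), date(2001, 1, 26)),
]

# Processing time: days from completed application to agency decision.
times = [(decided - completed).days for completed, decided in applications]

average_days = sum(times) / len(times)
share_within_60 = sum(t <= 60 for t in times) / len(times)

print(f"Average processing time: {average_days:.1f} days")
print(f"Processed within 60-day requirement: {share_within_60:.0%}")
```

The same calculation, run separately over Hispanic and non-Hispanic applications, yields the group averages compared in the report.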
We discussed problems and recommended remedies with officials from OCR and FSA. We also examined budget justification documents, USDA departmental regulations, and OCR procedures. Due to problems with OCR's program complaint database, we relied on, but were unable to verify, processing information published in USDA's annual program performance reports for fiscal years 2000 and 2001. As noted in the 2001 report, USDA modified the method it used for calculating processing times that year. If its prior method had been used, processing times would have increased by 14 percent. We conducted our review from October 2001 through August 2002 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Office of Civil Rights' letter dated September 11, 2002.

1. Since early 2000, OCR has coordinated on a monthly basis with FSA to reconcile their respective lists of complainants. However, OCR's Long Term Improvement Plan (LTIP)—issued in October 2000—noted that current procedures had not ensured that FSA was notified about newly filed complaints in time to prevent foreclosures or other adverse actions against complainants. In addition, one of the cases we noted in our report occurred in 2001—well after the implementation of the monthly meetings. When asked about this and another case, FSA officials told us that the current procedures still needed improvement. (As we noted in the report, foreclosure actions were halted once FSA was informed that OCR had accepted the complaints.) Given the importance of halting foreclosure actions once a complaint has been filed, we believe that OCR and FSA need to improve communications about borrowers with pending complaints.

2. We have added information about OCR's reduction of its inventory of complaints. However, unless OCR reduces the time it takes to process complaints, the inventory will expand once again. While we acknowledged that OCR has made modest progress in reducing its processing time, it still exceeded its own interim goals for timeliness by 75 percent in fiscal year 2001.

3. The seven essential needs cited by OCR, for the most part, involve improving the office's work processes. Although these improvements should indirectly help improve morale, they do not directly address the severe problems cited by the Deputy Director, such as staff threatening fellow employees or sabotaging their work. We revised the report to reflect the Director's belief that the situation has improved over the past several years and his acknowledgment that some of the more serious morale problems have yet to be resolved.

4. During the course of our review, several senior OCR managers referred to the increased workloads created by the courts' requests for files and other information needed to resolve pending lawsuits. In addition, OCR's October 2000 LTIP noted that investigative staff had been assigned to a variety of non-investigative projects, which delayed the processing of complaints. We have removed the reference regarding the Equal Employment Opportunity Commission.

5. Our report focused on the timeliness of processing program complaints and not on EEO complaints filed by USDA employees.

6. GAO did not mean to imply that OCR's productivity is declining. Rather, we are making the point that serious morale problems adversely affect productivity, and we have revised the report accordingly.
While the number of EEO complaints filed by OCR employees declined between fiscal years 2000 and 2001, OCR continues to have one of the highest complaint rates within USDA.

In addition to those named above, Natalie H. Herzog, Jacqueline A. Cook, Lynn M. Musser, Robert G. Crystal, and George H. Quinn Jr. made key contributions to this report.
The Farm Service Agency (FSA) runs a direct loan program that provides loans to farmers who are unable to obtain private commercial credit to buy and operate farms. FSA is required to administer this program in a fair, unbiased manner. GAO found that during fiscal years 2000 and 2001, FSA averaged 4 days longer to process loan applications from Hispanic farmers than it did for non-Hispanic farmers: 20 days versus 16 days. However, the processing times in three of the four states with the highest number of Hispanic borrowers were faster than they were for non-Hispanic borrowers in those states. FSA's direct loan approval rate was somewhat lower for Hispanic farmers than for non-Hispanic farmers nationwide--83 and 90 percent, respectively. The Department of Agriculture's (USDA) policies for staying foreclosures when discrimination has been alleged depend on the method used to lodge complaints. When an individual has a discrimination complaint accepted by USDA's Office of Civil Rights (OCR), FSA's policy is to automatically issue a stay of foreclosure until the complaint has been resolved. A GAO survey revealed that during fiscal years 2000 and 2001, FSA foreclosed on the loans of 600 borrowers nationwide. Although Hispanic farmers make up 4 percent of the agency's direct loan portfolio, 3 percent of these foreclosures involved Hispanic farmers. OCR has made modest progress in reducing the length of time it takes to process discrimination complaints. USDA requirements direct OCR to complete its processing up through the investigative phase of complaints within 180 days of acceptance. It does not, however, have a time requirement for all of the phases of complaint processing.
Created in 1961 to counter hijackers, the organization that is now FAMS was expanded in response to the September 11, 2001, terrorist attacks. On September 11, 2001, 33 air marshals were operating on U.S. flights. In accordance with the Aviation and Transportation Security Act (ATSA), enacted in November 2001, TSA is authorized to deploy federal air marshals on every passenger flight of a U.S. air carrier and is required to deploy federal air marshals on every flight determined by the Secretary of Homeland Security to present high security risks—with nonstop, long-distance flights, such as those targeted on September 11, 2001, considered a priority. Since the enactment of ATSA, FAMS’s staff has grown significantly, and, as of July 2016, FAMS employed thousands of air marshals.

FAMS received an increase in appropriations each fiscal year from 2002 through 2012—peaking at an appropriation of approximately $966 million in fiscal year 2012. However, since 2012, FAMS has experienced a reduction in amounts appropriated. Specifically, FAMS received appropriations amounting to approximately $908 million in fiscal year 2013, $819 million in fiscal year 2014, and $790 million in fiscal year 2015. Of these appropriations, TSA expenditures for FAMS training were about $1.7 million, $4.4 million, $6 million, and $4.8 million in fiscal years 2012, 2013, 2014, and 2015, respectively. According to FAMS officials, due in part to reductions in its appropriations, FAMS hired no new air marshals during fiscal years 2012 through 2015. However, FAMS received appropriations amounting to $805 million for fiscal year 2016 (an increase of about $15 million over fiscal year 2015) and hired new air marshals in fiscal year 2016.

FAMS and TSA’s OTD share responsibility for providing training to federal air marshals. OTD is primarily responsible for designing, developing, and evaluating all the training courses that air marshals receive. In addition, OTD delivers the training programs that are offered at TSATC and oversees training instructors assigned there. These training programs include, among others, FAMTP, discussed later in this report, and the field office training instructor program that is taught at TSATC. FAMS, in collaboration with OTD, develops training requirements for air marshal candidates and incumbent air marshals, serves as a subject matter expert to OTD in developing and evaluating new or proposed training courses, and operates and oversees the FAMS recurrent training program, which is taught by training instructors within each FAMS field office.

To ensure that air marshals are fully trained and can effectively carry out FAMS’s mission, TSA established FAMTP. Air marshal candidates are required to successfully complete 16 and one-half weeks of training. After an initial one-week orientation at TSATC, air marshal candidates complete FAMTP in two phases. FAMTP-I is a seven-week course in which new hires learn basic law enforcement skills at the Federal Law Enforcement Training Center in Artesia, New Mexico. On completing FAMTP-I, FAMS candidates complete FAMTP-II—an eight-and-one-half-week course at TSATC that is intended to teach air marshal candidates the knowledge, skills, and abilities necessary to prepare them for their roles as federal air marshals. Once air marshal candidates graduate from FAMTP-II, they report for duty at their assigned field office. As incumbent air marshals, they are required to complete 160 hours of recurrent training courses annually.
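As an illustration only (not TSA tooling), the following minimal sketch encodes the candidate pipeline described above as data and checks the duration arithmetic:

# A minimal sketch, illustrative only (not TSA tooling): encode the FAMTP
# phases described above as data and verify the duration arithmetic.
from dataclasses import dataclass

@dataclass
class TrainingPhase:
    name: str
    location: str
    weeks: float

FAMTP_PIPELINE = [
    TrainingPhase("Orientation", "TSATC", 1.0),
    TrainingPhase("FAMTP-I (basic law enforcement skills)",
                  "Federal Law Enforcement Training Center, Artesia, NM", 7.0),
    TrainingPhase("FAMTP-II (air marshal knowledge, skills, and abilities)",
                  "TSATC", 8.5),
]

total_weeks = sum(phase.weeks for phase in FAMTP_PIPELINE)
assert total_weeks == 16.5  # matches the 16-and-one-half-week requirement

ANNUAL_RECURRENT_HOURS = 160  # owed annually by incumbent air marshals
print(f"Candidate pipeline: {total_weeks} weeks; "
      f"incumbents: {ANNUAL_RECURRENT_HOURS} recurrent hours per year")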
FAMTP-II courses serve as the core of the recurrent training courses, and incumbent air marshals receive these courses from training instructors in training facilities in or near their respective field offices. These recurrent training courses are intended to ensure air marshals maintain and enhance perishable tactical skills that are deemed critical to the success of FAMS’s mission. FAMS recurrent training includes both mandatory refresher courses that all air marshals must complete every year and a broad set of courses within several disciplines, as shown in figure 1, that field offices must ensure are incorporated into their annual or quarterly training plans. The mandatory courses include use of force, off-range safety, fire extinguisher use, and baton use. The remainder of air marshals’ annual recurrent training hours must include courses within each of the FAMS training disciplines, such as defensive measures, firearms, mission tactics, and physical fitness. FAMS also requires air marshals to pass quarterly firearms qualifications, complete biannual fitness assessments, and pass periodic medical exams, which are discussed later in this report.

In October 2010, DHS issued its Learning Evaluation Guide to help the department’s learning and development community evaluate the effectiveness of its training activities. Among other things, the guidance identifies the Kirkpatrick model—a commonly accepted training evaluation model that is endorsed by the Office of Personnel Management in its training evaluation guidance—as a best practice. This model is commonly used in the federal government. The Kirkpatrick model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. The following is a description of what each level within the Kirkpatrick model is to accomplish:

Level 1: The first level measures the training participants’ reaction to, and satisfaction with, the training program. A level 1 evaluation could take the form of a course survey that a participant fills out immediately after completing the training.

Level 2: The second level measures the extent to which learning has occurred because of the training effort. A level 2 evaluation could take the form of a written exam that a participant takes during the course.

Level 3: The third level measures how training affects changes in behavior on the job. Such an evaluation could take the form of a survey sent to participants several months after they have completed the training to follow up on the impact of the training on the job.

Level 4: The fourth level measures the impact of the training program on the agency’s mission or organizational results. Such an evaluation could take the form of comparing operational data before and after a training modification was made.

TSA’s primary method for assessing air marshals’ training needs is by holding Curriculum Development Conferences (CDC) and Curriculum Review Conferences (CRC). Specifically, OTD holds CDCs to determine whether to approve proposals to develop new training courses, and convenes CRCs to evaluate the effectiveness of existing FAMTP courses and, if appropriate, to make recommendations to address any identified shortcomings.
These conferences are composed of OTD officials responsible for developing and implementing FAMTP courses and relevant subject matter experts, such as training instructors, field office Supervisory Air Marshals-in-Charge (SACs), SFAMs, and air marshals. According to OTD guidance and consistent with Federal Law Enforcement Training Accreditation Board standards, CDCs are to be held prior to the development of new training programs, and CRCs are to be held no less than every three years or sooner if directed by FAMS management. CDCs can also be held in response to directives from OTD or FAMS management and to requests for additional training from FAMS personnel. As part of the CDCs and CRCs, OTD conducts assessments to determine the extent to which existing FAMTP courses are current with existing or planned FAMS policies, procedures, and new equipment or technology, and address the known threat environment and air marshals’ training needs. When doing so, OTD considers various sources of information including, among others, its job task analysis; training-related concerns raised by field office focus groups; feedback from air marshal candidates, training instructors, and other subject matter experts; and intelligence. According to OTD guidance, this information is primarily to be gathered as described below.

TSATC Training Evaluation Surveys: OTD’s student critique review program evaluates FAMTP training courses delivered at TSATC consistent with Kirkpatrick levels 1 and 3. Under this program, OTD solicits and reviews feedback from air marshal candidates on the quality of the FAMTP courses that they complete at TSATC and from newly graduated air marshals on the extent to which these courses effectively prepare them for their duties. Specifically, consistent with Kirkpatrick level 1, OTD requires air marshal candidates to complete a course evaluation on the effectiveness of the course and the quality of the instructor and facility immediately after completing the course. Further, consistent with Kirkpatrick level 3, TSATC surveys newly graduated air marshals 10 to 12 months after they have graduated from FAMTP-II, and their supervisors within 12 months of their graduation, to obtain their feedback on the extent to which the training adequately prepared the FAMTP graduates to successfully perform their mission. In addition, the program provides OTD with feedback from air marshal candidates and newly graduated air marshals on the effectiveness of the FAMTP curriculum, instructor performance, and TSATC facility, safety, or other related issues. This feedback is used by CDCs and CRCs to identify training gaps and determine how to appropriately address them. However, as described later in this report, response rates by air marshals on these surveys have been low.

TSATC Examinations and Simulations: Consistent with Kirkpatrick level 2, OTD requires air marshal candidates to pass written exams or job simulations in order to advance through FAMTP. Specifically, air marshal candidates must demonstrate that they possess the knowledge, cognitive, or physical skills that classroom courses are intended to impart by passing examinations. OTD has developed evaluation tools, such as checklists, that TSATC training instructors must use to objectively determine air marshal candidates’ proficiency in law enforcement tactics and techniques such as marksmanship, defensive tactics, arrest procedures, and decision-making.
OTD collects and analyzes the data on newly hired air marshals’ performance to determine the extent to which air marshal candidates have mastered the learning objectives of each FAMTP course and to identify any areas in the curriculum that may need revision. For example, OTD officials stated that they may revise examination questions in response to a relatively high number of air marshal candidates failing a question or series of questions due to poor wording. Furthermore, OTD uses these data to identify and address any training needs not met by the existing FAMTP curriculum when carrying out CRCs and CDCs. In addition to the surveys and examinations used to evaluate FAMTP curriculum provided at TSATC, OTD officials noted additional information sources they use, including field office training assessment teams and quarterly training teleconferences.

Field Office Training Assessment Teams: OTD established field office assessment teams, which consist of TSATC instructors, to assess field office training programs and their instructors. As described earlier, field office training programs primarily provide the recurrent training that incumbent air marshals are required to fulfill each year. In advance of the assessment team’s visit, TSATC sends surveys to supervisors and air marshals in the field with questions on the effectiveness of the field office’s training program, including its training instructors and facilities, as well as FAMS’s training curriculum. According to OTD officials, when conducting assessments at the field offices, team members are to observe field office trainers in class to ensure that FAMTP courses are taught uniformly across all FAMS field offices. They also are to review the field office’s training records and policies and procedures to ensure the field office’s training program is in compliance with OTD and FAMS policies, and, when necessary, to make recommendations for improvement. For example, OTD officials told us that an assessment team discovered a field office whose training staff were using unapproved “dynamic fighting” tactics to teach air marshals how to fend off multiple attackers when cornered, which had resulted in many severe injuries. In this case, the assessment team halted use of the unapproved scenarios and provided approved lesson plans that taught air marshals to counter multiple attackers. OTD officials stated that training assessment team visits also provide opportunities for TSATC trainers to engage directly with field office trainers and air marshals to share new best practices and identify any unmet training needs. However, as we discuss later in this report, OTD has not sent assessment teams to evaluate field office training programs since March 2013.

Quarterly Training Teleconferences: OTD holds quarterly conference calls between TSATC staff, FAMS headquarters training staff, and field office training staff to discuss service-wide training issues. According to OTD officials, these teleconferences provide opportunities to elicit feedback from trainers on unmet training needs and any challenges in delivering training, and to share best practices among the field offices.

OTD conducts surveys to obtain feedback from air marshal candidates and newly graduated air marshals on the effectiveness of FAMTP courses they complete at TSATC and the quality of TSATC trainers and facilities, consistent with Kirkpatrick levels 1 and 3.
However, OTD does not also obtain such feedback from incumbent air marshals after they complete their recurrent training courses at their respective field offices. Our previous work on federal training programs, as well as DHS’s Learning Evaluation Guide, has found that implementing a balanced, multi-level, systematic approach to evaluating and developing training, such as the Kirkpatrick model, can provide agencies with the varied data and perspectives on the effectiveness of training efforts necessary to identify problems and improve training and development programs as needed. In addition, our work has also shown that agencies should ensure that they incorporate a wide variety of stakeholder perspectives in assessing the impact of training on employee and agency performance.

OTD officials stated that conducting level 1 and 3 evaluations for air marshal candidates and newly graduated air marshals has provided sufficient feedback to reliably identify all air marshals’ training needs because the agency has taken steps to ensure that the content and quality of training for air marshal candidates is identical to that of recurrent training for incumbent air marshals. However, FAMS did not hire any new air marshals from fiscal years 2012 through 2015. As a result, TSA has not systematically gathered feedback on the effectiveness of FAMTP training curriculum from air marshals for approximately four years. Over this time period, OTD has revised the training curriculum, such as adding a course on personal security when overseas and expanding the number of courses within the legal and investigative discipline to cover all transportation modes. Moreover, while the minimum skill requirements may be the same for both air marshal candidates and incumbent air marshals in the field, the training needs of the two groups may not necessarily be identical. With greater experience in carrying out missions, incumbent air marshals may have a better sense of their training needs than air marshal candidates or newly graduated air marshals, which could result in more experienced incumbent air marshals providing different feedback on the quality of the training. Further, although incumbent air marshals take many of the same training courses as air marshal candidates, they do so at different facilities and with different instructors.

OTD officials also stated that field office training assessments and quarterly training teleconferences provide additional opportunities both to ensure that the training all air marshals receive is standardized across the service and to obtain incumbent air marshal feedback. However, OTD has not sent assessment teams to evaluate field office training programs since March 2013 due, in part, to a lack of resources. OTD officials reported that they plan to resume field office training assessments during fiscal year 2017 and conduct assessments at 10 FAMS field offices per year if sufficient funding is available. These officials also reported that OTD plans to increase the frequency of training teleconferences between TSATC and field office training programs from a quarterly to a monthly basis and invite field office leadership—SACs and Assistant Supervisory Air Marshals-in-Charge (ASACs)—to participate in these meetings. Nevertheless, our review suggests that OTD could benefit from broadening its efforts to gather feedback on recurrent training courses.
First, field office staff we interviewed at the seven field offices we visited stated that improvements to training could better prepare them for their roles. For example, SFAMs and training staff in four of the seven field offices we visited stated that the training curriculum is overly focused on the training needs of air marshal candidates and newly graduated air marshals. Staff from five of the seven field offices also identified advanced training courses beyond those currently provided that they believed should be offered to incumbent air marshals, in areas such as firearms, defensive, or medical training. Second, field office staff at all seven field offices we visited identified training that should be revised, expanded, or added, to include topics such as active shooter response, counter surveillance and behavior detection techniques, training on improvised explosive devices and other explosives, and expanded legal and investigative training, among others. These sources also told us that the curriculum did not adequately address changes in their responsibilities over time, which include a broader set of current threats such as improvised explosive devices or FAMS-specific training on active shooters.

OTD officials stated that they believed the current FAMTP curriculum adequately addresses the types of additional training that field office staff identified and that the curriculum has been designed to meet the needs of air marshals at all experience levels and may be consistently and safely delivered to the entire workforce. However, without a mechanism to systematically collect and incorporate feedback on field-based training for incumbent air marshals, consistent with Kirkpatrick levels 1 and 3, OTD could miss important opportunities to identify problems and improve overall training and development.

When OTD administered surveys to obtain feedback on the FAMTP-II and field-based training, the response rates were substantially lower than the 80 percent rate OMB encourages for federal surveys that require its approval. Specifically, about 19 to 38 percent of the air marshals who graduated from FAMTP-II and their supervisors responded to the surveys that TSATC administered from 2009 through 2011—the last 3 full years in which FAMS hired air marshals. Additionally, according to OTD officials, the combined response rate for the surveys that training assessment teams conducted from June 2012 through March 2013 was about 16 percent. OTD staff acknowledged that the response rates to these surveys have been consistently low, but stated that the low response rates have not significantly affected the usefulness of the surveys. According to OTD staff, with regard to the FAMTP-II surveys, they received a sufficient number of responses to successfully evaluate the extent to which FAMTP courses have met all air marshals’ training needs. However, OMB guidance stipulates that agencies must design surveys to achieve the highest practical rates of response to ensure that the results are representative of the target population and can be used with confidence as input for informed decision-making. The guidance also states that response rates are an important indicator of the potential for nonresponse bias, which could affect the accuracy of a survey’s results. In general, as a survey’s response rate increases, the likelihood of a bias problem decreases, and, therefore, the views and characteristics of the target population are more accurately reflected in the survey’s results.
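To make the arithmetic concrete, the following minimal sketch (with hypothetical office names and counts rather than actual survey data) shows how response rates could be computed per field office and compared against the 80 percent rate OMB encourages, so that follow-up could be targeted where nonresponse risk is highest:

# A minimal sketch with hypothetical data -- not OTD's actual survey system.
from collections import Counter

OMB_ENCOURAGED_RATE = 0.80  # the response rate OMB encourages

def response_rates_by_office(invited, responded):
    """invited/responded: lists of field office names, one entry per survey."""
    sent = Counter(invited)
    answered = Counter(responded)
    return {office: answered[office] / n for office, n in sent.items()}

# Hypothetical illustration only -- not actual FAMS survey data.
invited = ["Office A"] * 40 + ["Office B"] * 25 + ["Office C"] * 20
responded = ["Office A"] * 9 + ["Office B"] * 21 + ["Office C"] * 4

for office, rate in sorted(response_rates_by_office(invited, responded).items()):
    flag = "  <- target extra follow-up here" if rate < OMB_ENCOURAGED_RATE else ""
    print(f"{office}: {rate:.0%}{flag}")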
OMB guidance also describes the methods agencies can use to improve the response rates of future surveys, including conducting outreach to groups of prospective respondents, ensuring that the survey is well-designed and brief, providing alternative modes for submitting responses, conducting nonresponse follow-up efforts, and extending cut-off dates for survey completion. OTD officials reported that they have taken several of these actions to improve the response rates of the FAMTP-II surveys, but have had little success. Specifically, officials stated that TSATC instructors and staff discussed the surveys and their importance to improving future course offerings in class. In addition, OTD officials reported designing the survey to be as brief as possible, making it accessible via the internet and air marshals’ handheld devices, sending out follow-up reminders via e-mail and telephone, and contacting non-respondents’ field office supervisors. OTD officials told us that the low response rates may be attributable to “survey fatigue,” given the high number of surveys that TSA employees are asked to complete, and stated that there was little more that they could do to persuade air marshals to respond.

Although OTD officials reported taking several of the actions that OMB recommends for improving survey response rates, additional actions could improve the response rates of future OTD surveys, including those administered to the air marshals FAMS hired this year. For example, monitoring future survey response rates by field office could help OTD identify and then target extra follow-up efforts to air marshals and their supervisors in field locations that have comparatively low response levels. Further, extending the cut-off date for air marshals and their supervisors to respond to the survey, or requiring survey recipients to complete the surveys, could help improve response rates to future surveys. Until OTD achieves sufficient response rates, it cannot be reasonably assured that the feedback it receives represents the full spectrum of views held by air marshals or their supervisors. Achieving an adequate response rate is important, particularly as FAMS’s CRC and CDC processes rely, in part, on the survey results to identify training gaps and determine how to appropriately address them.

FAMS relies on its recurrent training program to help ensure incumbent air marshals’ mission readiness, but additional actions could strengthen FAMS’s ability to do so. First, FAMS does not have complete and timely data on the extent to which air marshals have fulfilled their recurrent training requirements. Second, FAMS evaluates incumbent air marshals’ proficiency in some, but not all, key skills using tools such as examinations or checklists. In addition, FAMS has established a new health, fitness, and wellness program as part of its recurrent training program—in part to address recent concerns with air marshals’ fitness and injury rates—but it is too early to gauge the program’s effectiveness.

As shown in figure 2, FAMS requires air marshals to complete certain recurrent training requirements on a regular basis to ensure that they maintain their proficiency in the knowledge, skills, and abilities needed to successfully carry out FAMS’s mission. However, FAMS does not have complete and timely data to ensure air marshals’ compliance with these training requirements.
Senior OTD and FAMS officials responsible for developing and overseeing the recurrent training program, as well as field office SFAMs, training instructors, and air marshals at the field offices we visited, identified the importance of the FAMS training program to ensuring air marshals’ mission readiness. These personnel stated that air marshals are unique among their fellow law enforcement officers because air marshals lack regular on-the-job opportunities to actively utilize the knowledge, skills, and abilities they develop in training courses to address a key aspect of FAMS’s mission—defeating terrorist or other criminal hostile acts. Therefore, according to OTD and FAMS officials, FAMS ensures air marshals’ mission readiness by monitoring the extent to which they have completed their recurrent training requirements. According to FAMS policy, field office SACs or their designees are responsible for ensuring that air marshals assigned to them have completed their recurrent training requirements and that the completion of these requirements is recorded in FAMS’s database—Federal Air Marshal Information System (FAMIS)—no later than 5 days after an air marshal has completed a training requirement. FAMS headquarters personnel within the Field Operations Division (Field Operations) generate reports in FAMIS detailing the extent to which air marshals have passed the practical pistol course, participated in physical fitness assessments, and completed their requisite number of recurrent training hours on a quarterly and annual basis. According to Field Operations staff, these personnel contact field office SACs or their designees when these reports identify air marshals that have not met their recurrent requirements. If field office staff report that the air marshal(s) have completed a requirement(s), but have not entered this information in FAMIS, Field Operations is to request appropriate documentation and update FAMIS. Field Operations officials stated they discuss with field offices why any air marshals have not completed their training requirements, such as illnesses, injuries, or scheduling issues, and, if necessary, the field office SAC is to take appropriate action. In addition, FAMS policy allows for air marshals to be exempted from training requirements when certain conditions, such as illness, injury, or military leave, are met and defines the process by which exemptions are to be requested and granted. Specifically, FAMS policy states that SACs must prepare a letter to the appropriate regional director to request approval of the exemption no later than 5 days after the end of a quarter. Field Operations officials reported that a FAMS headquarters staff person records the exemption into FAMIS once a regional director has approved the request. FAMS has processes for field office SACs to monitor which air marshals have completed their required recurrent training each year, as well as those who have received exemptions from such training. However, we found that the data used to track this information were not complete or readily available for purposes of tracking air marshals’ compliance with these requirements when we requested these data in March 2015. We reviewed training data from FAMIS’s training module for calendar year 2014 to determine the extent that air marshals have met their recurrent training requirements. 
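The following minimal sketch illustrates the kind of completeness and timeliness test this review entailed; the record layout is hypothetical and is not the FAMIS schema. It flags records entered more than 5 days after completion, or never entered at all:

# A minimal sketch with a hypothetical record layout -- not the FAMIS schema.
from datetime import date, timedelta

ENTRY_DEADLINE = timedelta(days=5)  # FAMS policy: record within 5 days

# (air marshal ID, date completed, date entered into FAMIS) -- illustrative.
records = [
    ("AM-0001", date(2014, 3, 10), date(2014, 3, 12)),   # timely
    ("AM-0002", date(2014, 3, 10), date(2014, 4, 2)),    # entered late
    ("AM-0003", date(2014, 6, 5), None),                 # never entered
]

for who, completed, entered in records:
    if entered is None:
        print(f"{who}: completed {completed}, no FAMIS entry found")
    elif entered - completed > ENTRY_DEADLINE:
        days = (entered - completed).days
        print(f"{who}: entered {days} days after completion (limit 5)")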
Although we were ultimately able to determine that almost all of the air marshals met their training requirements or received an appropriate exemption in calendar year 2014, it was difficult to do so because data on both approved exemptions and training completions were missing or had not been entered in a timely manner. We found that nearly one-third of all training exemptions granted to air marshals in calendar year 2014 had not been entered into FAMIS. Specifically, at least 299 training exemptions granted to about 2 percent of air marshals had not been entered into FAMIS when we received the data in March 2015—nearly three months after the calendar year had ended. FAMS headquarters officials responsible for reconciling recurrent training service-wide stated that these exemptions were not entered into FAMIS until July 2015—seven months after the calendar year ended. These officials told us that the delay was partly because FAMS took the database offline for three weeks in September 2014 to allow for an upgrade of the system. As a result, the staff person responsible for entering exemptions had become backlogged and later entered the backlogged exemptions into the database, in part, to reconcile the missing exemptions that were identified through our analysis of the 2014 training data. Additionally, we found that nearly one-quarter of all training records for calendar year 2014 had been entered into FAMIS more than 5 days after an air marshal had completed the training. FAMS officials responsible for reconciling completion of recurrent training service-wide reported that each quarter there are a significant number of air marshals for whom field office staff have not entered training records. According to these officials, at the end of every quarter, FAMS Field Operations staff must contact staff from several field offices to remind them to review and enter missing training records—a process that officials described as labor-intensive. In December 2015, FAMS officials provided us with the updated records for the air marshals whose exemptions had been entered into FAMIS as a result of our audit work to demonstrate that the air marshals’ 2014 recurrent training data had been corrected and were complete.

TSA Office of Inspection (OOI) reports have found similar problems with the monitoring, or timely and accurate recording, of air marshals’ training records. Specifically, OOI inspections of FAMS’s field offices completed during 2010 through 2015 found that three field offices had not accurately recorded air marshals’ training data or done so in a timely manner—issues FAMS had not identified through its training monitoring process.

FAMS processes for monitoring the extent to which air marshals service-wide have completed their recurrent training requirements have not ensured that air marshals’ training data are entered in a timely manner. These processes, as defined in FAMS policy, lack effective controls to ensure accountability. Specifically, FAMS has not specified in policy who has oversight responsibility at the headquarters level for ensuring that each field office has entered recurrent training data in a timely manner. Additionally, FAMS has not specified in policy who has oversight responsibility at the headquarters level for ensuring that headquarters personnel have entered air marshals’ exemptions into FAMIS within a defined timeframe.
Federal Standards for Internal Control states that agencies should ensure that transactions and events are completely and accurately recorded in a timely manner, and are readily available for examination. Federal regulations require that agencies establish policies governing employee training including the assignment of responsibility to ensure the training goals are achieved. In addition, internal control standards state that in a good control environment, areas of authority and responsibility are clearly defined and appropriately documented through its policies and procedures, and appropriate lines of reporting are established. Given the number of training records that we found were incomplete or not entered into FAMIS in a timely manner, as well as the ongoing challenges that FAMS has faced in ensuring accurate and timely input of training and exemptions data as described in the OOI findings, policies that specify who is responsible at the headquarters level for overseeing these activities could help FAMS ensure its data on air marshals’ recurrent training are consistently accurate and up to date. Complete and readily available training and exemptions data would enable FAMS to more effectively determine the extent that air marshals service-wide have met their training requirements and are mission ready. Air marshals must demonstrate their proficiency in marksmanship by taking the practical pistol course on a quarterly basis and achieving a minimum score of 255 out of 300—the highest qualification standard for any federal law enforcement agency, according to FAMS officials. However, for the remainder of air marshals’ required recurrent training courses, FAMS does not assess air marshals against a similarly identified level of proficiency, such as by requiring examinations to evaluate air marshals’ knowledge in classroom-based courses or by using checklists or other objective tools to evaluate air marshals’ performance during simulation-based courses, such as mission tactics. For instance, FAMS’s recurrent training includes both mandatory refresher courses that all air marshals must complete annually as well as a broad set of courses within several disciplines that field offices must ensure are incorporated into their annual or quarterly training plans. However, FAMS does not require air marshals to take an examination for any course within these disciplines. Federal Standards for Internal Control states that agencies should establish expectations of competence for key roles, such as federal air marshals, to help the entity achieve its objectives, and that all personnel need to possess and maintain the relevant knowledge, skills, and abilities that allow them to accomplish their assigned duties. Additionally, GAO’s prior work on training and development states that in some cases, agencies may identify critical skills and competencies that are important to mission success, and require that employees meet requirements to ensure they possess needed knowledge and skills. Further, DHS’s Learning Evaluation Guide identifies testing or skill checklists as tools agencies can use to determine whether students have the knowledge and can perform the skills classes are designed to teach. The guide also states that learning activities that are skill-based, such as FAMS courses on tactical and defensive techniques, may require the development of skill checklists to determine the level of trainee proficiency. 
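As an illustration of the checklist concept the DHS guide describes, the following minimal sketch (with hypothetical evaluation items, not FAMS’s actual criteria) shows how a simple skill checklist could record pass or fail judgments and yield a standardized proficiency determination:

# A minimal sketch; the evaluation items are hypothetical, not FAMS criteria.
from dataclasses import dataclass, field

@dataclass
class SkillChecklist:
    course: str
    items: dict = field(default_factory=dict)  # item description -> passed?

    def record(self, item: str, passed: bool) -> None:
        self.items[item] = passed

    def proficient(self) -> bool:
        # Simple all-items-must-pass rule; a real standard might weight items.
        return bool(self.items) and all(self.items.values())

checklist = SkillChecklist("Mission tactics (recurrent)")
checklist.record("Responded with an appropriate level of force", True)
checklist.record("Applied the tactical principles taught in the course", True)
checklist.record("Articulated why the actions taken were appropriate", False)

print("Proficient:", checklist.proficient())
print("Needs remediation:", [i for i, ok in checklist.items.items() if not ok])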
Field Operations officials said that it is not necessary to use examinations for recurrent training courses because air marshals are continuously evaluated by field office training instructors and SFAMs who participate in their training. In addition, officials stated that air marshals demonstrate their proficiency in the various cognitive or physical skills they must possess during simulations conducted as part of FAMS’s recurrent training program. As a result, according to officials, FAMS can be assured that any gaps in air marshals’ performance are identified and addressed in a timely manner. Field Operations officials further stated that checklists are unnecessary because training instructors do not evaluate air marshals’ performance solely on whether their actions were appropriate and whether the air marshal correctly applied the relevant principles or tactics taught by course simulations; rather, air marshals must also articulate why their actions were appropriate and how they applied the relevant principles or tactics. For example, to evaluate air marshals’ performance in mission tactics simulations, Field Operations officials stated that training instructors observe air marshals’ actions in response to various simulated threats, ranging from verbal or physical assaults on the crew by passengers to suicide bombers. According to FAMS officials, training instructors evaluate the extent to which the actions taken by air marshals resulted in positive outcomes (i.e., protected the plane, passengers, and crew) and were carried out in accordance with applicable authorities, policies, procedures, and principles. Officials stated that part of this assessment is based on air marshals’ explanation of why their actions appropriately addressed the simulated threat and applied relevant FAMS principles and tactics. In addition, TSA has established a training instructor training program, which, according to FAMS Field Operations officials, ensures that training instructors are highly trained and certified and can therefore assess air marshals’ performance in a reasonably objective manner.

As previously discussed, OTD requires air marshal candidates and incumbent air marshals to demonstrate that they possess the knowledge or cognitive skills that classroom courses are intended to impart by passing examinations for training courses taught at TSATC. Additionally, when evaluating the performance of air marshal candidates and incumbent air marshals in courses taught at TSATC, OTD requires training instructors to use evaluation tools, including checklists. For example, TSATC training instructors must use these tools when evaluating air marshal candidates’ performance in defensive measures and mission tactics simulations as part of FAMTP-II. TSATC staff reported that they require TSATC training instructors to use such checklists because doing so better ensures air marshals are evaluated in an objective, fair, and consistent manner. Further, a field office SAC reported that, given the absence of an objective tool for assessing air marshals’ performance in field-based training, such as defensive measures and tactics, there are air marshals who have not fully demonstrated the requisite level of proficiency but still “passed” these courses and continued to fly missions.
According to the SAC, air marshals are flying missions with colleagues they do not view as mission ready, in part due to their performance in training courses—a concern raised by air marshals in 3 of the 7 field offices we visited. Finally, field office trainers in 3 field offices reported that a standardized tool for evaluating air marshals during training would help them to identify and address trainee deficiencies.

FAMS Field Operations officials also noted that standardized examinations or checklists during trainings are not necessary because SFAMs have opportunities to continually assess their air marshals’ mission readiness by flying with their squads or attending training. However, we found that SFAMs infrequently attend training with their squads or accompany them on flying missions, although they are not necessarily required to do so. The July 2014 FAMS Advisory Council minutes state that the council unanimously agreed that a large population of SFAMs do not fully participate in their air marshals’ required training. Air marshals from 6 of 22 field offices raised similar concerns, based on our review of the minutes from field office focus groups conducted in fiscal year 2014. In addition, SFAMs in all 7 field offices we visited reported that they rarely fly with their squads, i.e., once per quarter or less. Further, SFAMs in 6 of the 7 field offices stated that they rely on air marshals’ self-assessments and factors unrelated to mission readiness, such as the quality of administrative paperwork (i.e., travel vouchers and timecards) and completion of On-Line Learning Center (OLC) training, to assess air marshals’ performance.

Standardized methods for determining whether incumbent air marshals are mission ready in key training courses, such as required examinations or evaluation tools, could help provide better assurance that air marshals service-wide are mission ready. Objective and standardized methods of evaluating incumbent air marshals’ performance would better enable FAMS to assess air marshals’ proficiency in key skills and also more effectively target areas for improvement.

In 2015, FAMS developed a new physical fitness program—the Health, Fitness, and Wellness Program—in part to address recent concerns with air marshals’ fitness and injury rates, but it is too early to gauge the program’s impact. Over the period 2008 to 2015, FAMS commissioned two studies to evaluate air marshals’ health and fitness, as well as a third study to evaluate air marshal fatigue and sleeplessness. FitForce, a consulting group that conducted the first evaluation of air marshals’ fitness in 2009, found that nearly 32 percent of the air marshals who participated in the study exercised less than three times per week and almost 7 percent did not exercise at all. FitForce also concluded that physical fitness is a necessity for air marshals to be able to perform the essential functions of their job, and stated that FAMS should make a commitment to address the fitness needs of air marshals. Additionally, a 2012 sleep study conducted by Harvard University concluded that more than half of the air marshals who responded to the study’s survey were overweight and nearly one-third were obese, and, therefore, may suffer a variety of health issues that could directly impact mission readiness.
Furthermore, FAMS conducted its own review of air marshals’ fitness from 2012 through 2013 and concluded that air marshals suffered from high injury rates and declining overall health and wellness, which FAMS officials attributed in part to the increasing age of air marshals. Specifically, the review found that the injuries that occurred while air marshals took their physical fitness assessment from 2010 through 2013 had resulted in approximately 8,060 lost or restricted work days, 12,896 lost mission opportunities, and Office of Workers’ Compensation Program claims totaling over $1 million.

We analyzed the scores that air marshals achieved in calendar year 2014 when taking the quarterly Mission Readiness Assessment (MRA)—the health evaluation program that FAMS had in place at that time. We found that, with the exception of the 1.5-mile run, the majority of air marshals who took the MRA met or exceeded each of the MRA component test goals. In quarters 2 through 4 of calendar year 2014, 84 to almost 90 percent of air marshals who participated in the MRA failed to meet the 1.5-mile run goal, as shown in figure 3. Moreover, about 5 percent of the air marshals did not meet the performance goals for any of the component tests in quarters 2 through 4 of calendar year 2014.

To address the risks that air marshals’ declining health and fitness may pose to FAMS’s ability to carry out its mission within TSA, as well as air marshals’ injury rates, FAMS has developed the Health, Fitness, and Wellness Program, which went into effect in April 2016. According to FAMS policy, this program will include a revised fitness assessment—the Health and Fitness Assessment (HFA)—and a general health and wellness program. FAMS officials reported that air marshals are to complete the HFA on a biannual basis but will not be required to meet performance goals for any of the HFA’s four components: cardiorespiratory endurance, muscular strength, muscular endurance, and flexibility. Rather, FAMS will use the results of air marshals’ first HFA to establish a fitness baseline and to take appropriate action to improve the performance of those who do not maintain their fitness levels or show improvement. According to FAMS officials, the agency decided not to require air marshals to meet the performance goals for the HFA tests because the results of the HFA cannot reliably determine the extent to which an air marshal is physically capable of carrying out FAMS’s mission. Officials explained that FAMS had originally intended to require incumbent air marshals to meet a physical fitness standard similar to the HFA, but did not do so because of concerns raised by TSA’s Office of Human Capital and Office of the Chief Counsel. Specifically, according to FAMS officials, these offices advised FAMS on whether the proposed physical fitness standard could reliably predict an air marshal’s physical ability to carry out FAMS’s job-related mission, as well as whether FAMS could demonstrate the business necessity (mission-relatedness) of the standard. Because of these concerns, FAMS’s leadership decided instead to implement the Health, Fitness, and Wellness Program with a focus on reducing the incidence of air marshals’ injuries, reducing the number of exemptions air marshals needed to request from taking the HFA, increasing program participation, and improving air marshals’ overall health and wellness.
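To make the MRA pass-rate analysis described above concrete, the following minimal sketch (with hypothetical scores and assumed goals, not GAO’s data or FAMS’s standards) computes the share of participants meeting each component goal in a single quarter:

# A minimal sketch; the scores and goals below are hypothetical, assumed for
# illustration only -- they are not GAO's dataset or FAMS's actual standards.
def pass_rate(results):
    """results: iterable of booleans, True when the goal was met."""
    results = list(results)
    return sum(results) / len(results)

# Hypothetical quarter of MRA results: (1.5-mile run seconds, push-ups).
RUN_GOAL_SECONDS = 13 * 60 + 30   # assumed goal, illustration only
PUSHUP_GOAL = 30                  # assumed goal, illustration only

quarter = [(900, 42), (780, 35), (1020, 28), (840, 31)]

run = pass_rate(t <= RUN_GOAL_SECONDS for t, _ in quarter)
pushups = pass_rate(p >= PUSHUP_GOAL for _, p in quarter)
print(f"1.5-mile run goal met: {run:.0%}; push-up goal met: {pushups:.0%}")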
FAMS officials stated that, in addition to general improvement of air marshals’ health and fitness, a key benefit of the new program will be that air marshals will request and receive fewer exemptions because the HFA will allow air marshals to demonstrate their fitness through alternative means of testing. FAMS officials reported that, when taking the HFA, air marshals may choose one of three exercises to perform for five of the six subsets within the four components. For example, when taking the upper body subset of the muscular strength component, air marshals may choose to perform pull-ups, assisted pull-ups, or lateral pulldowns. According to FAMS officials, because multiple exercises will be available for each HFA component, FAMS will no longer grant air marshals exemptions from taking the HFA unless an injury prevents them from performing any of the HFA exercises.

FAMS has established a goal for the Health, Fitness, and Wellness Program—to provide the opportunity, resources, and education necessary to enhance mission readiness and promote workplace wellness—but it is too early to know whether the program is achieving its intended goal. FAMS and OTD officials responsible for developing this program told us that FAMS plans to collect and analyze data on air marshals’ performance on the HFA over a period of about 12 to 18 months—two or three assessment periods. These officials stated that after FAMS had collected and analyzed sufficient data and established a baseline, the agency would be better positioned to collaborate with OTD to establish performance measures for the program. In the interim, FAMS plans to monitor data such as injury rates and the results of periodic physical exams.

Given the unique operating environment of air marshals, it is vital that TSA ensure that air marshals’ training needs are identified and addressed, and that air marshals are mission ready. TSA does not systematically obtain feedback on the extent to which FAMTP courses meet incumbent air marshals’ training needs because officials state that they collect sufficient information from air marshal candidates on their training programs. However, by regularly collecting incumbent air marshals’ feedback on the recurrent training they receive in the field offices, OTD would better ensure it considers the input and experience of incumbent air marshals when assessing and refining its training programs. Also, by taking additional steps to improve the response rates for the training surveys it administers to air marshal candidates, incumbent air marshals, and their supervisors, OTD could be more reasonably assured that the feedback it receives represents the full spectrum of views held by its air marshal workforce.

FAMS has established recurrent training requirements to ensure that air marshals maintain the knowledge, skills, and abilities needed to carry out their mission. However, because FAMS processes have not ensured the timely and complete recording of training data—an ongoing challenge for FAMS—FAMS has been hindered in its ability to ensure air marshals’ compliance with training requirements. Specifying in policy who has oversight responsibility at the headquarters level for ensuring that each field office has entered air marshals’ training data in a timely manner and that headquarters personnel have entered air marshals’ exemptions into FAMIS could help FAMS better ensure its data on air marshals’ recurrent training are consistently complete and up to date.
Such a policy could also enable FAMS to more effectively determine the extent to which air marshals service-wide have met their training requirements and are mission ready. Additionally, by developing and implementing more objective and standardized methods of determining, in the course of their recurrent training, whether incumbent air marshals continue to be mission ready, FAMS could better assess their skills and also more effectively target areas for improvement.

To ensure effective evaluation of air marshal training, we recommend that the TSA Administrator direct OTD to take the following two actions:

implement a mechanism for regularly collecting and incorporating incumbent air marshals’ feedback on the training they receive from field office programs, and

take additional steps to improve the response rates of the training surveys it conducts.

To provide reasonable assurance that air marshals are complying with recurrent training requirements and have the capability to carry out FAMS’s mission, we recommend that the TSA Administrator direct FAMS to take the following three actions:

specify in policy who at the headquarters level has oversight responsibility for ensuring that field office SACs or their designees meet their responsibilities for ensuring that training completion records are entered in a timely manner,

specify in policy who at the headquarters level is responsible for ensuring that headquarters personnel enter approved air marshals’ training exemptions into FAMIS, and define the timeframe for doing so, and

develop and implement standardized methods, such as examinations and checklists, for determining whether incumbent air marshals continue to be mission ready in key skills.

We provided a draft of this report to DHS for comment. In its written comments, reproduced in appendix II, DHS concurred with the five recommendations and described actions under way or planned to address them. DHS also provided technical comments that we incorporated, as appropriate.

With regard to the first recommendation to implement a mechanism for regularly collecting and incorporating incumbent air marshals’ feedback on the training they receive from field office programs, DHS concurred and stated that TSA has developed a survey to measure the effectiveness of air marshal training curriculum, field office training personnel, and training facilities. DHS also stated that this survey will be added to the TSA On-Line Learning Center, where it can be distributed to air marshals and supervisors on a regular basis. According to DHS, TSA implemented the survey in the On-Line Learning Center in July 2016 and, beginning in October 2016, will send the survey to air marshals and supervisors after they complete a course at TSATC. TSA also plans for curriculum development and review committees to use the feedback from these surveys to improve courses offered at TSATC. These actions, if implemented effectively, should address the intent of our recommendation.

With regard to the second recommendation to take additional steps to improve the response rates of the training surveys it conducts, DHS concurred and stated that future surveys of FAMTP graduates and their supervisors will be distributed to personnel through the On-Line Learning Center. DHS stated that the capabilities of the On-Line Learning Center will provide a tracking mechanism for program managers to ensure that personnel complete and submit the survey.
According to DHS, survey reports will be compiled and sent to TSATC in a manner that maintains the anonymity of the respondent. TSA anticipates that this process will significantly improve response rates. These actions, if implemented effectively, should address the intent of our recommendation.

DHS concurred with our third and fourth recommendations that FAMS specify in policy (1) who at the headquarters level has oversight responsibility for ensuring that field office SACs or their designees meet their responsibilities for ensuring that training completion records are entered in a timely manner, and (2) who at the headquarters level is responsible for ensuring that headquarters personnel enter approved air marshals’ training exemptions into FAMIS and define the timeframe for doing so. In response to our recommendations, FAMS updated its policy on recurrent training requirements for air marshals to assign Regional Directors, who are based in headquarters, the responsibility for ensuring that field office SACs or their designees adhere to FAMS’s procedures for recording training completion. The updated policy also requires that FAMS’s Field Operations Division, Tactical Support Section, verify that FAMIS entries are made for all training exemptions within five business days of the approval of the exemptions. These actions, if implemented effectively, should address the intent of our recommendations.

With regard to the fifth recommendation to develop and implement standardized methods, such as examinations and checklists, for determining whether air marshals continue to be mission ready in key skills, DHS concurred and stated that FAMS and OTD established a joint Integrated Project Team/Development Committee, which met in June 2016 to develop an assessment process that will be used to determine air marshals’ mission readiness. According to DHS, the joint Integrated Project Team/Development Committee consisted of representatives from seven FAMS field offices and FAMS headquarters as well as instructors and instructional design specialists from TSATC. DHS stated that the Integrated Project Team is drafting recommendations and that approved readiness measures will be implemented beginning in fiscal year 2018. This action, if implemented effectively, could address the intent of our recommendation. However, it is not clear to what extent this assessment process will include standardized methods for determining whether incumbent air marshals continue to be mission ready. We will continue to monitor TSA’s efforts.

We are sending copies of this report to appropriate congressional committees, the Secretary of Homeland Security, the TSA Administrator, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report addresses the following questions: (1) How does the Transportation Security Administration (TSA) assess the training needs of air marshal candidates and incumbent air marshals, and what opportunities exist, if any, to improve this assessment? (2) To what extent does the Federal Air Marshal Service (FAMS) ensure that incumbent air marshals are mission ready?
This report is a public version of the prior sensitive report that we provided to you. TSA deemed some of the information in that report to be sensitive security information, which must be protected from public disclosure. Therefore, this report omits such information—for example, specific numbers of air marshals and specific types of training that air marshals reported should be added to FAMS’s training curriculum to address changes in air marshals’ responsibilities. Although the information provided in this report is more limited in scope in that it excludes such information, it addresses the same questions as the sensitive report, and the methodology used for both reports is the same.

To address the first objective, we reviewed TSA directives, guidance, and other relevant documentation describing TSA’s processes for developing and evaluating Federal Air Marshal Training Program (FAMTP) training curriculum to determine how TSA evaluates existing courses and develops new courses within FAMTP and other relevant training programs. We interviewed senior officials responsible for these efforts in TSA’s Office of Training and Development (OTD). We also analyzed documentation on the results of training curriculum assessments OTD conducted from May 2007 through April 2014 to identify recommendations made to improve training and the extent to which OTD implemented the recommendations. We compared OTD’s training development and evaluation processes to key principles identified in DHS guidance on training evaluation and GAO’s prior work on training and development, specifically the Guide for Assessing Strategic Training and Development Efforts in the Federal Government. We also reviewed the minutes of the quarterly teleconferences held in fiscal years 2014 through 2015—the most recent time period for which the meeting minutes were available—between Transportation Security Administration Training Center (TSATC) staff, FAMS headquarters staff, and field office training staff to determine the types of issues discussed during these meetings. Additionally, we obtained the available response rates for surveys OTD conducted of FAMTP graduates and their supervisors on the effectiveness of FAMTP courses for calendar years 2009 through 2011—the last three full years in which FAMS hired air marshals. In addition, we met with OTD officials to discuss the actions that had been taken to improve these response rates, and compared these actions to Office of Management and Budget standards and guidance for conducting surveys.

Further, we visited the TSATC in Atlantic City, New Jersey, and 7 of FAMS’s 22 field offices, which we selected, in part, to reflect a range in size (as determined by the number of air marshals assigned to the office) and geographic dispersion. At TSATC, we interviewed TSATC management and training instructors and toured the facility. At the field offices, we interviewed field office management, Supervisory Federal Air Marshals (SFAM), air marshals, and training instructors to obtain their views on the current training curriculum. The results of these interviews cannot be generalized to all field offices, but provide insight into the extent to which TSA is addressing air marshals’ training needs and ensuring their mission readiness.
To address the second objective, we assessed FAMS directives that set forth training requirements for incumbent air marshals, and analyzed air marshals training data for calendar year 2014, which is the most recent year for which training data were available, to determine the extent to which air marshals met these requirements. We interviewed senior FAMS officials to understand how FAMS uses this information to ensure that air marshals are mission ready. We compared the results of our analyses to Standards for Internal Control in the Federal Government, the DHS Learning Evaluation Guide, and GAO’s prior work on training and development. We assessed the reliability of the 2014 training data by (1) reviewing documentation on the processes for entering air marshals’ training records into the Federal Air Marshal Information System (FAMIS), (2) performing electronic testing for obvious anomalies and comparing FAMIS data to FAMIS-generated reports on training completion, and (3) interviewing knowledgeable officials about training records and exemptions entered into FAMIS. Although the data FAMS originally provided were not complete or entered in a timely manner, over the course of our audit we identified missing data that FAMS corrected in response to our inquiries. Therefore, we found the data were reliable for the purposes of our report. Additionally, as previously discussed, we interviewed TSATC training instructors and FAMS field office personnel to obtain their perspectives on FAMS methods for ensuring that air marshals are mission ready. We also reviewed the most recent Management Assessment Program inspection report completed by TSA’s Office of Inspections for all 22 field offices to identify the training-related findings. Additionally, we reviewed the FAMS Advisory Council minutes and the field offices’ focus group minutes for fiscal year 2014—the most recent full year of information available at the time of our request—to identify the training related issues that FAMS personnel raised to their leadership. Finally, we reviewed the studies that FAMS conducted or commissioned to inform its development of its physical fitness program and assessment—a component of air marshals training. We interviewed FAMS and OTD officials responsible for developing and implementing FAMS’s health, wellness, and fitness program to determine how TSA plans to measure the effectiveness of the program. We conducted this performance audit from October 2014 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Maria Strudwick (Assistant Director) and Michael C. Lenington (Analyst-in-Charge) managed this assignment. Jonathan Bachman, Claudia Becker, Juli Digate, Michele Fejfar, Imoni Hampton, Eric Hauswirth, Susan Hsu, Thomas Lombardi, and Minette Richardson made key contributions to this report.
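The electronic testing described above can be illustrated with a short sketch that scans exported training records for duplicates and late entries. The record layout, field names, and the 30-day entry window below are assumptions for illustration; FAMS policy defines the actual required time period, and FAMIS's schema is not described in this report.

```python
# A minimal sketch of electronic anomaly testing on training records:
# flag duplicate records and entries recorded after an assumed deadline.
from datetime import date, timedelta

ENTRY_WINDOW = timedelta(days=30)  # assumed deadline for recording completion

records = [
    {"marshal_id": "A100", "course": "defensive tactics",
     "completed": date(2014, 3, 2), "entered": date(2014, 3, 10)},
    {"marshal_id": "A100", "course": "defensive tactics",
     "completed": date(2014, 3, 2), "entered": date(2014, 3, 10)},  # duplicate
    {"marshal_id": "A200", "course": "marksmanship",
     "completed": date(2014, 6, 1), "entered": date(2014, 9, 1)},   # late entry
]

seen, anomalies = set(), []
for r in records:
    key = (r["marshal_id"], r["course"], r["completed"])
    if key in seen:
        anomalies.append(("duplicate", r))
    seen.add(key)
    if r["entered"] - r["completed"] > ENTRY_WINDOW:
        anomalies.append(("late entry", r))

for kind, r in anomalies:
    print(kind, r["marshal_id"], r["course"])
```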
FAMS, within TSA, is the federal entity responsible for promoting confidence in the nation's aviation system through deploying air marshals to protect U.S. air carriers, airports, passengers, and crews. GAO was asked to assess FAMS's training program for federal air marshals. This report examines (1) how TSA assesses the training needs of air marshal candidates and incumbent air marshals, and any opportunities that exist to improve this assessment, and (2) the extent to which FAMS ensures that incumbent air marshals are mission ready. GAO analyzed FAMS training data for calendar year 2014, the last year of available data, reviewed TSA, OTD, and DHS guidance and policies on FAMS's air marshal training program, interviewed TSA and FAMS headquarters officials, and visited the TSA Training Center and 7 of FAMS's 22 field offices selected based on size and geographic dispersion. The Transportation Security Administration's (TSA) Office of Training and Development (OTD) assesses air marshals' training needs using several information sources, but opportunities exist to obtain more feedback from air marshals on whether the training courses they must take meet their needs. OTD primarily assesses air marshals' training needs by holding curriculum development and review conferences composed of OTD officials, training instructors, and other subject matter experts. In assessing courses, conference participants use, among other things, the results of surveys that some air marshals complete on the effectiveness of their training. However, while OTD administers these surveys for air marshal candidates and newly graduated air marshals, it does not use them to obtain feedback from incumbent air marshals on the effectiveness of their annual recurrent training courses. Systematically gathering feedback from incumbent air marshals would better position OTD to fully assess whether the training program is meeting air marshals' needs. Additionally, among the training surveys that OTD does currently administer to air marshals, the response rates have been low. For example, among newly hired air marshals and their supervisors from 2009 through 2011—the last three full years in which the Federal Air Marshal Service (FAMS) hired air marshals—the survey response rates ranged from 16 to 38 percent. Until OTD takes steps to achieve sufficient response rates, OTD cannot be reasonably assured that the feedback it receives represents the full spectrum of views held by air marshals. FAMS relies on its annual recurrent training program to ensure incumbent air marshals' mission readiness, but additional actions could strengthen FAMS's ability to do so. First, FAMS does not have complete and timely data on the extent to which air marshals have completed their recurrent training. For example, nearly one-quarter of all training records for calendar year 2014 had not been entered into FAMS's training database within the required time period. Policies that specify who is responsible at the headquarters level for overseeing these activities could help FAMS ensure its data on air marshals' recurrent training are accurate and up to date. Second, FAMS requires air marshals to demonstrate proficiency in marksmanship by achieving a minimum score of 255 out of 300 on the practical pistol course every quarter.
However, FAMS does not assess air marshals' knowledge or performance in the remaining recurrent training courses against a similarly identified level of proficiency, such as by requiring examinations or by using checklists or other objective tools. More objective and standardized methods of determining incumbent air marshals' mission readiness, as called for by the Department of Homeland Security's (DHS) Learning Evaluation Guide, could help FAMS better and more consistently assess air marshals' skills and target areas for improvement. Additionally, in 2015 FAMS developed a health, fitness, and wellness program to improve air marshals' overall health and wellness, but it is too early to gauge the program's effectiveness. This is a public version of a sensitive report that GAO issued in June 2016. Information that TSA deems “Sensitive Security Information” has been removed. GAO recommends that OTD implement a mechanism for regularly collecting incumbent air marshals' feedback on their recurrent training, and take steps to improve the response rates of training surveys it conducts. GAO also recommends that FAMS specify in policy who at the headquarters level has oversight responsibility for ensuring that recurrent training records are entered in a timely manner, and develop and implement standardized methods to determine whether incumbent air marshals continue to be mission ready in key skills. DHS concurred with all of the recommendations.
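To illustrate the kind of objective, threshold-based check discussed above, the sketch below scores a set of skills against minimum standards. The 255-of-300 marksmanship cutoff comes from FAMS policy as described in this report; the second skill and its cutoff are hypothetical placeholders for the standardized measures GAO recommends.

```python
# Sketch of a threshold-based readiness check. Only the practical pistol
# standard (255 of 300) comes from the report; the other skill and cutoff
# are hypothetical.

STANDARDS = {
    "practical_pistol": 255,   # out of 300, per FAMS policy
    "defensive_tactics": 80,   # hypothetical checklist score out of 100
}

def mission_ready(scores: dict[str, int]) -> bool:
    """Return True only if every assessed skill meets its minimum score."""
    return all(scores.get(skill, 0) >= cutoff
               for skill, cutoff in STANDARDS.items())

print(mission_ready({"practical_pistol": 262, "defensive_tactics": 85}))  # True
print(mission_ready({"practical_pistol": 248, "defensive_tactics": 90}))  # False
```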
In February 2008, we reported that FERC had made few substantive changes to either its merger and acquisition review process or its postmerger oversight as a consequence of its new responsibilities and, as a result, does not have a strong basis for ensuring that harmful cross-subsidization does not occur. Specifically: Reviewing mergers and acquisitions. FERC's merger and acquisition review relies primarily on company disclosures and commitments not to cross-subsidize. FERC-regulated companies that are proposing to merge with or acquire a regulated company must submit a public application for FERC to review and approve. If cross-subsidies already exist or are planned, companies are required to describe how these are in the public interest by, for example, identifying how the planned cross-subsidy benefits utility ratepayers and does not harm others. FERC also requires company officials to attest that they will not engage in unapproved cross-subsidies in the future. This information becomes part of a public record that stakeholders or other interested parties, such as state regulators, consumer advocates, or others may review and comment on, and FERC may hold a public hearing on the merger. FERC officials told us that they evaluate the information in the public record for the application and do not collect evidence or conduct separate analyses of a proposed merger. On the basis of this information, FERC officials told us that they determine which, if any, existing or planned cross-subsidies to allow, then include this information in detail in the final merger or acquisition order. Between the time EPAct was enacted in 2005 and July 10, 2007––when FERC provided detailed information to us––FERC had reviewed or was in the process of reviewing 15 mergers, acquisitions, or sales of assets. FERC had approved 12 mergers, although it approved three of these with conditions––for example, requiring the merging parties to provide further evidence of provisions to protect customers. Of the remaining three applications, one application was withdrawn by the merging parties prior to FERC's decision and the other two were still pending. Postmerger oversight. FERC's postmerger oversight relies on its existing enforcement mechanisms—primarily self-reporting and a limited number of compliance audits. FERC indicates that it places great importance on self-reporting because it believes companies can actively police their own behavior through internal and external audits, and that the companies are in the best position to detect and correct both inadvertent and intentional noncompliance. FERC officials told us that they expect companies to become more vigilant in monitoring their behavior because FERC can now levy much larger fines––up to $1 million per day per violation––and that a violating company's actions in following this self-reporting policy, along with the seriousness of a potential violation, help inform FERC's decision on the appropriate penalty. Key stakeholders have raised concerns that internal company audits tend to focus on areas of highest risk to company profits and, as a result, may not focus specifically on affiliate transactions. One company official noted that the threat of large fines may “chill” companies' willingness to self-report violations. Between the enactment of EPAct––when Congress formally highlighted its concern about cross-subsidization––and our February 2008 report, no companies had self-reported any of these types of violations.
To augment self-reporting, FERC plans to conduct a limited number of compliance audits of holding companies each year, although at the time of our February 2008 report, it had not completed any audits to detect whether cross-subsidization is occurring. In 2008, FERC plans to audit 3 of the 36 companies it regulates—Exelon Corporation, Allegheny, Inc., and the Southern Company. If this rate continues, it would take FERC 12 years to audit each of these companies once, although FERC officials noted that they plan audits one year at a time and that the number of audits may change in future years. We found that FERC does not use a formal risk-based approach to plan its compliance audits––a factor that financial auditors and other experts told us is an important consideration in allocating audit resources. Instead, FERC officials plan audits based on informal discussions between FERC's Office of Enforcement, including its Division of Audits, and relevant FERC offices with related expertise. To obtain a more complete picture of risk, FERC could more actively monitor company-specific data––something it currently does not do. In addition, we found that FERC's postmerger audit reports on affiliate transactions often lack clear information––they may not fully reflect key elements such as objectives, scope, methodology, and the specific audit findings, and they sometimes omitted key information, such as the type, number, and value of affiliate transactions at the company involved, the percentage of all affiliate transactions tested, and the test results. Without this information, these audit reports are of limited use in assessing the risk that affiliate transactions pose for utility customers, shareholders, bondholders, and other stakeholders. In our February 2008 report, we recommended that the FERC Chairman develop a comprehensive, risk-based approach to planning audits of affiliate transactions to better target FERC's audit resources to highest priority needs. Specifically, we recommended that FERC monitor the financial condition of utilities, as some state regulators have found useful, by leveraging analyses done by the financial market and developing a standard set of performance indicators. In addition, we recommended that FERC develop a better means of collaborating with state regulators to leverage audit resources states have already applied to enforcement efforts and to capitalize on state regulators' unique knowledge. We also recommended that FERC develop an audit reporting approach to clearly identify the objectives, scope and methodology, and the specific findings of the audit to improve public confidence in FERC's enforcement functions and the usefulness of its audit reports. The Chairman strongly disagreed with our overall findings and the need for our recommendations; nonetheless, we maintain that implementing our recommendations would enhance the effectiveness of FERC's oversight. State utility commissions' views of their oversight capacities vary, but many states foresee a need for additional resources to respond to changes from EPAct. The survey we conducted for our February 2008 report highlighted the following concerns: Almost all states have merger approval authority, but many states expressed concern about their ability to regulate the resulting companies. All but 3 states (out of 50 responses) have authority to review and either approve or disapprove mergers, but their authorities varied.
For example, one state can only disapprove a merger; in that state, a merger proceeds if the commission takes no action to disapprove it. State regulators reported being mostly concerned about the impact of mergers on customer rates, but 25 of 45 reporting states also noted concerns that the resulting, potentially more complex company could be more difficult to regulate. In recent years, the difficulty of regulating merged companies has been cited by two state commissions––one in Montana and one in Oregon––that denied proposed mergers in their states. For example, a state commission official in Montana told us the commission denied a FERC-approved merger in July 2007 involving a Montana-regulated utility, headquartered in South Dakota, that would have been bought by an Australian holding company. Most states have authorities over affiliate transactions, but many states report auditing few transactions. Nationally, 49 states noted they have some type of affiliate transaction authority, and while some states reported that they require periodic, specialized audits of affiliate transactions, 28 of the 49 reporting states reported auditing 1 percent or fewer of these transactions over the last five years. Audit authorities vary from prohibitions against certain types of transactions to less restrictive requirements, such as allowing a transaction without prior review but retaining the authority to disallow it at a later time if it is deemed inappropriate. Only 3 states reported that affiliate transactions always needed prior commission approval. One attorney in a state utility commission noted that holding company and affiliate transactions can be very complex and time-consuming to review, and expressed concern about having enough resources to do this. Some states report not having access to holding company books and records. Although almost all states report they have access to financial books and records from utilities to review affiliate transactions, many states reported they do not have such direct access to the books and records of holding companies or their affiliated companies. While EPAct provides state regulators the ability to obtain such information, some states expressed concern that this access could require them to be extremely specific in identifying needed information, which may be difficult. Lack of direct access, experts noted, may limit the effectiveness of state commission oversight and result in harmful cross-subsidization because the states cannot link financial risks associated with affiliated companies to their regulated utility customers. All of the 49 states that responded to this survey question noted that they require utilities to provide financial reports, and 8 of these states require reports that also include the holding company or both the holding company and the affiliated companies. States foresee needing additional resources to respond to the changes from EPAct. Specifically, 22 of the 50 states that responded to our survey said that they need additional staffing or funding, or both, to respond to the changes that resulted from EPAct. Further, 6 out of 30 states raised staffing as a key challenge in overseeing utilities since the passage of EPAct, and 8 states have proposed or actually increased staffing. In conclusion, the repeal of PUHCA 1935 opened the door for needed investment in the utility industry; however, it comes at the potential cost of complicating regulation of the industry.
Further, the introduction of new types of investors and different corporate combinations––including the ownership of utilities by complex international companies, equity firms, or other investors with different incentives than providing traditional utility company services––could change the utility industry into something quite different from the industry that FERC and the states have overseen for decades. In light of these changes, we believe FERC should err on the side of a “vigilance first” approach to preventing potential cross-subsidization. As FERC and states approve mergers, the responsibility for ensuring that cross-subsidization will not occur shifts to FERC's Office of Enforcement and state commission staffs. Without a risk-based approach to guide its audit planning––the active portion of its postmerger oversight––FERC may be missing opportunities to demonstrate its commitment to ensuring that companies are not engaged in cross-subsidization at the expense of consumers and may not be using its audit resources in the most efficient and effective manner. Without reassessing its merger review and postmerger oversight, FERC may approve the formation of companies that are difficult and costly for it and states to oversee and potentially risky for consumers and the broader market. In addition, the lack of clear information in audit reports not only limits their value to stakeholders, but may undermine regulated companies' efforts to understand the nature of FERC's oversight concerns and to conduct internal audits to identify potential violations that are consistent with those conducted by FERC—key elements in improving their self-reporting. We continue to encourage the FERC Chairman to consider our recommendations. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Mark Gaffigan at (202) 512-3841 or at [email protected]. Individuals who contributed to this statement include Dan Haas, Randy Jones, Jon Ludwigson, Alison O'Neill, Anthony Padilla, and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Under the Public Utility Holding Company Act of 1935 (PUHCA 1935) and other laws, federal agencies and state commissions have traditionally regulated utilities to protect consumers from supply disruptions and unfair pricing. The Energy Policy Act of 2005 (EPAct) repealed PUHCA 1935, removing some limitations on the companies that could merge with or invest in utilities, and leaving the Federal Energy Regulatory Commission (FERC), which already regulated utilities, with primary federal responsibility for regulating them. Because of the potential for new mergers or acquisitions between utilities and companies previously restricted from investing in utilities, there has been considerable interest in whether cross-subsidization--unfairly passing on to consumers the cost of transactions between utility companies and their "affiliates"--could occur. GAO was asked to testify on its February 2008 report, Utility Oversight: Recent Changes in Law Call for Improved Vigilance by FERC (GAO-08-289), which (1) examined the extent to which FERC changed its merger review and postmerger oversight since EPAct to protect against cross-subsidization and (2) surveyed state utility commissions about their oversight. In this report, GAO recommended that FERC adopt a risk-based approach to auditing and improve its audit reports, among other things. The FERC Chairman disagreed with the need for the recommendations, but GAO maintains that implementing them would improve oversight. In its February 2008 report, GAO reported that FERC had made few substantive changes to either its merger review process or its postmerger oversight since EPAct and, as a result, does not have a strong basis for ensuring that harmful cross-subsidization does not occur. FERC officials told GAO that they plan to require merging companies to disclose any cross-subsidization and to certify in writing that they will not engage in unapproved cross-subsidization. After mergers have taken place, FERC intends to rely on its existing enforcement mechanisms--primarily companies' self-reporting noncompliance and a limited number of compliance audits--to detect potential cross-subsidization. FERC officials told GAO that they believe the threat of the large fines allowed under EPAct will encourage companies to investigate and self-report noncompliance. To augment self-reporting, FERC officials told GAO that, in 2008, they are using an informal plan to reallocate their limited audit staff to audit the affiliate transactions of 3 of the 36 holding companies it regulates. In planning these compliance audits, FERC officials told GAO that they do not formally consider companies' risk for noncompliance--a factor that financial auditors and other experts told GAO is an important consideration in allocating audit resources. Rather, they rely on informal discussions between senior FERC managers and staff. Moreover, GAO found that FERC's audit reporting approach results in audit reports that often lack a clear description of the audit objectives, scope, methodology, and findings--inhibiting their usefulness to stakeholders. GAO's survey of state utility commissions found that states' views varied on their current regulatory capacities to review utility mergers and acquisitions and oversee affiliate transactions; however, many states reported a need for additional resources, such as staff and funding, to respond to changes in oversight after the repeal of PUHCA 1935.
All but a few states have the authority to approve mergers, but many states expressed concern about their ability to regulate the resulting companies. In recent years, two state commissions denied mergers, in part because of these concerns. Most states also have some type of authority to approve, review, and audit affiliate transactions, but many states review or audit only a small percentage of the transactions; 28 of the 49 states that responded to GAO's survey question about auditing said they audited 1 percent or fewer transactions over the last five years. In addition, although almost all states reported that they had access to financial books and records from utilities to review affiliate transactions, many states reported they do not have such direct access to the books and records of holding companies or their affiliated companies. While EPAct provides state regulators the ability to obtain such information, some states expressed concern that this access could require them to be extremely specific in identifying needed information, thus potentially limiting their audit access. Finally, 22 of the 50 states that responded to GAO's survey question about resources said that they need additional staffing or funding, or both, to respond to changes that resulted from EPAct, and 8 states have proposed or actually increased staffing since EPAct was enacted.
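The risk-based audit planning GAO recommended to FERC can be sketched as a simple scoring exercise. The companies, indicator names, weights, and scores below are hypothetical; the point is only that ranking by a standard set of performance indicators directs a fixed audit capacity at the highest-risk companies first, rather than cycling through all 36 holding companies on an unranked 12-year rotation (36 companies at 3 audits per year).

```python
# A minimal sketch of risk-based audit planning, assuming a standard set of
# performance indicators of the kind GAO recommended. All names, weights,
# and scores are hypothetical.

WEIGHTS = {"affiliate_volume": 0.5, "leverage": 0.3, "prior_findings": 0.2}

companies = {
    "HoldingCo A": {"affiliate_volume": 0.9, "leverage": 0.4, "prior_findings": 1.0},
    "HoldingCo B": {"affiliate_volume": 0.2, "leverage": 0.8, "prior_findings": 0.0},
    "HoldingCo C": {"affiliate_volume": 0.6, "leverage": 0.6, "prior_findings": 0.5},
}

def risk_score(indicators: dict[str, float]) -> float:
    # Weighted sum of normalized indicators; higher means riskier.
    return sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS)

AUDITS_PER_YEAR = 3  # capacity noted in the report: 3 of 36 companies a year
ranked = sorted(companies, key=lambda c: risk_score(companies[c]), reverse=True)
print("Audit this year:", ranked[:AUDITS_PER_YEAR])
```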
The Single Audit Act is intended to, among other things, promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities; promote the efficient and effective use of audit resources; and ensure that federal departments and agencies, to the maximum extent practicable, rely upon and use single audit work. The Single Audit Act requires state and local governments and nonprofit organizations that expend $300,000 or more in federal awards in a fiscal year to have either a single audit or program-specific audit conducted. Federal awards include grants, loans, loan guarantees, property, cooperative agreements, interest subsidies, insurance, food commodities, and direct appropriations and federal cost reimbursement contracts. The Single Audit Act also requires recipients to forward an audit reporting package to the FAC for archival purposes and for distribution to each federal agency responsible for programs for which the audit report identifies a finding. The reporting package includes (1) the recipient’s financial statements and schedule of expenditures of federal awards, (2) a summary schedule of prior audit findings, including the status of all audit findings included in the prior audit’s schedule of findings and questioned costs for federal awards, (3) the auditor’s opinion on the recipient’s financial statements and schedule of expenditures of federal awards, reports on internal control and compliance with laws, regulations, and provisions of contracts or grant agreements, and (4) a schedule of findings and questioned costs. Single audits are a key control for the oversight and monitoring of recipient use of federal awards. Federal agency actions to ensure that award recipients address audit findings contained in single audit reports are a critical element in the federal government’s ability to efficiently and effectively administer federal awards. These findings can include internal control weaknesses; material noncompliance with the provisions of laws, regulations, or grant agreements; and fraud affecting a federal award. The President’s Management Agenda, Fiscal Year 2002, identifies the need to reduce improper payments as a significant element of the Administration’s initiative to improve financial performance throughout the government. Single audits can have an impact on the government’s efforts to address improper payments since many of the programs experiencing improper payments are audited as part of the over 30,000 single audits conducted annually. For example, recent estimates by the departments of Agriculture and HUD identified about $976 million and $2 billion in improper payments in food stamps and housing subsidy programs, respectively. These programs are often audited as part of a single audit. Our objectives were to determine what program managers for six large programs, two each at Education, HUD, and Transportation, do to (1) ensure that federal award recipients correct the current year and recurring findings identified in single audit reports and (2) summarize and communicate single audit results and actions taken to correct audit findings to agency management for its use in evaluating agency oversight and monitoring of recipient performance and in identifying programwide and recipient-specific problem areas needing management attention. 
We selected these agencies because they are three of the four federal agencies that provide the largest amount of federal awards to state and local governments and nonprofit organizations. OMB documents show that, in fiscal year 2001, these agencies made grants totaling $84 billion to state and local governments. We did not include the Department of Health and Human Services, the agency with the largest amount of federal awards, in this review because of our current work to evaluate the Centers for Medicare and Medicaid Services' (CMS) efforts to monitor its financial oversight to help ensure the propriety of Medicaid expenditures. That work included the review of single audit reports for fiscal year 1999 and found that the correction of audit findings and the monitoring performed by CMS and its regional offices were limited and that audit resolution activities were performed inconsistently across regions. To assess how program managers ensure that federal award recipients corrected problems discussed in single audit reports, we reviewed the Single Audit Act, OMB Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations, and the Comptroller General's Standards for Internal Control in the Federal Government to identify agency responsibilities for correcting single audit findings. This review identified the following three areas of responsibility, which represent the criteria we used in making our assessment: obtain single audit reports and distribute them to agency officials responsible for reviewing the report findings and taking actions on those findings; issue written management decisions on audit findings within 6 months of the receipt of the audit report to notify recipients of actions the federal agency considers necessary to correct the audit findings; and follow up with award recipients to ensure that corrective actions occurred. The FAC single audit database was established as a result of the Single Audit Act Amendments of 1996 and contains summary information on the auditor, the recipient and its federal programs, and the audit results. We did not independently test the reliability of the database. However, at OMB's request, the Office of the Inspector General (OIG) at the Department of Commerce reviewed the database to assist OMB, the Census Bureau, and other users in assessing the accuracy of the fiscal year 1998 audit report information in the database. We reviewed the OIG's sampling methodology, monitored the audit scope and the progress of the review, and discussed the results with OMB and OIG officials. We concluded that the database for calendar year 1999 was reliable and adequate for our sampling purposes. At each agency, we identified two large award programs and queried the FAC single audit database for calendar year 1999 single audit reports to determine the 10 grantees receiving the largest amount of funding for each program. The programs identified for review were Title I and Pell Grants at Education, the Community Development Block Grant (CDBG) entitlement and Section 8 Tenant-Based (Section 8) programs at HUD, and Capital Investment and Formula Grants (CIFG) and Highway Planning and Construction Grants (HPCG) at Transportation. For each grantee identified above, we queried the FAC single audit database to identify audit findings in the programs selected. The query identified 246 audit findings.
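The grantee-selection step described above can be sketched as a query over an export of the FAC single audit database. The file names and column names below are assumptions for illustration; they are not the FAC's actual schema.

```python
# Sketch of the selection approach: for each program, take the 10 grantees
# with the largest funding, then pull their reported audit findings.
# Column names are hypothetical placeholders for the FAC data dictionary.
import pandas as pd

awards = pd.read_csv("fac_1999_awards.csv")      # one row per grantee-program
findings = pd.read_csv("fac_1999_findings.csv")  # one row per audit finding

programs = ["Title I", "Pell Grants", "CDBG", "Section 8", "CIFG", "HPCG"]

top10 = (awards[awards["program_name"].isin(programs)]
         .sort_values("federal_expenditures", ascending=False)
         .groupby("program_name")
         .head(10))

# Pull every finding reported for the selected grantee-program pairs.
sample = findings.merge(
    top10[["grantee_ein", "program_name"]],
    on=["grantee_ein", "program_name"])
print(len(sample), "audit findings selected")  # the actual review found 246
```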
We interviewed agency officials and reviewed agency guidance to determine their procedures for ensuring that audit findings are communicated to appropriate officials and/or offices for action and assessment of recipients' corrective actions. We also provided each agency with a list of the audit reports and findings selected for review. For each finding, we requested documentation including written management decisions and evidence of agency follow-up with recipients on the corrective actions taken and the appropriateness of those actions. We interviewed agency officials and reviewed the management decisions and documentation provided on agency follow-up on recipient corrective actions. Reporting single audit results and recipient actions to correct audit findings to agency management provides management with valuable information for use in assessing program risks and identifying areas needing action. To determine whether and how the three selected agencies summarized and communicated the single audit results and actions to correct audit findings to agency management, we interviewed officials at the three agencies to determine the reports generated to inform agency officials of the single audit results. We conducted our review from October 2001 through March 2002 in accordance with generally accepted government auditing standards. Education, HUD, and Transportation had procedures in place that established responsibility for obtaining, distributing, and reviewing single audit findings and for communicating that information to appropriate officials for action. However, although required by OMB Circular A-133, agencies often did not issue written management decisions or have documentary evidence of their evaluations of and conclusions on recipient actions to correct the audit findings. If federal agencies are going to take action on single audit findings, they must first obtain the single audit reports or other documentation containing single audit findings relating to their programs and distribute this information to appropriate offices for action. The three agencies in our review had procedures for obtaining single audit reports and for distributing audit finding information to appropriate agency offices. At Education, different offices receive and distribute single audit reports or audit findings. For example, the Office of the Chief Financial Officer's (OCFO) audit resolution coordinator receives copies of single audit reports with Title I program findings from the FAC and distributes audit finding information to the Office of Elementary and Secondary Education (OESE). OESE is responsible for the overall administration of the Title I program and resolving the audit findings and following up on corrective actions. The Office of Student Financial Assistance (OSFA) administers the Pell Grant program. OSFA receives copies of the single audit reports with Pell Grant findings directly from the FAC and distributes copies of the reports to the appropriate Pell Grant program offices for action. Although the FAC provides copies of single audit reports containing HUD program audit findings to the OCFO, officials responsible for the CDBG and Section 8 programs stated that their offices did not use those reports to identify single audit reports with findings. Rather, they obtained copies of single audit reports and/or audit finding information relating to their programs from other sources, including the award recipients and a HUD database developed by its Real Estate Assessment Center (REAC).
OCFO officials noted that the office considers these procedures more efficient than having OCFO personnel review each single audit report, identify audit findings, and distribute those findings to the appropriate HUD offices. To identify CDBG findings, the Office of Community Planning and Development (CPD) tasks its 42 field offices with identifying award recipients whose single audit reports contained CDBG-related audit findings. An August 2000 CPD memorandum instructed field offices to query the FAC single audit database to identify single audit reports containing audit findings related to the CDBG program and to obtain copies of these single audit reports directly from the federal award recipients. To perform this task, the field offices use award documents and other agency reports to identify award recipients for which they have oversight responsibility. They then query the FAC single audit database to identify those recipients whose single audit reports contain findings and obtain copies of those reports directly from the recipients. For the Office of Public and Indian Housing (PIH), program managers located in 43 PIH field offices generally obtain audit finding information for the Section 8 program from HUD's REAC database. Recipients electronically submit financial and compliance information, which is excerpted from single audit reports, directly into the REAC system for REAC analysis. According to HUD officials, this database contains information including the financial statements, notes to the financial statements, the schedule of expenditures of federal awards, the type of audit opinion, an identification of audit findings, and recipient corrective action plans. Findings involving noncompliance with HUD regulations and agreements are referred to the HUD Departmental Enforcement Center for processing and follow-up. PIH officials stated that the REAC database covers about 75 percent of the Section 8 program recipients and that program managers responsible for overseeing federal award recipients not covered by REAC could query the FAC single audit database to identify other single audit reports with findings. Once program managers identify reports with findings, they obtain copies of the single audit reports directly from the recipients. PIH is in the process of developing single audit guidance that it plans to issue during the summer of 2002. At Transportation, the OIG receives single audit reports from the FAC and decides which single audit findings should be formally addressed by the Operating Administrations based on a number of factors. These factors include the dollar amount of expenditures, the number of federal award findings identified by the auditor, and the type of finding identified. Based on its decisions, the OIG sends “action” memoranda to the program field offices informing them of the single audit findings that require action and a response to the OIG. The OIG uses “informational” memoranda to inform the program field offices of single audit findings for which the OIG does not require a formal response. Although the OIG does not require a formal response on the “informational” memoranda, agency officials stated that they expect the field offices to ensure that recipients correct all findings, irrespective of the type of memorandum used to communicate the findings. An agency official noted that the OIG developed this method of addressing audit report findings because it considers many findings insignificant and follow-up by OIG officials on all of them would not be an effective use of resources.
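The OIG's triage between “action” and “informational” memoranda can be sketched as a simple rule over the factors the report names (expenditure dollars, number of findings, and finding type). The thresholds and the list of serious finding types below are assumptions; the report does not specify the OIG's actual cutoffs.

```python
# Sketch of the triage logic. Thresholds and the serious-type list are
# hypothetical illustrations, not the OIG's documented criteria.

SERIOUS_TYPES = {"questioned costs", "material weakness"}  # hypothetical
DOLLAR_THRESHOLD = 1_000_000                               # hypothetical
FINDING_COUNT_THRESHOLD = 5                                # hypothetical

def memo_type(expenditures: float, num_findings: int, finding_type: str) -> str:
    """Return 'action' (formal response required) or 'informational'."""
    if (expenditures >= DOLLAR_THRESHOLD
            or num_findings >= FINDING_COUNT_THRESHOLD
            or finding_type in SERIOUS_TYPES):
        return "action"
    return "informational"

print(memo_type(2_500_000, 1, "reporting"))      # action
print(memo_type(40_000, 2, "cash management"))   # informational
```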
Our review found that the audit files at the three agencies contained written management decisions for 75 (about 30 percent) of the 246 findings. OMB Circular A-133 requires federal agencies to issue written management decisions on the audit findings contained in single audit reports within 6 months of receiving the recipient's single audit report. The management decisions should describe the corrective actions agencies consider necessary based on their evaluation of the audit findings and corrective action plans contained in the single audit reporting package. Since federal agencies are responsible for ensuring that the recipients implement adequate corrective action, it is important for management to clearly communicate the agency's expectations and time frames for action through management decisions. The issuance of a management decision is also critical because, based on OMB Circular A-133, award recipients may consider an audit finding invalid and not warranting further action if all the following have occurred: a management decision was not issued, 2 years have passed since the audit report in which the finding occurred was submitted to the FAC, and the federal agency or pass-through entity is not currently following up with the recipient on the audit finding. As shown in table 1, the audit files reviewed contained documentation evidencing management decisions for 75 of the 246 audit findings contained in our sample audit reports. Agency officials noted several possible reasons that management decisions were not prepared and available for our review, including that the findings were insignificant and did not require further action, that follow-up with recipients was performed but not documented, that the audit report that identified the finding also indicated that the recipient had corrected the finding as of the report issuance date, and that subsequent audit reports were reviewed to determine if the finding had been corrected. The audit files generally did not contain an indication that agency officials considered any of these four factors or used them as a justification for not preparing the required management decisions. Since it is the federal agency's responsibility to ensure that corrective action implemented by the recipient will correct a finding, the agency should be on record as agreeing with the recipient's planned or completed corrective actions or pointing out other actions needed to correct the findings. In our view, none of the reasons cited justify the nonissuance of a management decision. For example, by including a finding in a single audit report, auditors are indicating that the finding is significant, since government auditing standards require auditors to report all significant findings in the report. The standards identify other means of communicating insignificant findings. Regarding the use of subsequent-year single audit reports to justify the nonissuance of a management decision, it should be noted that single audit reports must be issued no later than 9 months after the recipient's year-end. By waiting for the subsequent year's audit report, as many as 21 months could elapse between the end of the audit period for which the finding was initially reported and the receipt of the subsequent year's audit report. In our opinion, waiting for the subsequent audit report would not result in a timely notification to the recipient of the agency's position on an audit finding and the recipient's planned, in progress, or completed corrective actions.
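The timing rules discussed above lend themselves to a small worked example. The sketch below encodes the 6-month management-decision deadline and the 2-year rule under which a recipient may consider a finding invalid; the month lengths are approximated in days, and the dates are hypothetical. (The 21-month figure in the text follows from the same timelines: a further 12-month audit period plus the 9-month issuance deadline.)

```python
# Sketch of the OMB Circular A-133 timing rules discussed above. Month
# lengths are approximated in days; the example dates are hypothetical.
from datetime import date, timedelta

def decision_overdue(report_received: date, today: date) -> bool:
    # Management decision is due within 6 months of receiving the report.
    return today > report_received + timedelta(days=182)

def finding_lapsed(report_to_fac: date, decision_issued: bool,
                   following_up: bool, today: date) -> bool:
    # A recipient may treat a finding as invalid only if all three hold:
    # no decision issued, 2 years elapsed, and no ongoing follow-up.
    two_years = report_to_fac + timedelta(days=730)
    return (not decision_issued) and (not following_up) and today >= two_years

print(decision_overdue(date(2000, 3, 31), date(2000, 11, 1)))              # True
print(finding_lapsed(date(1999, 9, 30), False, False, date(2002, 1, 15)))  # True
```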
The following section provides more detailed information on the results of our review of management decisions. Our review of the audit files for the 113 Title I and Pell Grant audit findings at Education revealed that 66 of the findings had documented management decisions. Of the 47 with no written management decisions, 25 were in the Cooperative Audit Resolution and Oversight Initiative (CAROI) process, which is discussed in more detail below. When either the OESE or Pell Grant program offices receive single audit findings, special teams assess the seriousness of the audit findings to determine the amount of attention needed for resolution. According to draft Education guidance, Post Audit User Guide, which has been in effect since 1987 and has been periodically updated, the purpose of this assessment process is to promote the most efficient use of external audits to assist management in achieving program goals and discharging its fiduciary responsibilities. The teams evaluate the audit findings based on criteria established in the draft guidance, which states that audit findings may be addressed using three approaches -- full resolution, abbreviated resolution, or technical assistance. The principal criterion used in evaluating each finding and determining the resolution approach is the seriousness of the finding, that is, the monetary or program compliance issues identified or the recurring nature of the finding. Full and abbreviated resolution approaches require written notification to the recipient. The guidance states that resolution by technical assistance does not require a written management decision. However, it does require that all communication with the auditee in the resolution of an audit finding using the technical assistance approach be documented and available in the audit file. Education has also developed a process to facilitate management decisions on complex audit issues affecting multiple programs. This process, CAROI, uses a collaborative approach to resolve audit findings and their underlying causes. During the CAROI process, representatives from Education's program and OIG offices work collaboratively with state and local program managers to address complex audit findings affecting multiple programs. An agency official noted that the process may not be completed in the 6-month management decision time frame set forth in OMB Circular A-133. For those findings in our sample being addressed using the CAROI process, the 6-month requirement was not met. It should be noted, however, that an Education report stated that CAROI projects have had a positive impact in reducing recurring findings identified in statewide audits. Of the 25 single audit findings with no management decisions that are in the CAROI process, 22 relate to one recipient. Education officials told us that they are working with other federal agencies, including the Department of Justice, on fraud and other program-related issues involving this recipient. Regarding the remaining 22 findings with no written management decisions, officials stated that, depending upon the approach the review team determined appropriate for the audit finding, program staff may have followed up with recipients but not prepared a management decision. They noted that, although no record of these discussions was in the audit files, this could have been the case for at least some of the 22 audit findings.
OMB Circular A-133 requires that management decisions clearly state whether or not the federal agency sustains the audit finding, the reasons for the decision, and the expected corrective action, and that they describe any appeal process available to the recipient. Further, the Circular requires that, if the recipient has not completed corrective action as of the management decision date, the decision should give a timetable for this action. For the 66 findings with written management decisions, our review showed that the management decisions often did not contain all of the elements required by OMB Circular A-133. For example, 5 of the Title I management decisions and 25 Pell Grant management decisions did not include a timetable for follow-up on the implementation of corrective action. Further, 3 of the Pell Grant management decisions did not include the expected action to correct the findings. HUD files contained only five written management decisions for the 85 CDBG and Section 8 program audit findings we reviewed. The audit files contained three written management decisions for the 37 CDBG audit findings. Fifteen of these findings were first-time findings and 22 were recurring findings. Of the recurring findings, 16 related to one recipient. Further, only two of the 48 Section 8 findings had written management decisions. Of these findings, 16 were first-time findings and 32 were recurring findings. Eighteen of the recurring findings were for one recipient. This recipient has been identified as having multiple internal control issues related to HUD and other federal agencies, requiring coordination with the OIG and those agencies. HUD officials stated that they were continuing to work with the recipient to resolve these issues. Officials from both the Offices of Community Planning and Development and Public and Indian Housing noted that one possible reason for the lack of a written management decision was that program personnel reviewed the subsequent year's single audit reports and determined that no further action was necessary based on the status of corrective actions as cited in the report. Our review of the calendar year 2000 audit reports indicated that 27 (13 CDBG and 14 Section 8) of the 85 findings in our sample had been corrected. Therefore, this possibility did not account for most of the instances of missing management decisions. Further, as noted earlier, agencies generally receive subsequent single audit reports well after the 6-month time frame within which management decisions are required. So, at a minimum, the agency did not comply with OMB Circular A-133 timing requirements for the issuance of management decisions. Like Education's, HUD's management decisions did not include all of the information required by OMB Circular A-133. For example, two of the three CDBG program management decisions did not include a timetable for follow-up. In response to our work, HUD's Office of Community Planning and Development issued Field Guidance on Single Audit Act Requirements (CPD Field Guidance) on March 13, 2002. This guidance contains requirements outlined in OMB Circular A-133, including the requirements that management decisions clearly state whether or not the audit finding is sustained, the reasons for the decision, and the expected grantee action. If the recipient has not completed corrective action, the guidance requires that the field offices establish a timetable for follow-up. Finally, the guidance requires that management decisions describe the appeal process available to the recipient.
In issuing this guidance, CPD referred to our review as showing that more detailed guidance was needed to help ensure that CPD properly carries out its oversight responsibilities. This guidance is a positive step toward ensuring that management decisions are issued for all audit findings related to the CDBG program. According to PIH officials, they plan to issue guidance covering the process for correcting audit findings contained in single audit reports in the summer of 2002. Transportation files contained only four written management decisions for the 48 CIFG and HPCG audit findings we reviewed, all of which related to the 17 CIFG findings. Transportation guidance requires each Operating Administration to establish a system to ensure prompt responses to audit reports and implementation of audit recommendations. The guidance requires that the system provide for a complete record of actions taken on audit recommendations. Transportation assigns program managers in field offices the responsibility for preparing management decisions and following up on corrective action for those findings addressed in OIG “action” memoranda. Despite this guidance, we found few written management decisions in the audit files reviewed. The OIG issued “action” memoranda for 4 of the 17 CIFG findings. Management decisions existed for 2 of these findings, 1 of which involved questioned costs of over $300,000 and for which the single audit report noted that corrective actions had been completed. The other 2 management decisions involved audit findings for which the OIG had issued “informational” memoranda. Further, of the 31 HPCG findings, the OIG issued two “action” memoranda that addressed 10 findings. The audit files did not contain written management decisions for these findings. Our review of the management decisions to determine if they contained all OMB Circular A-133-required elements revealed that none of the four Transportation management decisions did so. For example, they did not contain information on the reason for the decision to sustain or not sustain the audit finding or a description of the appeals process. Program officials at the three agencies told us that they follow up on the implementation of corrective actions through site visits, telephone conversations, and review of subsequent single audit reports. Although the audit files contained some information relating to corrective actions, we found very little documentation identifying program or field office evaluations of and conclusions on the adequacy of the corrective actions taken by recipients. OMB Circular A-133 requires agencies to provide the recipient with a timetable for implementing corrective action and to ensure that the award recipient takes appropriate and timely corrective actions. The Comptroller General’s Standards for Internal Control in the Federal Government states that agency efforts to monitor internal controls should include policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved. The lack of documentation makes it difficult for management to ensure that program offices and award recipients are meeting their audit finding-related responsibilities in an appropriate and timely manner. Our review of the audit files for the 47 Title I and 66 Pell Grant audit findings showed that 5 Title I and 25 Pell Grant files did not contain documentation of follow-up actions. 
Education's program managers responsible for the Title I and Pell Grant programs stated that they verify that corrective action was implemented using site visits and subsequent single audit reports. Education's draft guidance requires program officials to maintain accurate records of all audit follow-up activities, including all correspondence, documentation, and analysis of the documentation. Based on our audit file review, we were unable to verify that the agency had evaluated and concluded on the adequacy of the recipient's corrective actions. Our review of 85 single audit findings for the CDBG and Section 8 programs identified documentation of follow-up for 28 of the findings. For example, the audit files contained evidence of a review of subsequent single audit reports for 14 findings and of follow-up with the recipient and determination that the audit finding was resolved for 4 findings. CPD and PIH officials advised us that program managers located in field offices are tasked with following up with recipients on audit findings contained in single audit reports. They told us that these offices used various procedures, including contacting the federal award recipients concerning the audit findings and corrective actions and reviewing the status of the audit findings in the subsequent single audit reports, to determine if the audit findings were corrected. If considered appropriate, field offices might also conduct on-site monitoring visits at the award recipients. While field office staff may have actively followed up on findings, our review of audit files provided by field office locations showed evidence of follow-up or monitoring for only 28 of the 85 findings. The March 2002 CPD Field Guidance requires each field office to maintain files that contain all audit-related communications with the CPD award recipients, including any appropriate reports from the FAC, audit reports, and, if applicable, the auditor's management letter. As noted above, PIH officials stated that they also plan to issue guidance in the summer of 2002 covering the process for correcting audit findings contained in single audits. Our review of documentation provided by Transportation for the 17 CIFG and 31 HPCG audit findings revealed little evidence of follow-up activity in the audit files. Although these files contained some information relating to corrective actions, they generally did not contain documentation identifying agency evaluations of and conclusions on the adequacy of the corrective actions taken by recipients. Without documentation that corrective action is appropriate, timely, and implemented, management cannot be sure that program offices and award recipients are meeting their audit finding-related responsibilities. During discussions with field office program managers, we determined that follow-up activities vary widely. For example, personnel in one field office told us that the office follows up with the recipient to ensure that corrective action has been implemented and that follow-up is tracked and documented using an automated system. Other field office program managers told us that they review the subsequent year's single audit report to determine if the deficiency has been corrected and may verify that corrective action has been implemented during site visits to the recipient. However, the audit files reviewed did not contain evidence of agency evaluations of or conclusions on the adequacy of recipient actions to correct audit findings.
Audit follow-up guidance issued by the Office of the Secretary in 1989 requires each Operating Administration to establish a system to ensure prompt responses to audit reports and the implementation of audit recommendations and further states that the system must be capable of reporting in a timely and uniform manner in order to meet information and reporting requirements. Transportation’s current guidance, which it issued in March 2000, makes no mention of several OMB requirements included in earlier agency guidance, including the contents of management decisions, timely responses to audit reports, follow-up procedures, and the maintenance of records of follow-up actions.

Discussions with officials at the three agencies indicated that none of the program offices with management decision preparation and corrective action responsibilities reported single audit results or recipient actions to correct single audit findings to agency management. Although neither the Single Audit Act nor OMB Circular A-133 requires this reporting, the Comptroller General’s Standards for Internal Control in the Federal Government note that agency officials, program managers, and others responsible for managing and controlling program operations need relevant, reliable, and timely information to make operating decisions, monitor performance, and allocate resources. Discussions with officials at each of the three agencies revealed that, even when program or other offices have information on single audit results and recipient actions to correct single audit findings, this information is not communicated to agency management for review, analysis, and possible action. Although officials at Education’s OCFO told us that their audit resolution tracking system was capable of reporting on the status of single audit findings, no reporting to Education management occurred. According to an OCFO official at HUD, the various program offices within HUD do not prepare reports on the status of audit findings contained in single audit reports. At Transportation, the OIG reports unresolved and incompletely corrected single audit findings in its semiannual report to the Congress. However, the report does not include information on all single audit findings, since the OIG only tracks findings for which it issues “action” memoranda, and the report contains only general information and no specific details on the nature and extent of single audit findings.

Information for such management reporting can come from many sources, including agency analyses of single audit findings and agency databases, such as HUD’s REAC database. Another valuable source of information is the FAC single audit database. This database consists of information obtained from a data collection form that recipients send to the FAC as part of their single audit reporting package. It contains summary information on the auditor, the recipient and its federal programs, and audit results. The database contains about 4 years of information on over 30,000 annual single audit reports. The various data query options available provide potential users, including program managers, auditors, and other interested parties, with significant amounts of readily available information on grant recipient financial management and internal control systems and on compliance with federal laws and regulations.
To determine the types and frequency of audit findings at the six programs in the three agencies included in our review, we queried the FAC single audit database and reviewed the sample single audit reports to determine if the grantees in our selection had similar types of audit findings. Our query showed that similar audit findings were reported for grantees in each of the programs. For example, 33 of the 66 audit findings we reviewed for the Pell Grant program were attributable to grantees’ noncompliance with special tests and provisions applicable to the program. These findings typically involved situations where colleges or universities were unable to provide documentation to show that students receiving federal aid attended class. For the Title I program, 11 of the 47 audit findings reviewed were attributable to grantees’ noncompliance with allowable costs provisions specified in the grant. Further, our query showed that 16 of the 37 audit findings for recipients of HUD’s CDBG program were attributable to noncompliance with the grants’ reporting requirements and that 18 of the 48 HUD Section 8 program audit findings were attributable to grantees’ noncompliance with the special tests and provisions requirements of the Section 8 grants.

We also queried the FAC single audit database to determine if any of the programs selected for review had recurring types of audit findings. We found several instances in which single audit reports contained types of audit findings that were repeated in 3 or more consecutive years. For example, 4 of the 10 Education Pell Grant recipient reports identified eligibility findings that repeated in 3 or more consecutive years. For the Title I program, 4 grantees had subrecipient monitoring findings that repeated in 3 or more consecutive years. At HUD, a review of the database and single audit reports showed that 15 of the 37 CDBG audit findings were not corrected over a period of 3 successive years. Twelve of these 15 recurring audit findings occurred at one recipient. The remaining 3 recurring audit findings occurred at three other recipients. In addition, CDBG and Section 8 grants also had recipients with audit findings attributable to reporting, allowable costs, and eligibility that were repeated in 3 or more consecutive years. Transportation recipients selected for review had cash management, subrecipient monitoring, allowable costs, and equipment and real property management findings that were repeated in 3 or more years at individual recipients.

This type of information could be a valuable tool in improving grants management by helping management evaluate agency oversight and monitoring activities and identify problem areas. It could also assist in setting priorities for actions needed to correct program problems. In addition, it can provide agencies with information needed to help them carry out their responsibilities under the initiative in The President’s Management Agenda, Fiscal Year 2002, to reduce improper payments in federal programs.

The first step in an agency’s efforts to address single audit findings is obtaining single audit reports and distributing them to agency officials responsible for reviewing the report findings and taking actions on those findings. Each of the three agencies in our review had procedures for communicating audit reports and/or audit finding information to program or field offices for action. OMB Circular A-133 requires agencies to prepare written management decisions on audit findings contained in single audit reports.
Our review of the audit files at the three agencies found that they issued written management decisions for only 75 of the 246 audit findings contained in the single audit reports included in our review. The agencies noted several reasons for not preparing written management decisions, including that (1) the audit findings were considered insignificant or not serious, (2) follow-up with recipients was performed but not documented, (3) the single audit report stated that the recipient had corrected the finding prior to the report’s issuance, and (4) the subsequent year’s single audit report indicated that the recipient had corrected the finding. In our view, none of these reasons justifies the nonissuance of a management decision. Further, the audit files generally did not contain any evidence that agency officials considered these factors or otherwise considered the preparation of management decisions.

Education, HUD, and Transportation do not adequately document their evaluations of and conclusions on the actions taken by recipients to correct single audit findings. While the audit files contained copies of recipient documents and other records, they generally did not contain agency evaluations of or conclusions on the adequacy of the recipient actions cited in those records. This documentation is critical because each agency relies heavily on program, regional, or field offices to ensure that corrective actions occur and none requires reporting on the corrective action status of all findings contained in single audit reports. Therefore, requiring documentation can help ensure that these offices perform their responsibilities in ensuring that recipients take all necessary corrective actions.

Through discussions with agency officials, we determined that none of the agencies reports single audit results or the status of single audit findings and implementation of actions to correct deficiencies to agency management. This reporting can strengthen accountability and oversight by providing management with information useful in the analyses of both programwide problems and recurring problems at specific recipients. Further, because many federal programs that are subject to single audits also experience improper payments, this reporting can be useful to agency management in addressing the requirements established in The President’s Management Agenda, Fiscal Year 2002, for reducing such payments.

To ensure that recipients correct the weaknesses identified in single audit reports, we recommend that the Secretaries of Education, HUD, and Transportation each ensure that their agency has established and follows guidance that addresses the OMB Circular A-133 requirements for all agency programs whose awards are subject to the Single Audit Act. This guidance should clearly define the roles and responsibilities of each agency unit in ensuring appropriate and timely actions on single audit findings, including (1) preparing and issuing management decisions that clearly communicate the results of agency analyses of single audit findings and the adequacy of corrective actions implemented or planned by the recipient, (2) performing follow-up procedures to ensure that the recipient implemented adequate corrective action, and (3) documenting the results of evaluations of and conclusions on recipients’ actions to correct audit findings.
We also recommend that the Secretary of each of the three agencies implement policies and procedures for reporting information to agency management on (1) the types and causes of findings identified in single audit reports and (2) the status of corrective actions.

Education agreed with the thrust of the report’s findings and recommendations. Its comments (reprinted in appendix I) noted that it is important to ensure that recipients correct the weaknesses identified in single audit reports and that the department takes the necessary steps to ensure the implementation of single audit guidance as required under OMB Circular A-133. An attachment to the comments, which is not reprinted, provided several clarification points and suggested additions to the report, which we considered and included in the report, as appropriate.

HUD agreed with the report’s findings and recommendations. Its comments (reprinted in appendix II) described actions that HUD is taking to improve its oversight and use of single audits to strengthen its program compliance and performance. They also contained several minor technical or editorial revisions that we considered and included in the report, as appropriate.

The Department of Transportation’s comments (reprinted in appendix III) raised several issues about the scope of our audit work, the conclusions reached, and the recommendations made. Transportation questioned our audit scope and suggested that we should have conducted independent field testing to determine the extent and effectiveness of recipient actions taken. Our objective was to determine what agencies do to ensure that recipients take timely and appropriate action, not to independently reperform the steps that program and other offices, as applicable, would need to take to evaluate recipient corrective actions. To accomplish our objective, we examined agency audit files to determine the extent to which those files contained evidence of agency actions to ensure that recipients had taken appropriate and timely corrective actions on all audit findings. As we noted in the report, the audit files generally did not contain evidence of these actions either through written management decisions or through documentary evidence of agency evaluations of or conclusions on recipient corrective actions.

Transportation noted that not all single audit findings are useful or meaningful and that our report did not recognize this point. Two points are relevant here. First, the Comptroller General’s Government Auditing Standards, which auditors performing single audits are required to follow, requires auditors to “report the significant audit findings developed in response to each audit objective.” The standards also note that audit findings not reported because of their insignificance should be communicated separately to the auditee. The auditors included all of the findings discussed in our report in their single audit reports. Our objective was not to evaluate the usefulness of audit findings or whether auditors made the right determination as to significance in including the 48 findings in the reports covered by our review. Regarding the second point, the “Management Decisions – Transportation” section of the report that we provided for comment identified management decision information for findings addressed by both “action” and “informational” memoranda. In using these different memoranda, Transportation distinguished between serious or significant findings and other types of findings.
We did not separately judge or evaluate the decision to use one or the other type of memorandum. However, OMB Circular A-133 does not distinguish between serious and other types of audit findings. It clearly states that actions are required on all audit findings.

Transportation’s comments also discussed the agency’s process for reviewing and tracking audit findings. They noted that the agency tracks significant findings in the joint OIG/management tracking system until documentation is provided that action has been completed and stated that less significant findings are tracked locally. The draft report we provided for comment generally contained this same information and discussed the types of memoranda the agency used to communicate single audit findings to appropriate agency offices. We do not take issue with the agency’s process. However, as our report states, our review of audit files, regardless of whether the agency handled the audit findings with “action” or “informational” memoranda, found that the files generally did not contain documentation identifying agency evaluations of and conclusions on the adequacy of the corrective actions taken by recipients.

Transportation stated that our report recommends that each department create a new system or systems for communicating audit findings to top management. While our report recommends reporting to top management, it also notes that information for such reporting can come from such sources as agency analyses of single audit findings, agency databases, and the FAC single audit database. We do not recommend or suggest that agencies develop new systems. Agencies such as Transportation that use a joint OIG/management tracking system for significant findings should use information obtained from that or other existing systems to summarize single audit results and the status of recipient corrective actions and communicate that information to top management.

The final agency comment notes that documentation concerns alone are insufficient to demonstrate that agencies are ineffective at ensuring that grantees achieve the changes recommended by single audits to safeguard federal funds. Our report does not make the point that agencies are ineffective if they do not maintain appropriate documentation. Clearly, recipients can take timely and appropriate action to correct audit findings without regard to federal agency documentation that such action occurred. However, absent documentation that timely and appropriate actions occurred, agency management would have no basis for concluding that agency follow-up with recipients occurred or that recipient corrective actions, if any, were timely and appropriate.

We are sending copies of this report to the Ranking Minority Member, Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform; the Chairman and Ranking Minority Member, Senate Committee on Appropriations; the Chairman and Ranking Minority Member, House Committee on Appropriations; the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member, Senate Budget Committee; and the Chairman and Ranking Minority Member, House Budget Committee. We are also sending copies to the Director of the Office of Management and Budget and agency CFOs and IGs. Copies of this report will be made available to others upon request.
This report will also be available on GAO’s home page (http://www.gao.gov). Please call me at (213) 830-1065 or Tom Broderick, Assistant Director, at (202) 512-8705 if you or your staff have any questions about the information in this report. Key contributors to this report were Marian Cebula, Cary Chappell, Mary Ellen Chervenic, Perry Datwyler, Taya Tasse, and Jack Warner.
In examining the efforts of the Departments of Education, Housing and Urban Development, and Transportation to ensure that recipients corrected single audit report findings, GAO found that each agency had procedures for obtaining and distributing the audit reports to appropriate officials for action. However, they often did not issue the required written management decisions or have documentary evidence of their evaluations of and conclusions on recipients' actions to correct the audit findings. In addition, program managers did not summarize and communicate information on single audit results and recipient actions to correct audit findings to agency management.
Breast-conservation therapy involves a number of physician decisions not required for mastectomy, including the selection of patients for breast conservation, the amount of tissue to be removed from the area surrounding the tumor, the details of administering radiation, and so forth. (See Sacks and Baum, 1993; Winchester and Cox, 1992; Harris et al., 1990; NIH, 1991.) And since breast-conservation therapy involves radiation, its implementation would logically vary depending upon the availability of appropriate radiation equipment and expertise in operating that equipment. Breast-conservation therapy also requires “careful long-term breast monitoring” in order to identify and treat local recurrences in the breast that was subjected to lumpectomy (NIH, 1991). All these treatment-implementation factors can potentially affect breast-conservation patients’ survival—and may not be the same in randomized studies and in medical practice. At a minimum, the typical treatments given in day-to-day medical practice could fall short of the presumably consistent and high-quality treatments provided by a single prestigious research center, such as the National Cancer Institute (NCI).

Some randomized studies are conducted at single centers, while others are conducted at diverse sites (that is, multiple centers). To more closely approximate day-to-day medical practice, multicenter studies have, in some instances, intentionally involved “community surgeons.” For this reason—and also because the treatments given in multicenter studies may vary from one center to another—multicenter studies’ results may more closely approximate results in medical practice than the results of single-center studies at prestigious institutions. But unlike medical practice, both single-center and multicenter studies stipulate that participating physicians follow a set of prespecified procedures. The question remains, then, whether breast-conservation therapy has produced results similar to those of mastectomy in day-to-day medical practice.

Randomized clinical studies are the “gold standard” of medical research. Random assignment essentially equates patients in the two treatment groups. Because the two groups should not differ on variables related to cancer survival, their outcomes can be directly compared, and any difference in survival can be attributed to the difference in treatment. In contrast, the statistical analysis of cases from a medical practice database represents a potential “window” on how well breast-conservation therapy has, in fact, worked in community medical practice. But the results of such analyses may be less conclusive because of their vulnerability to hidden selection bias. (See Byar, 1980; Office of Technology Assessment, 1994.) Briefly, in day-to-day medical practice, patients and physicians freely choose between treatments; a database analyst must, therefore, attempt to control for the potentially differing characteristics of patients who received breast-conservation therapy and those who received mastectomy. In this report, we have made all possible efforts to minimize the impact of selection bias, as described below.

The analyses presented here are based on a unique combination of meta-analysis (to summarize randomized studies’ results), statistical analysis of records from a medical practice database, and cross design comparison of results. To our knowledge, this is the first time such an approach has been used in the area of breast cancer treatment.
In all analyses presented here, breast-conservation therapy is defined as including lumpectomy, nodal dissection, and radiation. With respect to time frame, the randomized studies enrolled and treated patients from 1972 to 1989, and the medical practice cases selected for this analysis were diagnosed from 1983 to 1985. Because of limitations in the medical practice database (discussed in appendix I), all our analyses use the outcome criterion of 5-year survival and examine node-negative patients only. The medical practice data were drawn from the National Cancer Institute’s Surveillance, Epidemiology, and End Results (SEER) database. SEER archives records for almost all cancer patients residing in five states—Connecticut, Hawaii, Iowa, New Mexico, and Utah—and four metropolitan areas—Atlanta, Detroit, San Francisco-Oakland, and Seattle-Puget Sound. (See Hankey et al., 1992.)

Our analysis consisted of three major steps. In step 1, we performed a meta-analysis to summarize randomized studies’ results and obtain summary figures that can be compared to medical practice results. We conducted meta-analyses separately for the single-center studies and for the more generalizable multicenter studies to determine if similarity of survival following breast-conservation therapy and mastectomy holds for both kinds of studies.

In step 2, we obtained information on the survival of breast-conservation and mastectomy patients in day-to-day medical practice. Specifically, from the SEER database, we drew records for a relatively homogeneous set of patients who, on the basis of several characteristics, were comparable to those enrolled in randomized studies. For this group of SEER patients, we conducted an analysis of survival following breast-conservation therapy and mastectomy. SEER results were adjusted for tumor size and several other variables so that patients who had received breast-conservation therapy would be “matched” to those who had received mastectomy. The matching was intended to minimize the effects of differing characteristics of patients who received breast-conservation therapy and mastectomy. In addition, a sensitivity analysis was performed to check for selection bias on life-threatening factors unrelated to cancer (such as heart disease).

In step 3, we compared (1) the summary results for the single-center and multicenter randomized studies to (2) the results of our analysis of cases selected from the SEER medical practice data. We also considered the logic of our analyses and, in particular, whether the resulting evidence was sufficient to conclude that—in day-to-day medical practice—breast-conservation therapy has been followed by survival similar to that observed for mastectomy. Throughout step 3, we drew on the principles of “cross design synthesis.”

In this report, we use the term “similar” when the observed difference between the survival rates (1) is not statistically significant and (2) has an absolute value of less than 1.5 percentage points. Conversely, when a comparison of survival rates shows a difference of 1.5 percentage points or larger—and that difference is also statistically significant—we state that one rate is higher (or lower) than the other.

Step 1 (the analysis of randomized studies) began with the identification of relevant single-center and multicenter studies through bibliographic searches and a survey of U.S. breast cancer researchers.
Our inclusion criteria were as follows: randomization of enrolled patients to alternative treatments—breast-conservation therapy or mastectomy; breast-conservation therapy that included lumpectomy, nodal dissection, and radiation; no confounding treatments (such as the administration of an additional therapy to one treatment group); availability of 5-year survival rates by treatment group among node-negative patients (either previously published in a scholarly research journal or provided at our request); and publication in English (if a non-U.S. study). Six studies—three single-center and three multicenter studies—met these criteria. (See table 1.) Almost 2,500 node-negative breast cancer patients were enrolled and treated in these randomized studies.

The treatment effect—that is, the effect of breast-conservation therapy relative to mastectomy—is represented by a comparison of survival following breast-conservation therapy to survival following mastectomy. (See table 2.) We measured the treatment effect in two ways: by subtracting the 5-year survival rate for mastectomy patients from the 5-year survival rate for breast-conservation patients to determine the difference between the rates, and by calculating the odds ratio (dividing the odds of surviving with breast-conservation therapy by the odds of surviving with mastectomy).

As indicated in table 2, the breast-conservation and mastectomy treatment groups experienced similar survival rates in each of the studies, and the odds ratios are close to 1 (the point of equivalence). The confidence intervals for the odds ratios all overlap 1, indicating no statistically significant difference in survival odds for the two treatments. However, the confidence intervals surrounding these estimates are quite broad, indicating a lack of precision in the individual-study estimates. (The U.S.-NSABP figures in table 2 are taken from recalculations published by an NCI contractor in March 1994. The recalculations were published following charges of fraudulent data collection at one U.S.-NSABP center; they exclude the data from that center.)

A meta-analysis combining the results for node-negative patients across studies gives more precise estimates of the treatment effect. Table 3 shows meta-analysis results summarizing the treatment effect for single-center studies, multicenter studies, and both types of studies taken together. In addition, table 3 shows meta-analysis results calculated in two ways: (1) including the U.S.-NSABP recalculations published in March 1994 and (2) omitting U.S.-NSABP results entirely. Similar rates of 5-year survival characterized the breast-conservation therapy and mastectomy groups—not only in single-center studies (93.7 percent for breast-conservation patients and 93.7 percent for mastectomy patients), but also in multicenter studies, which may more closely approximate medical practice (88 percent for breast conservation and 88 percent for mastectomy, omitting U.S.-NSABP). Again, the odds ratios are close to 1, and the confidence intervals all overlap 1, indicating no statistically significant difference for any group of randomized studies.

Finally, referring again to tables 2 and 3, the 5-year survival rates appear to be higher in single-center studies than in multicenter studies. This could be because of more effective treatments in single-center studies, varying tumor-size limits across the studies, or hidden cross-study differences in patient prognoses prior to treatment. (Step 3 presents more precise comparisons of combined-treatments survival rates.)
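To make these calculations concrete, the sketch below shows the form of a per-study odds ratio and of the Mantel-Haenszel common odds ratio used to combine studies (appendix I describes the actual combining procedures). The survival counts are hypothetical placeholders, not data from the six studies, and the sketch omits the confidence-interval calculations.

    # Minimal sketch of the odds-ratio calculations described above; the
    # survival counts are hypothetical, not data from the randomized studies.
    studies = [
        # (survived_bc, died_bc, survived_mast, died_mast)
        (187, 13, 185, 15),
        (240, 22, 238, 24),
        (310, 40, 305, 45),
    ]

    def odds_ratio(s_bc, d_bc, s_m, d_m):
        # Odds of surviving with breast conservation divided by the odds of
        # surviving with mastectomy; 1 is the point of equivalence.
        return (s_bc / d_bc) / (s_m / d_m)

    for study in studies:
        print(round(odds_ratio(*study), 2))

    # Mantel-Haenszel common odds ratio: pools the per-study tables, giving
    # more weight to larger studies.
    num = sum(s * d / (s + b + m + d) for s, b, m, d in studies)
    den = sum(b * m / (s + b + m + d) for s, b, m, d in studies)
    print(round(num / den, 2))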
Because the purpose of this report is to determine whether the treatment effect in day-to-day medical practice corresponds to the treatment effects observed in the single-center and multicenter studies, we would ideally “compare like with like.” Therefore, step 2 (analysis of the medical practice data) began with the selection of SEER patients who, on the basis of their characteristics, would have been covered by randomized studies. Table 4 shows the specific criteria we used in selecting SEER cases; the resulting SEER dataset included 5,326 cases that we believe are at least roughly comparable to the participants in randomized studies. (Appendix I assesses the kinds of patients who participated in randomized studies and discusses SEER cases lost to follow-up.)

As described below, our statistical analysis of the selected SEER cases used “propensity-score” adjustments (Rosenbaum and Rubin, 1984) that essentially “matched” the kinds of patients who received breast-conservation therapy and mastectomy on demographic characteristics and tumor size. Using these adjustments, we found that, on average, similar patient survival followed the two treatments.

To achieve matched groups of patients for the two treatments, the 5,326 SEER cases were first divided into five quintiles, as shown in table 5. Patients were assigned to these quintiles based on their propensity scores, which were calculated to indicate each patient’s likelihood of receiving breast-conservation therapy. Patients in the first quintile shown in table 5 have very low propensity scores; that is, they are the kinds of patients who were quite unlikely to receive breast-conservation therapy. (An example of a patient with an extremely low propensity score would be a woman in her sixties, living in Iowa, diagnosed in 1983—the earliest year examined here—with a tumor sized 3 to 4 cm.) By contrast, patients assigned to each successive quintile were more likely to receive breast-conservation therapy. (An example of a patient with a relatively high propensity score would be under 40 years old, non-Asian, living in the San Francisco-Oakland or the Seattle-Puget Sound area and diagnosed in 1985—the most recent year examined—with a very small tumor.)

In table 5, 5-year survival estimates are shown separately for breast-conservation patients and for mastectomy patients in each quintile. Within each quintile, patients are homogeneous, and the survival rates for the two treatments represent an estimate of the treatment effect for that quintile. The bottom rows of table 5 show the overall survival rates used to calculate the treatment effect for all selected SEER cases taken together. These summary rates, which are termed “adjusted across quintiles,” are clearly similar to each other: 86.3 percent for breast-conservation therapy and 86.9 percent for mastectomy. The adjusted breast-conservation rate (86.3 percent) was calculated by combining the five separate quintile survival rates for breast-conservation patients—giving each of the five rates an equal weight of one-fifth. The adjusted mastectomy rate (86.9 percent) was calculated using analogous procedures. Thus, the adjusted survival rates are based on “matched” treatment groups; that is, the kinds of patients who were unlikely to receive breast-conservation therapy contribute equally to the breast-conservation and the mastectomy survival estimates—as do the kinds of patients who were much more likely to receive breast-conservation therapy.
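The quintile adjustment just described can be summarized in a short sketch. The records and scores below are hypothetical; the sketch assumes that propensity scores have already been estimated (for example, by regressing treatment received on tumor size and demographic characteristics) and that every quintile contains patients of both treatment types.

    # Minimal sketch of the propensity-score quintile adjustment described
    # above. Each record is (propensity_score, received_bc, survived_5yr);
    # all values are hypothetical.
    def adjusted_rates(records):
        ordered = sorted(records)               # order patients by propensity score
        size = len(ordered) // 5                # quintile size (remainder ignored)
        quintiles = [ordered[i * size:(i + 1) * size] for i in range(5)]
        bc_rates, mast_rates = [], []
        for q in quintiles:
            bc = [r for r in q if r[1]]         # breast-conservation patients
            mast = [r for r in q if not r[1]]   # mastectomy patients
            bc_rates.append(sum(r[2] for r in bc) / len(bc))
            mast_rates.append(sum(r[2] for r in mast) / len(mast))
        # Each quintile receives an equal weight of one-fifth, so the kinds of
        # patients unlikely to receive breast conservation contribute equally
        # to both adjusted survival rates.
        return sum(bc_rates) / 5, sum(mast_rates) / 5

Rates produced in this way correspond to the figures labeled “adjusted across quintiles” in table 5.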
In this way, selection bias on measured variables was minimized. As shown in table 6, the difference between the adjusted 5-year survival estimates for breast-conservation and mastectomy patients is just six-tenths of a percentage point, the odds ratio is relatively close to 1, and the confidence interval overlaps 1, indicating no statistically significant difference. Thus, on average, the two treatments appear to produce similar results in day-to-day medical practice. However, referring again to table 5, the results shown for quintile 3 do not meet our criteria for use of the term “similar” because the observed (nonsignificant) difference between the survival rates is greater than 1.5 percentage points. According to our criteria, this nonsignificant pattern should be regarded as inconclusive.

The propensity-score adjustments were intended to minimize selection bias on measured variables, such as tumor size and demographic characteristics. However, noncancer-related life-threatening illnesses or conditions, such as serious heart disease, were not measured in the SEER data and therefore could not be included in the propensity score. Such illnesses or conditions might at once influence treatment selection and limit 5-year survival—and could represent a form of selection bias not accounted for by the propensity scores. SEER data do, however, include codes for cause of death. Therefore, it was possible to check for selection bias on illnesses and conditions not related to cancer in the following way: We performed a sensitivity analysis in which we reproduced table 5 omitting patients who were coded as having died of illnesses and conditions unrelated to cancer within the 5-year interval.

As indicated in table 7, with those patients omitted, the difference in survival following breast-conservation therapy and mastectomy is, on average, again within 1.5 percentage points of zero, and it is not statistically significant. At the same time, however, the breast-conservation and mastectomy survival rates within each of the first three quintiles fall short of our criteria for similarity; specifically, although the differences between the breast-conservation and mastectomy survival rates for these quintiles are not statistically significant, each is slightly larger than 1.5 percentage points. According to our criteria, the separate results for quintiles 1 through 3 are inconclusive. Yet when results for these quintiles are considered together—and compared to the results for quintiles 4 and 5—there are two potential implications: (1) breast-conservation therapy may not have been quite as effective as mastectomy for some of the patients who were less likely to receive it—such as those who resided in “low-lumpectomy” areas (in which breast-conservation therapy was relatively uncommon); and (2) breast conservation has been at least as effective as mastectomy for those who were most likely to receive it. There are various possible explanations for this nonsignificant pattern, based on the different components of the propensity score. (See appendix I.) However, at the present time, exploratory analyses would be difficult, at best, because within the rather homogeneous group of patients examined in this report, there is a relatively small number of breast-conservation patients (1,072), and only about one-third of them (340) fall into quintiles 1 through 3.

Step 3 consists of cross design comparisons and a consideration of the evidence.
An informal comparison of the summary results for step 1 and step 2 suggests that the average treatment effect estimated in the statistical analysis of selected SEER cases is similar to the effects observed in the single-center and multicenter randomized studies. The more precise comparisons in tables 8 and 9 show that, quantitatively, this is indeed the case. But do these data constitute sufficient evidence to conclude that the effectiveness of breast-conservation therapy in day-to-day medical practice really is, at least on average, similar to its effectiveness in randomized studies? To address this issue, we considered (1) the potential differences distinguishing the SEER analysis from single-center and multicenter randomized studies (including the potential for hidden selection bias in the SEER analysis) and (2) the impact that these potential differences might have on the treatment effects we observed. We then used an additional type of cross design comparison as a validity check.

Three potential cross design differences could affect comparisons of the treatment effect estimated for the SEER medical practice data to the treatment effects observed in single-center and multicenter randomized studies. These are (1) potential differences in actual treatment effectiveness (SEER versus single-center and multicenter studies); (2) potential differences in patients (again, SEER versus single-center and multicenter studies), which might be related to differences in treatment effectiveness; and (3) lack of randomization in the SEER data versus randomization in the single-center and multicenter studies—which could lead to differences in the estimates of treatment effectiveness.

Each of these potential differences could affect the comparison of treatment effects (SEER versus single-center and multicenter studies) in the following ways. If there are real differences in treatment effectiveness (for example, if breast-conservation therapy is less effective than mastectomy in day-to-day medical practice), this would affect the comparison of effects—SEER versus randomized studies. (This is, in fact, the hypothesis we have sought to test.) If there are differences in patients—again, SEER cases versus randomized studies—this also could affect the comparison of effects, but only if breast-conservation therapy is, in fact, more or less effective for the particular kinds of patients who were included in the SEER analysis than for the kinds of patients included in the randomized studies. And the lack of randomization in the SEER data could affect our estimate of the treatment effect in day-to-day medical practice if (1) the kinds of SEER patients who were selected for one treatment had better prognoses than those selected for the other treatment and (2) this was not corrected as part of our analysis.

In the foregoing analyses, our intent was to test whether the effectiveness of breast-conservation therapy relative to mastectomy was indeed the same in day-to-day medical practice as in single-center and multicenter randomized studies. In comparing the effect of nominally identical treatments across designs, our goal was to identify the first type of difference listed above. We therefore attempted to minimize the influence of each of the other two potential differences. With respect to differences in patients, we selected SEER patients that were at least roughly comparable to those treated in the randomized studies.
With respect to selection bias, the fact that we began with a homogeneous group of SEER patients (node-negative, tumors 4 cm or less, age 70 or younger) argues against substantial amounts of bias. We used the propensity-score method to minimize bias on tumor size and on other measured variables. We also conducted a sensitivity analysis to check for selection bias on life-threatening diseases or conditions other than cancer (for example, heart disease)—and found none. Nevertheless, we realize that despite such efforts, some patient differences or some degree of hidden selection bias can persist.

The similarity of the average treatment effect observed for the SEER medical practice data to the effects observed for the randomized studies (that is, the results shown in tables 8 and 9) argues that none of the potential differences listed above had a major impact. The most parsimonious interpretation of the data presented in tables 8 and 9 is that breast-conservation therapy is, on average, similarly effective to mastectomy in day-to-day medical practice. Logically, however, it is also possible that if two of the three potential cross design differences occurred simultaneously, they could “balance each other out” to produce a false impression of similar treatment effects across designs. Of particular relevance is the possibility that hidden selection bias in the SEER data analysis (specifically, a hidden bias toward selecting better-prognosis patients for breast conservation) could “counterbalance” treatment differences (specifically, less effective breast-conservation therapy in medical practice)—and thus create an impression of similar treatment effects across study designs.

We reasoned that an additional indication of the relative effectiveness of treatments across designs would be afforded by a comparison of (1) the combined-treatments survival rate for the SEER analysis to (2) the corresponding rates for single-center and multicenter studies. Logically, the SEER combined-treatments survival rate is not affected by internal selection bias. Thus, if the SEER rate proved to be similar to the corresponding rates in single-center and multicenter studies, this would point to minimal differences both in patients and in treatment effectiveness across the designs. In short, similar combined-treatments survival rates for the selected SEER cases and for a set of randomized studies would support the conclusion of similar overall effectiveness of breast conservation in day-to-day medical practice and in the randomized studies. In contrast, if the SEER combined-treatments survival rate proved to be different from the corresponding rates for randomized studies, a number of interpretations would be possible—including a difference in patients as well as a difference in the general quality of treatments being given.

In comparing combined-treatments survival rates across studies, it is necessary to take account of any differences in tumor size—specifically, any differences between the tumor sizes of the selected SEER patients and the patients in the single-center and multicenter randomized studies. This is because tumor size is related to patient survival. As previously noted, four of the six randomized studies had a tumor-size limit of 4 cm, whereas two studies had a limit of 2 cm; the roughly comparable set of SEER patients had a tumor-size limit of 4 cm. The comparison of combined-treatments survival rates is easiest to make for SEER data versus multicenter studies.
This is because all multicenter studies had, in effect, the same tumor-size limit (4 cm) and the SEER cases selected for our analyses were also subjected to the 4-cm limit. Therefore, in this section, we separately discuss (1) the comparison of the SEER combined-treatments survival rate to the combined-treatments survival rate for multicenter studies and (2) the corresponding comparison for SEER and single-center studies.

Table 10 (first row) shows the combined-treatments 5-year survival rate for the full set of SEER cases used in the foregoing analyses; this rate—86.9 percent (or 87 percent, rounded)—is appropriate for comparison to the multicenter studies. As shown in the bottom row of table 11, the difference in rates is only 1 percentage point and is not significant. The most parsimonious explanation of this result is that, at least on average and with respect to 5-year survival, there are (1) no substantial differences between the patients in our SEER analysis and the patients in the multicenter studies and (2) no large difference between the effectiveness of breast-conservation therapy or mastectomy across the two types of analyses.

The comparisons are more complex for SEER versus the single-center studies because two of the three studies had a 2-cm limit. The SEER weighted composite estimate in the last row of table 10 (89.4 percent) combines (1) the survival estimate for the full set of selected SEER cases (4-cm limit) with (2) the survival estimate for the subset of cases defined with a 2-cm limit. (See table 10, note c.) This survival estimate is appropriate for comparison to the single-center studies’ estimate (93.7 percent). From table 11, it is clear that with breast-conservation and mastectomy patients taken together, the 5-year survival rate for patients in single-center randomized studies is higher than the rate for the corresponding SEER estimate—by a difference of 4.3 percentage points, which is statistically significant. The meaning of this finding is unclear. It could be explained by the argument that implementations of treatments in single-center studies are generally better than implementations in multicenter studies or day-to-day medical practice (which seems plausible). But it could also be explained by hidden selection of patients with better prognoses for the single-center studies.

In this report, we examined the relative effectiveness of breast-conservation therapy and mastectomy for patients treated in three contexts: single-center randomized studies, multicenter randomized studies, and day-to-day medical practice. In each context, the summary data indicated that 5-year survival was similar following the two alternative treatments. The best outcomes for both treatments occurred in the single-center studies; however, outcomes for the SEER medical practice patients were comparable to outcomes in the multicenter studies. We recognize that database analyses are vulnerable to hidden selection bias. But we believe such bias is likely to be minimal in the SEER analyses presented here because (1) a homogeneous group of patients was examined, (2) careful adjustments were made for differences in tumor size and demographic characteristics (using the propensity-score method), and (3) a check for possible selection bias on life-threatening factors unrelated to cancer (such as heart disease) reaffirmed our initial conclusion.
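The logic of the combined-treatments check can be seen in a simple hypothetical calculation. If 900 of 1,000 patients (breast-conservation and mastectomy patients taken together) survived 5 years, the pooled rate is

    \[ \text{combined-treatments rate} = \frac{900}{1{,}000} = 90~\text{percent}, \]

regardless of how the survivors happen to be divided between the two treatment arms. Internal selection bias can distort the arm-specific rates by steering better-prognosis patients toward one treatment, but it cannot change the pooled numerator or denominator; the combined rate reflects only who the patients are and how well they fared overall.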
In addition, the fact that the combined-treatments survival rate was similar in multicenter studies and in the SEER data points to similar levels of treatment effectiveness across these two designs. We caution that this analysis does not prove the absence of selection bias in the SEER analysis—and that these results are limited to the patient population, treatments, and outcome that we were able to examine empirically. Nevertheless, virtually all the evidence that we were able to examine pointed toward the similarity of patient survival following breast-conservation therapy and mastectomy—in day-to-day medical practice as well as in the randomized studies. Only one caveat was suggested by the results of our analyses: A minority of breast-conservation patients—the kinds of patients for whom breast-conservation therapy was relatively unlikely to be used (based on factors such as residence in areas where breast conservation is relatively uncommon) but who nevertheless did receive it—may have achieved slightly better results with mastectomy. The observed difference, however, was not statistically significant.

This report does not examine agency programs; thus, we did not request agency comments. However, we obtained reviewer comments from staff at the National Cancer Institute and the Agency for Health Care Policy and Research; from a number of university-based researchers with expertise in statistics, research methods, or breast cancer; and from investigators in charge of each of the randomized studies. (See appendix II.) We will be sending copies of this report to the Director of the National Cancer Institute and to other interested parties. We will also make copies available upon request. If you have any questions, please call me at (202) 512-2900, or call Robert L. York, Director of Program Evaluation in Human Services Areas, at (202) 512-5885 or Judith A. Droitcour, Assistant Director, at (202) 512-5885. Major contributors to this report are listed in appendix III.

Some of the tables in this report present 95-percent confidence intervals in addition to point estimates. These intervals reflect the fact that estimates of the parameter in question (for example, the odds ratio) might fluctuate because of random variation in the data. If the 95-percent confidence interval for an odds ratio includes 1 (the point of equivalent odds), there is no statistically significant difference (at the .05 level) between the odds of survival following breast-conservation therapy and the odds of survival following mastectomy. Similarly, if the 95-percent confidence interval for a difference in percentages includes 0 (the point of equivalence), there is no statistically significant difference between the percentages being compared (at the .05 level). A statistically significant difference is one that is not likely to have occurred by chance alone. The utility of confidence intervals and significance tests is not limited to randomly selected samples. (See Winch and Campbell, 1969.)

In comparing patient survival rates—for example, in comparing the 5-year survival rate for breast-conservation patients to the corresponding rate for mastectomy patients—we termed the two rates “similar” when the observed difference between rates was less than 1.5 percentage points (absolute value) and that difference in rates was not statistically significant. (See table I.1.)
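A short sketch, using hypothetical counts and a standard normal approximation for the difference between two proportions (one common way to carry out such a test), may help make the significance criterion concrete; the remaining cells of table I.1 are defined below.

    # Illustrative significance test for the difference between two 5-year
    # survival rates, using a normal approximation; all counts are hypothetical.
    import math

    surv_bc, n_bc = 880, 1000   # breast-conservation survivors / patients
    surv_m, n_m = 869, 1000     # mastectomy survivors / patients
    p_bc, p_m = surv_bc / n_bc, surv_m / n_m
    pooled = (surv_bc + surv_m) / (n_bc + n_m)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_bc + 1 / n_m))
    z = (p_bc - p_m) / se
    # |z| < 1.96 corresponds to a 95-percent confidence interval for the
    # difference that includes 0: no statistically significant difference.
    print(round(p_bc - p_m, 3), round(z, 2), abs(z) < 1.96)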
When a comparison of survival rates showed a difference of 1.5 percentage points or larger—and that difference was statistically significant—we used the terms “higher” and “lower.” When survival rates differed by 1.5 percentage points or more—but statistical significance was not attained—we termed the result a nonsignificant pattern. (A nonsignificant pattern is considered inconclusive because of the lack of statistical significance. See table I.1.) This approach recognizes that a high degree of statistical power is required to detect significant differences as small as 1.5 percentage points. Without a high degree of statistical power, we believe it would be inappropriate to term results “similar” merely because of a failure to find a significant difference. With respect to the remaining possibility depicted in table I.1—a difference of less than 1.5 percentage points that is statistically significant—we note that this would not occur except where extremely large samples allowed very precise estimates. Were any findings to fall into this category, the conclusion would be that a real, although relatively small, difference does exist—and has been estimated very precisely.

A size-of-difference criterion (cutting point) was used because of the relative imprecision of the estimates, given the existing studies and data. We wished to choose a cutting point that, in our judgment, would represent a difference in survival rates that could reasonably be considered “similar.” Thus, we rejected potential cutting points that seemed too high (such as 5 percentage points) because we believed most patients would not consider survival rates that differed by that amount to be similar. In this context, a criterion of 1 percentage point or less versus a larger difference initially seemed reasonable. We chose 1.5 as the specific cutting point (that is, a difference of less than 1.5 percentage points versus 1.5 or greater) because it was possible to obtain most, though not all, survival estimates rounded to the nearest tenth of a percent. Detecting a statistically significant difference as small as 1.5 percentage points would require a very large number of patients in each treatment group. In the area of breast-conservation therapy and mastectomy, the samples in the randomized studies—and in the database analysis presented here—fall short of that requirement. In choosing this cutting point, we also recognize that it is, to some extent, arbitrary. We do not mean to imply that this figure represents the point at which a particular physician or patient would distinguish between a “meaningful difference” and an irrelevant one. We are also cognizant of the fact that, for every 10,000 patients who receive a treatment characterized by even a 1-percentage-point lower survival rate than an available alternative treatment, there would be 100 deaths that could have been avoided by choosing the other treatment—provided that the observed 1-percentage-point difference is, in fact, a real difference and not merely the result of random variation.

In this report, we have avoided use of the term “equivalent” to describe the survival rates observed for breast-conservation and mastectomy patients. A technical reason for this is that to claim “equivalent” survival following the two treatments would require the confidence interval surrounding the difference to be so small that it could be entirely enclosed by a prespecified interval—specifically, one defined such that all values within it would be justifiable as clinical equivalence. (See Fleiss, 1992.)
That is, not only would we have to justify a difference of 1.5 percentage points as clinically equivalent, but both the upper and lower bounds of the confidence interval surrounding our estimate of the difference would have to be within 1.5 percentage points of zero. This degree of precision would only be possible with very large samples.

This section (1) describes our methods of combining randomized studies’ results, including the use of “effective n’s” and rounding rules; (2) describes the patients included in the six randomized studies that met our criteria; and (3) briefly discusses the two English studies that were omitted from our analyses because they did not meet our treatment criteria.

We conducted the meta-analysis of six randomized studies primarily to produce information that could be compared to the separate statistical analysis of selected cases from the SEER database. We began our work for the meta-analysis of randomized studies’ results by calculating for each randomized study an odds ratio for 5-year survival (because the outcome criterion for the SEER analysis was 5-year survival). Then we tested for the homogeneity of the odds ratios and, because no significant heterogeneity was found, combined them in a common odds ratio. Specifically, we used the Mantel-Haenszel (1959) method and the STAT XACT program produced by Cytel Software of Cambridge, Massachusetts. STAT XACT uses the Breslow-Day (1980) method of testing for homogeneity of odds ratios. The confidence intervals surrounding the odds ratios were also calculated using the STAT XACT program and are based on the variance estimation method of Robins, Breslow, and Greenland (1986).

Three of the six randomized studies—the Milan study, the French study conducted at the Institut Gustave-Roussy, and the U.S.-NSABP—had both (1) started long enough ago that, except for patients lost to follow-up, all had been followed for 5 years and (2) calculated recent estimates of 5-year survival for node-negative patients. Thus, for these three studies, estimates of 5-year survival were based on 5 or more years of follow-up for all or almost all patients. For the other three studies (U.S.-NCI, Danish, and EORTC), the 5-year survival estimates were actuarial and included a more substantial number of patients who had not been followed for 5 years. To treat these actuarial estimates appropriately in our meta-analyses, we developed the following approach: obtain the standard errors of the actuarial estimates (that is, standard errors that take account of how long each patient has been followed up); calculate the “effective n” associated with each actuarial estimate, according to the formula shown by Cutler and Ederer (1958); multiply the actuarial estimate of 5-year survival by the effective n—thus obtaining the effective number who survived (and, by subtraction, died) in each treatment group of each study; and use these “effective n’s” in calculating the common odds ratio for the meta-analysis. Effective n’s for the three studies were calculated as shown in table I.2.

The most precise estimates available were used. The combined-studies (and combined-treatments) survival estimates were, where possible, rounded to the nearest tenth of a percentage point. Because estimates for one multicenter randomized study (EORTC) were only available to the nearest full percentage point, summary figures involving that study were rounded to the nearest full percentage point—to avoid implying greater precision than was possible.
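On our reading of the Cutler and Ederer (1958) formula, the effective n is the binomial-equivalent sample size implied by an actuarial estimate and its standard error. If \(\hat{S}\) denotes the actuarial estimate of 5-year survival and \(SE(\hat{S})\) its standard error, then

    \[ n_{\mathrm{eff}} = \frac{\hat{S}\,(1-\hat{S})}{\big[SE(\hat{S})\big]^{2}}, \qquad \text{effective survivors} = n_{\mathrm{eff}}\,\hat{S}, \qquad \text{effective deaths} = n_{\mathrm{eff}}\,(1-\hat{S}). \]

These effective counts were entered into the common-odds-ratio calculation in place of raw patient counts.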
Odds ratios were calculated using the most precise figures possible; however, in preparing the data from each randomized study, the number of patients who died within 5 years and the number who survived were calculated from reported percentages and then rounded to the nearest patient (whole number). Odds ratios, which were based on the rounded numbers of patients, were themselves rounded to two decimal places. Differences between reported survival rates were calculated using the most precise figures possible—and then rounded for presentation in tables. Slight differences in results may have occurred because of rounding procedures and the use of effective n’s (described above).

This description of the characteristics of patients who participated in randomized studies is based on published eligibility requirements as well as informal requirements identified through data on the kinds of patients who were actually included (which we obtained, as needed, by calling investigators). Briefly, all patients in the six randomized studies had invasive breast cancer. As shown in table I.3, almost all patients were age 70 or younger and had tumors of 4 cm or less. Two of the three single-center studies admitted only patients with tumors of 2 cm or less. Most randomized studies had numerous eligibility requirements in addition to the age and tumor-size limits. For the U.S. studies, these were as follows: U.S.-NCI. Tumor confined to breast and axillary nodes, no advanced local disease, no inflammatory carcinoma, no multiple masses or bilateral cancer, no Paget’s disease, no prior cancer. U.S.-NSABP. No fixation to underlying muscle or chest wall, no clinical evidence of skin involvement or distant metastases, no multiple masses (unless all but one proved benign), no prior cancer. With respect to type of breast cancer (histology), the U.S.-NCI randomized study further noted that almost all patients had infiltrating duct carcinoma. The Milan study also reported that a majority of patients had this type of cancer.

Two English studies (Atkins et al., 1972; Hayward, 1981; Hayward and Caleffi, 1987) did not meet our treatment criteria because they did not include nodal dissection as part of the breast-conservation therapy that they provided. The two studies are unique in several ways and are therefore briefly discussed in this appendix. First, treatments given in the two English studies differed from treatments given in other randomized studies. As mentioned above, the 1961 and 1971 English studies did not perform nodal dissection on breast-conservation patients. In addition, they have been criticized for providing inadequate radiation (Harris et al., 1983). Second, patient survival rates appeared to be considerably lower than in the six studies that met our criteria. This suggests that patients in the English studies may have had poorer prognoses or been subjected to poorer treatment implementations, or both. Third, the two English studies were conducted earlier than the other studies. They began in 1961 and 1971, and the 1971 study used the same procedures as the 1961 study. The six studies in our analysis were begun between 1972 and 1983. Fourth, in the two English studies, the overall pattern indicated that lumpectomy was less effective than mastectomy. In the first English study, it was clear—early on—that clinically node-positive patients who received lumpectomy showed lower survival rates than those who received mastectomy.
Therefore, only clinically node-negative patients were included in the second English study; however, the clinically node-negative breast-conservation patients in the second English study showed lower 5-year survival than corresponding mastectomy patients. And when the 10-year follow-up was completed for the first study, the clinically node-negative patients in that study also showed a pattern of higher survival with mastectomy than with lumpectomy. Although the English studies did not qualify for our analyses, we believe they are noteworthy in that they caution that there is at least some question as to whether breast-conservation therapy and mastectomy produce comparable survival results when treatment implementations are poorer or when patients have poorer prognoses.

SEER began recording the type of surgery that breast cancer patients received for the cohort diagnosed in 1983. At the time we performed the analyses reported here, SEER follow-up was available through 1990. We therefore selected patients diagnosed from 1983 through 1985—all of whom could be followed for 5 years. The number of positive nodes was not recorded for these diagnostic cohorts. Because the number of positive nodes is a key prognostic factor for early-stage node-positive patients—and may also be associated with selection of surgery—we believe it is necessary for a statistical analysis aimed at minimizing selection bias among node-positive patients. Data on longer term survival and on node-positive patients are provided by randomized studies. As more SEER data become available, SEER analyses that cover node-positive patients and longer term survival will be possible.

The SEER analyses presented in this report are based on 5,326 breast cancer patients. This dataset was formed by accessing the SEER database for 1983 to 1985 diagnoses and selecting patients who met the following criteria: no previous diagnosis with another cancer; type of treatment, disease-related, and demographic characteristics known; patient followed for 5 years or longer; node-negative invasive breast cancer that had not spread beyond the breast (no chest wall involvement, no skin involvement, no attachment to the pectoral muscle); tumor 4 cm or smaller; type of cancer: infiltrating duct carcinoma or adenocarcinoma (NOS); type of treatment: if breast-conservation therapy, lumpectomy with nodal dissection plus radiation; if mastectomy, no “outlier” treatments (that is, no subcutaneous mastectomy, no mastectomy without nodal dissection, no radical mastectomy, no mastectomy plus radiation); and age 70 or younger. In the resulting dataset of 5,326 patients, about 20 percent of patients received breast-conservation therapy; the remaining 80 percent received mastectomy.

Preliminary analyses on a broader set of SEER patients included those who had been lost to follow-up before the requisite 5 years had elapsed following diagnosis (6.2 percent had been lost to follow-up). In these analyses, the patients who were followed for at least 5 years and those who were not proved to be virtually identical with respect to both tumor size (the main prognostic factor for node-negative patients) and type of surgery. Specifically, patients not followed had an average tumor size of 2 cm, as did those followed for all 5 years; and 17 percent of the followed patients received breast-conservation therapy (as opposed to mastectomy), as did 17 percent of those lost to follow-up.
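The selection criteria listed above amount to a set of straightforward filters on individual SEER records. The Python sketch below shows the kind of screen involved; the field names and codings are hypothetical, not the actual SEER variable names.

    # Illustrative cohort screen; field names and codings are hypothetical,
    # not actual SEER variable names.
    ALLOWED_HISTOLOGY = {"infiltrating duct carcinoma", "adenocarcinoma (NOS)"}
    ALLOWED_TREATMENT = {"bct: lumpectomy + nodal dissection + radiation",
                         "mastectomy (no outlier treatments)"}

    def eligible(patient):
        """Apply the selection criteria described above to one record."""
        return (1983 <= patient["diagnosis_year"] <= 1985
                and not patient["prior_cancer"]
                and patient["followup_years"] >= 5
                and patient["node_negative"]
                and not patient["spread_beyond_breast"]
                and patient["tumor_cm"] <= 4
                and patient["histology"] in ALLOWED_HISTOLOGY
                and patient["treatment"] in ALLOWED_TREATMENT
                and patient["age"] <= 70)

    example = {"diagnosis_year": 1984, "prior_cancer": False,
               "followup_years": 6, "node_negative": True,
               "spread_beyond_breast": False, "tumor_cm": 1.8,
               "histology": "infiltrating duct carcinoma",
               "treatment": "bct: lumpectomy + nodal dissection + radiation",
               "age": 55}
    print(eligible(example))  # True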
To derive the propensity scores, we entered patient characteristics into a logistic regression model predicting selection for breast-conservation therapy. The six patient characteristics entered were year in which the patient was diagnosed (time), geographic area of residence (place), size of the patient’s tumor, patient’s age at diagnosis, marital status, and race or ethnicity. Because the ultimate objective of the propensity-score analysis was to enhance equivalence of the two SEER treatment groups on all measured variables, all six variables were included in the final model. Five of the six variables did prove to significantly affect a patient’s probability of receiving breast-conservation therapy. The model also included one significant interaction term—the interaction of geographic area with diagnostic year. (See table I.4.)

As expected, patients with smaller tumors were more likely to receive breast-conservation therapy than patients with larger tumors. However, the other patient characteristics determining selection for breast-conservation therapy argued against a unidimensional selection process in which patients with better prognoses are consistently selected for breast-conservation therapy. Notably, patients under 40 had relatively high odds of receiving breast-conservation therapy, although there is some evidence that they may have less favorable prognoses than middle-aged patients (de la Rochefordiere et al., 1993). In addition, Asian women had lower odds than others of receiving breast-conservation therapy, although they may have somewhat better prognoses than other breast cancer patients.

The propensity scores (probabilities of breast-conservation therapy obtained using the model in table I.4) for the SEER patients examined here ranged from .01 to .69. The propensity scores were used to divide patients into five quintiles, as suggested by Rosenbaum and Rubin (1984). The first quintile consists of patients who were least likely to receive breast-conservation therapy, whereas the fifth quintile consists of those who were most likely to receive it. [Table I.4 presents the model, including tumor size (cm) and the interaction of DODY and registry; –2 log likelihood = 4794.498; chi-square = 646.975 with 27 df. Selection for breast-conservation therapy (versus mastectomy) is predicted using diagnostic year (DODY, 1983 to 1985), SEER registry (geographic location), and patient characteristics; breast-conservation therapy was coded 1 and mastectomy was coded 0.]

As intended, the propensity-score quintiles differentiated between patient subgroups; that is, major differences across the quintiles were apparent. Notably, half (51 percent) of quintile 1 patients (low probability of breast-conservation therapy) had tumors larger than 2 cm, whereas only 14 percent of quintile 5 patients had tumors of that size. With respect to geographic area, 70 percent of quintile 1 patients were from Iowa, metropolitan Detroit, or metropolitan Atlanta; by contrast, 73 percent of quintile 5 patients were from the San Francisco-Oakland or the Seattle-Puget Sound registries. Only 6 percent of quintile 1 patients were diagnosed in 1985, compared to 66 percent of quintile 5. Within each propensity-score quintile, we checked the breast-conservation therapy and mastectomy groups for equivalence on all six variables.
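As an illustration of this procedure, the sketch below fits a logistic regression and forms propensity-score quintiles using Python with pandas and scikit-learn. The data, variable set, and coefficients are invented for the example; they are not the SEER analysis file or the model in table I.4.

    # Illustrative propensity-score sketch (invented data, not the SEER file).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "tumor_cm": rng.uniform(0.2, 4.0, n),
        "age": rng.integers(25, 71, n),
        "year": rng.integers(1983, 1986, n),   # 1983-1985
    })
    # Invented selection process: smaller tumors and later diagnostic years
    # raise the probability of breast-conservation therapy (coded 1).
    logit = -1.5 - 0.5 * df["tumor_cm"] + 0.4 * (df["year"] - 1983)
    df["bct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(
        df[["tumor_cm", "age", "year"]], df["bct"])
    df["propensity"] = model.predict_proba(df[["tumor_cm", "age", "year"]])[:, 1]

    # Five quintiles, from least to most likely to receive breast conservation.
    df["quintile"] = pd.qcut(df["propensity"], 5, labels=[1, 2, 3, 4, 5])
    print(df.groupby("quintile", observed=True)["bct"].mean())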
No major differences were found; two relatively minor differences were adjusted for, as follows: First, with respect to tumor size, within four of five quintiles, a slightly higher proportion of mastectomy patients than breast-conservation patients had tumors larger than 2 cm. For example, within quintile 5, 15 percent of mastectomy patients had tumors larger than 2 cm, as compared to 12 percent of breast-conservation patients. We therefore adjusted results within each quintile so that the patients with larger tumors would contribute equally to the mastectomy survival estimate and to the breast-conservation survival estimate for that quintile. Second, with respect to year of diagnosis, within quintile 5 there was a significant difference between mastectomy patients and breast-conservation patients: 64 percent of the mastectomy patients in quintile 5 had been diagnosed in 1985 as compared to 70 percent of breast-conservation patients. Although year of diagnosis is not generally associated with differences in patient survival, we took the precaution of adjusting results for quintile 5 so that patients diagnosed in 1985 would contribute equally to that quintile’s breast-conservation survival estimate and its mastectomy survival estimate (as would patients diagnosed in 1984 and 1983). Using the quintiles together with the additional adjustments ensures that the comparison between survival rates following breast-conservation therapy and mastectomy is based on patient groups that were adjusted to be as “equivalent” as possible on all relevant measured variables.

The experts listed here commented on one or more drafts of the report or advised us on the methods used in our analyses, or both. We are grateful for the gracious contributions of all these individuals.

Altman, Roberta, and Michael Sarg. The Cancer Dictionary. New York: Facts on File, 1992.
Andersen, Knud West. Personal communication. Danish Breast Cancer Cooperative Group, Aug. 10, 1993.
Atkins, Sir Hedley, et al. “Treatment of Early Breast Cancer: A Report after Ten Years of a Clinical Trial,” British Medical Journal, 2:423-29, 1972.
Blichert-Toft, M., et al. “Danish Randomized Trial Comparing Breast Conservation Therapy with Mastectomy: Six Years of Life-Table Analysis,” Journal of the National Cancer Institute Monographs, 11:19-25, 1992.
Blichert-Toft, M., et al. “A Danish Randomized Trial Comparing Breast-Preserving Therapy with Mastectomy in Mammary Carcinoma: Preliminary Results,” Acta Oncologica, 27(Fasc. 6a):671-77, 1988.
Breslow, N.E., and N.E. Day. Statistical Methods in Cancer Research: Volume 1 - The Analysis of Case-Control Studies. Lyon, France: International Agency for Research on Cancer (Pub. No. 32), 1980.
Byar, David P. “Why Data Bases Should Not Replace Randomized Clinical Trials,” Biometrics, 36:337-42, 1980.
Cutler, Sidney J., and Fred Ederer. “Maximum Utilization of the Life Table Method in Analyzing Survival,” Journal of Chronic Diseases, 8:699-712, 1958.
de la Rochefordiere, Anne, et al. “Age as a Prognostic Factor in Premenopausal Breast Carcinoma,” Lancet, 341:1039-43, 1993.
Dickersin, K., and J. Berlin. “Meta-Analysis: State-of-the-Science,” Epidemiologic Reviews, 14:154-76, 1992.
Droitcour, Judith A., George Silberman, and Eleanor Chelimsky. “Cross Design Synthesis: A New Form of Meta-analysis for Combining Results from Randomized Clinical Trials and Medical-Practice Databases,” International Journal of Technology Assessment in Health Care, 9(3):440-49, 1993.
Ellenberg, Susan S.
“Meta-Analysis: The Quantitative Approach to Research Review,” Seminars in Oncology, 15:472-81, 1988.
Fleiss, Joseph L. “General Design Issues in Efficacy, Equivalency, and Superiority Trials,” Journal of Periodontal Research (special issue), 27:306-13, 1992.
Freidlin, Boris. Personal communication. EMMES Corp., June 23, 1994.
GAO. (See U.S. General Accounting Office.)
Hankey, Benjamin F., et al. “Overview.” In Barry A. Miller et al., Cancer Statistics Review, 1973-1989 (NIH Pub. No. 92-2789). Bethesda, Md.: National Institutes of Health, 1992.
Harris, Jay R., et al. “Conservative Surgery and Radiotherapy for Early Breast Cancer,” Cancer, 66 (Sept. 15 Supp.):1427-38, 1990.
Harris, Jay R., Samuel Hellman, and William Silen (eds.). Conservative Management of Breast Cancer: New Surgical and Radiotherapeutic Techniques. Philadelphia: J.B. Lippincott, 1983.
Hayward, John L. “The Guy’s Hospital Trials on Breast Conservation.” In Jay R. Harris, Samuel Hellman, and William Silen (eds.), Conservative Management of Breast Cancer: New Surgical and Radiotherapeutic Techniques. Philadelphia: J.B. Lippincott, 1983, pp. 77-90.
Hayward, John, and Maira Caleffi. “The Significance of Local Control in the Primary Treatment of Breast Cancer,” Archives of Surgery, 122:1244-47, 1987.
Kahn, Harold A., and Christopher T. Sempos. Statistical Methods in Epidemiology. New York: Oxford University Press, 1989.
Lee-Feldstein, Anna, Hoda Anton-Culver, and Paul J. Feldstein. “Treatment Differences and Other Prognostic Factors Related to Breast Cancer Survival: Delivery Systems and Medical Outcomes,” Journal of the American Medical Association, 271(15):1163-68, 1994.
Lichter, Allen S., et al. “Mastectomy Versus Breast-Conserving Therapy in the Treatment of Stage I and II Carcinoma of the Breast: A Randomized Trial at the National Cancer Institute,” Journal of Clinical Oncology, 10(6):976-83, 1992.
Louis, Thomas A., Harvey V. Fineberg, and Frederick Mosteller. “Findings for Public Health From Meta-Analyses,” Annual Review of Public Health, 6:1-20, 1985.
Mantel, Nathan, and William Haenszel. “Statistical Aspects of the Analysis of Data from Retrospective Studies of Disease,” Journal of the National Cancer Institute, 22:719-48, 1959.
Mignolet, Francoise. Personal communication. Brussels: EORTC Data Center, Oct. 12, 1994.
Mosteller, Frederick, and John W. Tukey. Data Analysis and Regression. Reading, Mass.: Addison-Wesley, 1977.
National Institutes of Health. “Early-Stage Breast Cancer—NIH Consensus Conference,” Journal of the American Medical Association, 265(3):391-95, 1991.
Office of Technology Assessment. (See U.S. Congress, Office of Technology Assessment.)
Percy, Constance, Valerie Van Holten, and Calum Muir. International Classification of Diseases for Oncology, 2nd ed. Geneva: World Health Organization, 1990.
Robins, J., N. Breslow, and S. Greenland. “Estimators of the Mantel-Haenszel Variance Consistent in Both Sparse Data and Large-Strata Limiting Models,” Biometrics, 42:311-23, 1986.
Rosenbaum, Paul R., and Donald B. Rubin. “Reducing Bias in Observational Studies Using Subclassification on the Propensity Score,” Journal of the American Statistical Association, 79(387):516-24, 1984.
Rubin, Donald B., and N. Thomas. “Characterizing the Effect of Matching Using Linear Propensity Score Methods with Normal Distributions,” Biometrika, 79(4):797-809, 1992.
Sacks, Nigel P.M., and M. Baum. “Primary Management of Carcinoma of the Breast,” Lancet, 342:1402-08, 1993.
Sarrazin, Daniele. Personal communication. Institut Gustave-Roussy, Aug. 6, 1993.
Sarrazin, Daniele, et al.
“Ten-year Results of a Randomized Trial Comparing a Conservative Treatment to Mastectomy in Early Breast Cancer,” Radiotherapy and Oncology, 14:177-84, 1989.
Sarrazin, Daniele, et al. “Conservative Treatment Versus Mastectomy in Breast Cancer Tumors with Macroscopic Diameter of 20 Millimeters or Less: The Experience of the Institut Gustave-Roussy,” Cancer, 53:1209-13, 1984.
Sarrazin, Daniele, et al. “Conservative Treatment Versus Mastectomy in T1 or Small T2 Breast Cancer—A Randomized Clinical Trial.” In Jay R. Harris, Samuel Hellman, and William Silen (eds.), Conservative Management of Breast Cancer: New Surgical and Radiotherapeutic Techniques. Philadelphia: J.B. Lippincott, 1983, pp. 101-11.
Stablein, D.M. Personal communication. EMMES Corp., June 10, 1994a.
Stablein, D.M. A Reanalysis of NSABP Protocol B06: Final Report. Potomac, Md.: EMMES Corp., 1994b.
Steinberg, Seth M. Personal communication. National Cancer Institute, July 13, 1993.
Straus, K., et al. “Results of the National Cancer Institute Early Breast Cancer Trial,” Journal of the National Cancer Institute Monographs, 11:27-32, 1992.
Swanson, G. Marie, et al. “Trends in Conserving Treatment of Invasive Carcinoma of the Breast in Females,” Surgery, Gynecology & Obstetrics, 171:465-71, 1990.
U.S. Congress, Office of Technology Assessment. Identifying Health Technologies That Work: Searching for Evidence, OTA-H-608. Washington, D.C.: U.S. Government Printing Office, 1994.
U.S. General Accounting Office. Cross Design Synthesis: A New Strategy for Medical Effectiveness Research (GAO/PEMD-92-18). Washington, D.C.: U.S. General Accounting Office, 1992.
van Dongen, J.A., et al. “Factors Influencing Local Relapse and Survival and Results of Salvage Treatment after Breast-Conserving Therapy in Operable Breast Cancer: EORTC Trial 10801, Breast Conservation Compared With Mastectomy in TNM Stage I and II Breast Cancer,” European Journal of Cancer, 28A(4-5):801-05, 1992a.
van Dongen, J.A., et al. “Randomized Clinical Trial to Assess the Value of Breast-Conserving Therapy in Stage I and II Breast Cancer, EORTC 10801 Trial,” Journal of the National Cancer Institute Monographs, 11:15-18, 1992b.
Veronesi, Umberto. Personal communication. Istituto Europeo di Oncologia, June 21, 1994.
Veronesi, Umberto. “Local Control and Survival in Early Breast Cancer: The Milan Trial,” International Journal of Radiation Oncology, Biology, Physics, 12:717-20, 1986a.
Veronesi, Umberto, et al. “Comparison of Halsted Mastectomy with Quadrantectomy, Axillary Dissection, and Radiotherapy in Early Breast Cancer: Long-Term Results,” European Journal of Cancer and Clinical Oncology, 22(9):1085-89, 1986b.
Veronesi, Umberto, et al. “Comparing Radical Mastectomy with Quadrantectomy, Axillary Dissection, and Radiotherapy in Patients with Small Cancers of the Breast,” New England Journal of Medicine, 305(1):6-11, 1981.
Winch, Robert, and Donald Campbell. “Proof? No. Evidence? Yes. The Significance of Significance Tests,” American Sociologist, May 1969:140-43.
Winchester, David P., and James D. Cox. “Standards for Breast-Conservation Treatment,” CA-A Cancer Journal for Clinicians, 42(3):134-59, 1992.
Woolf, B. “On Estimating the Relation Between Blood Group and Disease,” Annals of Human Genetics, 19:251-53, 1955.
Pursuant to a congressional request, GAO reviewed the survival rates of patients receiving breast-conservation therapy versus mastectomy, focusing on whether patients’ survival is affected by having treatment in single-center rather than multicenter settings. GAO found that: (1) the 5-year survival rates for breast cancer patients treated with breast-conservation therapy were similar to those for mastectomy in community medical practices; (2) patients who were treated at single-center facilities usually had slightly higher survival rates than patients who received treatment at multicenter facilities; (3) on average, breast cancer patients appear to be at no appreciable risk in selecting breast-conservation therapy rather than mastectomy; (4) although patient survival data were vulnerable to hidden selection bias, such bias was unlikely because the group of patients reviewed was homogeneous and adjustments were made for patients’ health and demographic characteristics; and (5) although a minority of patients who voluntarily chose breast-conservation therapy over recommended mastectomy could have achieved slightly better results with mastectomy, the differences were not statistically significant.
Education disburses grant and loan payments by electronic funds transfer and processes these payments in GAPS. This disbursement process relies extensively on various computer systems application controls, or edit checks, to help ensure the propriety of these payments. Because these edit checks are important to the Department’s controls over grant and loan payments, we focused our work on assessing whether existing edit checks were working effectively and whether additional edit checks and controls are needed. Using computerized matching techniques, we tested the $181.4 billion of grant and loan payments processed through GAPS to identify potentially improper payments that could have resulted from either ineffective edit checks or the lack of necessary edit checks. Following are examples of improper and potentially improper payments we identified through our various tests.

We found that Education’s student aid application processing system lacks an automated edit check that would identify students who are much older than expected. To identify improper payments that may have resulted from the absence of this edit check, we initially identified institutions that disbursed Pell Grants over multiple years to students 70 years of age or older. We chose to test for students of this age because we did not expect large numbers of older students to be enrolled in a degree program and thus eligible for student aid. Based on the initial results of our test of students 70 years of age or older, and because of the problems we identified in the past, we decided to expand our review of schools that had disproportionately high numbers of older students to include recipients 50 years of age or older.

Our Office of Special Investigations, in coordination with Education’s IG, investigated four schools that disbursed as much as $3.4 million in Pell Grants to ineligible students. These students were ineligible because their primary course of study was English as a second language and they were neither seeking a degree nor determined to need English language instruction in order to use their existing knowledge and skills. The investigation disclosed that at least one of the schools generated fraudulent student admissions documents to create the appearance that students who were not in fact seeking a degree were participating in a degree program. We previously investigated two of these four schools in 1993 and found similar activities, including the falsification of student records to support the schools’ eligibility to participate in the Pell Grant program. We have also identified three other schools, which disbursed about $500,000 in Pell Grants, that warrant additional review. These schools have unusually high concentrations of older, foreign-born students who are more likely to be studying English as a second language. We will formally refer the information related to these three schools, as well as the results of our investigations of the four schools discussed above, to Education’s IG for appropriate follow-up.

During our testing, we also identified an additional 708 schools that disbursed Pell Grants totaling $4.5 million to students 70 years or older. We provided lists of these schools to the Department for additional analysis. Based on its analysis, Education has determined that two of these schools also exhibited disbursement patterns similar to those of the schools discussed above that disbursed Pell Grants to ineligible students for the study of English as a second language.
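Screens of this kind are straightforward to automate. As a purely illustrative example (the trigger threshold and the record layout are our own, not Education’s actual edit logic), the following Python sketch flags schools that disbursed Pell Grants to unusually high numbers of older students.

    # Illustrative screen: flag schools with many older Pell recipients.
    # The threshold and record layout are invented for this example.
    from collections import defaultdict

    AGE_CUTOFF = 50   # expanded review threshold discussed above
    MIN_COUNT = 25    # hypothetical "disproportionately high" trigger

    def flag_schools(disbursements):
        """disbursements: iterable of (school_id, student_age, amount)."""
        counts = defaultdict(int)
        totals = defaultdict(float)
        for school, age, amount in disbursements:
            if age >= AGE_CUTOFF:
                counts[school] += 1
                totals[school] += amount
        return {school: (counts[school], totals[school])
                for school in counts if counts[school] >= MIN_COUNT}

    sample = ([("school-A", 67, 1500.0)] * 30) + ([("school-B", 22, 1500.0)] * 30)
    print(flag_schools(sample))  # only school-A is flagged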
For these two schools, the Department plans to perform full program reviews later this year to assess their eligibility to continue to participate in the Pell Grant program. We are currently expanding our review in this area to determine whether additional schools may be inappropriately disbursing Pell Grants. Education officials told us that they have performed ad hoc reviews in the past to identify Pell Grants disbursed to ineligible students and have recovered some improper payments as a result of these reviews.

Based on the results of our analyses, Education has decided to implement a new edit check for students 85 years or older beginning with the 2002-2003 academic year. If the birth date on a student’s application indicates the student is 85 years of age or older, the application processing system will identify the applicant and Education will forward the information to the school for follow-up. Education also said it conducts other limited procedures (including the use of Single Audit results) to assess schools’ determination of student eligibility. However, these procedures are not specifically designed to identify schools that are knowingly disbursing Pell Grants to students who are not eligible to participate in the program. Regarding the edit check that Education plans to implement in the 2002-2003 academic year, we believe the age limit is too high and will exclude many potential problems. Using Education’s criterion, we would have identified less than 1 percent of the students who were ineligible but received as much as $3.4 million in Pell Grants. Further, given the recurring nature of improper Pell Grant disbursements, we believe it is incumbent upon Education to implement a formal, routine process to identify and investigate questionable disbursement patterns such as those I have discussed.

Another key control, which was not in effect during the time of our review, was a match of student social security numbers (SSN) with Social Security Administration (SSA) death files. As a result, we had SSA compare loan and grant recipient data in Education’s systems with SSA’s death records. SSA identified over 900 instances, totaling $2.7 million, in which the student SSN was listed in SSA’s death records. We are currently reviewing additional data from Education that the Department believes supports the propriety of many of these payments. Beginning with the 2000-2001 award year (subsequent to our review period), as part of the application process, Education started matching student SSNs with SSA death records to identify potentially improper payments.

We also performed several additional tests of Education’s existing edit checks to identify potentially improper grant and loan payments that may not have been detected by these checks. These tests included searches for a single SSN associated with two or more dates of birth, grants to recipients in excess of statutory limits, and invalid SSNs. Based on these tests, we initially identified $43.6 million in potentially improper payments, of which Education has to date been able to provide sufficient supporting documentation for $18.7 million, or about 42 percent. Education is in the process of researching the remaining $24.9 million of potentially improper payments. Our conclusion as to the effectiveness of Education’s existing edit checks will depend on the resolution of the remaining $24.9 million currently being researched by the Department.
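Edit checks like these lend themselves to simple automation. The Python sketch below illustrates two of the tests described (a single SSN associated with two or more dates of birth, and structurally invalid SSNs); the validity rule shown is a simplified stand-in, not the government’s actual edit logic.

    # Illustrative edit checks; the SSN validity rule is deliberately simplified.
    from collections import defaultdict

    def invalid_ssn(ssn):
        """Simplified structural check (rejects area 000/666, group 00,
        serial 0000, and non-9-digit values)."""
        digits = ssn.replace("-", "")
        if len(digits) != 9 or not digits.isdigit():
            return True
        area, group, serial = digits[:3], digits[3:5], digits[5:]
        return area in ("000", "666") or group == "00" or serial == "0000"

    def multiple_birth_dates(records):
        """records: iterable of (ssn, date_of_birth) pairs.

        Returns SSNs associated with two or more dates of birth."""
        dobs = defaultdict(set)
        for ssn, dob in records:
            dobs[ssn].add(dob)
        return [ssn for ssn, dates in dobs.items() if len(dates) > 1]

    records = [("123-45-6789", "1960-01-01"), ("123-45-6789", "1975-07-04"),
               ("666-12-3456", "1980-03-03")]
    print(multiple_birth_dates(records))  # ['123-45-6789']
    print(invalid_ssn("666-12-3456"))     # True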
Education’s third party draft system was originally set up to efficiently process checks to pay non-Education employees who review grant applications, known as field readers. However, in May 1999, Education’s policy manual expanded the use of third party drafts to pay for other expenses, including employee local travel reimbursements, fuel and maintenance for government vehicles, and other small purchases. Third party drafts could be issued for up to $10,000, the limitation printed on the face of each draft. Executive Officers determine who has signature authority within their units. From May 1998 through September 2000, Education’s payments by third party draft totaled $55 million.

During our analysis of the third party draft payment process, we identified several internal control weaknesses, including inadequate computer systems application controls, poor segregation of duties, and inadequate audit trails. Specifically, as we discussed in our April 3, 2001, testimony, Education (1) circumvented a system’s application control designed to avoid duplicate payments by adding a suffix to the invoice/voucher number when the system indicates that an invoice/voucher number has already been used; (2) allowed 21 of the 49 Education employees who could issue third party drafts to do so without involving anyone else; and (3) lacked adequate audit trails, such as a trigger log, to identify changes made to the list of approved vendors.

Based on these weaknesses and information gathered from Education IG reports, we designed tests to identify potentially improper payments in this area. These tests included various automated searches of Education’s disbursement data, as well as manual reviews of about 38,000 third party draft transactions. Based on these analyses thus far, we have identified 268 instances, totaling about $8.9 million, in which multiple third party drafts were issued to the same payee with the same invoice number or on the same day. Education officials are in the process of researching and providing supporting documentation for these transactions, which we will then test for overpayments and duplicate payments. In addition to analyzing the support for the potentially improper payments I have described, we plan to perform various computerized sorts and searches to identify additional anomalies, including a thorough review of third party drafts issued by individuals with complete control over the payment process, to determine whether questionable transactions occurred that require additional research to assess their propriety.

Following the April 3, 2001, hearing, Education took action to eliminate the use of third party drafts. The Department’s Third Party Draft Program’s Closing Procedures, issued in May 2001, indicate that Treasury payments will replace third party drafts. In addition, Education officials acknowledged that the Department lacks adequate trigger logs and told us that they are currently developing and implementing more effective trigger logs. Even though Education is no longer issuing third party drafts, this is an important improvement because the same system that produced those payments also produces the Treasury payments that are replacing them.
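The duplicate-payment search described above can be illustrated with a short Python sketch that groups draft transactions by payee and invoice number and by payee and issue date; the record layout is hypothetical, not the actual structure of Education’s disbursement data.

    # Illustrative duplicate-draft search; the record layout is hypothetical.
    from collections import defaultdict

    def find_duplicates(drafts):
        """drafts: iterable of (payee, invoice_no, date, amount).

        Returns payee/invoice and payee/date combinations with 2+ drafts."""
        by_invoice = defaultdict(list)
        by_day = defaultdict(list)
        for payee, invoice_no, date, amount in drafts:
            by_invoice[("invoice", payee, invoice_no)].append(amount)
            by_day[("same-day", payee, date)].append(amount)
        flagged = {key: amounts for key, amounts in by_invoice.items()
                   if len(amounts) > 1}
        flagged.update({key: amounts for key, amounts in by_day.items()
                        if len(amounts) > 1})
        return flagged

    sample = [("Acme Co.", "INV-1", "2000-05-01", 900.0),
              ("Acme Co.", "INV-1", "2000-06-15", 900.0)]
    print(find_duplicates(sample))  # flags the repeated invoice number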
Treasury requires agencies to establish approved uses and limitations on the types of purchases and dollar amounts. According to a departmental directive, Education’s policy is to use government purchase cards for authorized purchases of expendable goods and services, such as supplies not available from the GSA Customer Supply Center. From May 1998 through September 2000, the time frame for our review, Education’s payments by government purchase card totaled over $22 million.

During our analysis of the purchase card payment process, we identified internal control weaknesses, including inadequate computer systems application controls, lack of supervisory review, and improper authorization of transactions. Specifically, we found that Education (1) did not use management reports available from Bank of America, Education’s contractor for government purchase cards, to monitor purchases; (2) had serious deficiencies in its process for reviewing and approving purchase card transactions; and (3) allowed employees to execute transactions beyond the scope of their authority. Inadequate control over these expenditures, combined with the inherent risk of fraud and abuse associated with purchase card purchases, gives Education employees the opportunity to make unauthorized purchases without detection.

Based on these weaknesses and information gathered from Education IG reports, we designed tests to identify potentially improper payments made with government purchase cards. As with third party drafts, we performed various automated searches of purchase card disbursement data. Specifically, we sorted the data by principal office, cardholder, vendor, and Merchant Category Code (MCC) to identify unusual transactions and patterns. We supplemented these computerized searches with manual reviews of the over 35,000 purchase card transactions. We also selected 5 months of cardholders’ statements, a total of 903 statements, to review for certain attributes, including the approving official’s signature. Of the 903 purchase cardholders’ monthly statements, totaling $4 million, that we reviewed, 338 statements, totaling about $1.8 million, were not properly approved. Because this key control (supervisory review and approval) was not operating, we requested supporting documentation for these transactions from the Department. Education has provided invoices and other support related to most of the transactions included in these monthly statements. The Department believes this support will validate these transactions. We are currently reviewing the support to confirm this assessment.

We provided Education with an additional 833 transactions, totaling about $362,000, in which the payee appeared to be an unusual vendor for the Department to be doing business with. For example, we found one instance, now being investigated by our Office of Special Investigations, in which a cardholder made several purchases from two pornographic Internet sites. The names of these sites should have aroused suspicion when they appeared on the employee’s monthly credit card bill. We also found another instance in which Education paid for an employee to take a training course completely unrelated to the activities of the Department. In addition, we gave Education a list of 124 instances, totaling about $600,000, in which it appears that cardholders may have split their purchases into multiple transactions to bypass pre-established single-purchase spending limits. Education is currently researching these transactions.
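Potentially split purchases can be screened for in much the same way as duplicate drafts. The following Python sketch flags multiple same-day transactions by one cardholder with one vendor that together exceed a single-purchase limit; the $2,500 limit and the record layout are invented for the illustration.

    # Illustrative split-purchase screen; the limit and layout are invented.
    from collections import defaultdict

    SINGLE_PURCHASE_LIMIT = 2500.00   # hypothetical single-purchase limit

    def possible_splits(transactions):
        """transactions: iterable of (cardholder, vendor, date, amount)."""
        groups = defaultdict(list)
        for cardholder, vendor, date, amount in transactions:
            groups[(cardholder, vendor, date)].append(amount)
        return {key: amounts for key, amounts in groups.items()
                if len(amounts) > 1 and sum(amounts) > SINGLE_PURCHASE_LIMIT}

    sample = [("cardholder-1", "Vendor X", "2000-03-01", 2400.00),
              ("cardholder-1", "Vendor X", "2000-03-01", 2300.00)]
    print(possible_splits(sample))  # two purchases totaling $4,700 are flagged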
In our April 2001 testimony, we also reported that individual cardholders’ monthly purchase limits were as high as $300,000. Education, in response to a letter from this subcommittee dated April 19, 2001, said the Department has taken action to improve internal controls related to the use of the government purchase card. Education has lowered the maximum monthly spending limit to $30,000, revoked some purchase cards, and lowered other cardholders’ single purchase and total monthly purchase limits. While these are important improvements, they will not prevent cardholders from continuing to split large purchases in order to circumvent single purchase limits. In addition, they do not address the issue of lax approval practices. To address these issues, Education needs to reiterate and strengthen its policy of requiring review and approval of cardholders’ monthly statements, including a review for potentially split purchases. In addition, Education should institute a mechanism to periodically monitor purchase card activity to ensure that proper review and approval is occurring and that split purchases are not. Further, since MCCs can be used effectively to prevent purchases from certain types of vendors, Education should expand its list of blocked MCCs to further help prevent improper payments.

In closing, Mr. Chairman, I want to emphasize the importance of Education management’s giving top priority to improving internal control to minimize the agency’s vulnerability to improper payments. The Secretary’s actions to establish a management improvement team to address the Department’s serious management problems, and to respond to issues related to using third party drafts and purchase cards, are important first steps. However, there are other important steps that we recommend be taken to address the Department’s control problems. The Department needs to (1) establish appropriate edit checks to identify unusual grant and loan disbursement patterns, (2) implement a formal, routine process to investigate unusual disbursement patterns identified by the edit checks, (3) reiterate to all employees established policies regarding the appropriate use of purchase cards, (4) strengthen the process of reviewing and approving purchase card transactions, focusing on identifying split purchases and other inappropriate transactions, and (5) expand the use of MCCs to block transactions with certain vendors. Further, the Department needs to continue to focus on researching and resolving the potential improper payments that we have identified thus far. This will help provide a clear picture of any fraud or abuse that has occurred. Once improper activities are identified, immediate action can be taken to terminate them. We discussed our recommendations with Department officials, and they generally concurred. We may have additional recommendations after we complete our work later this fall.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. For information about this statement, please contact Linda Calbom, Director, Financial Management and Assurance, at (202) 512-9508 or at [email protected]. Individuals making key contributions to this statement include Dan Blair, Don Campbell, Anh Dang, Bonnie Derby, David Engstrom, Bill Hamel, Kelly Lehr, Sharon Loftin, Bridgette Lennon, Diane Morris, Andy O’Connell, Russell Rowe, Peggy Smith, Brooke Whittaker, and Doris Yanger. (190024)
GAO and the Department of Education’s Office of Inspector General have issued many reports in recent years on the Department’s financial management problems, including internal control weaknesses that put the Department at risk for waste, fraud, abuse, and mismanagement. In an April 2001 assessment of the internal control over Education’s payment processes and the associated risks for improper payments, GAO identified four broad categories of internal control weaknesses: poor segregation of duties, lack of supervisory review, inadequate audit trails, and inadequate computer systems’ application controls. This testimony discusses how these weaknesses make Education vulnerable to improper payments in grant and loan payments, third party drafts, and government purchase card purchases. GAO found that Education’s student aid application processing system for grants and loans lacks automated edit checks that would identify potentially improper payments, such as payments to students who are much older than expected, a single social security number associated with two or more dates of birth, grants to recipients in excess of statutory limits, and invalid social security numbers. GAO also found problems with Education’s third party draft system. Specifically, Education (1) circumvented a system’s application control designed to avoid duplicate payments by adding a suffix to the invoice/voucher number when the system indicates that an invoice/voucher number has already been used; (2) allowed 21 of the 49 Education employees who could issue third party drafts to do so without involving anyone else; and (3) lacked adequate audit trails, such as a trigger log, to identify changes made to the list of approved vendors. GAO also found shortcomings with Education’s internal controls over government purchase cards.
Charter schools are public schools that operate under charters (or contracts) specifying the terms by which they may operate. In general, they are established under state law, charge no tuition, and are nonsectarian. State charter school laws and policies vary widely regarding the degree of autonomy the schools have, the number of charter schools that may be established, the qualifications of charter school applicants and teachers, and the accountability criteria that charter schools must meet. As of September 1997, 29 states and the District of Columbia had enacted laws authorizing charter schools, according to the Center for Education Reform. In school year 1996-97, over 100,000 students were enrolled in nearly 500 charter schools in 16 states and the District of Columbia. Most charter schools are newly created; about 33 percent were converted from existing public schools, and about 11 percent were converted from existing private schools, according to the Department of Education. Figure 1 shows the states with charter school laws as of September 1997 and the number of charter schools operating in the 1996-97 school year by state.

Both the Congress and the administration have supported charter schools. For example, in amending ESEA in 1994, the Congress established a grant program to support the design and implementation of charter schools. In addition, under the Goals 2000: Educate America Act, states may use federal funds to promote charter schools. The administration proposed doubling the roughly $50 million made available under the new ESEA charter school grant program in fiscal year 1997 to $100 million for fiscal year 1998; the Congress ultimately increased funding for the program to $80 million. Finally, in his 1997 State of the Union Address, the President called for the establishment of 3,000 charter schools nationwide by the next century.

To explore the effects of education reform efforts, in January 1997, the Congress began holding hearings in Washington, D.C., and around the country. The Congress has focused on developing charter schools, among other reform efforts. Charter school operators and others at the hearings raised concerns about charter schools’ receiving the share of federal title I and IDEA grant funds for which they are eligible. Recent research conducted by the Department of Education and the Hudson Institute raised similar concerns.

Although dozens of financial aid programs exist for public elementary and secondary schools, two programs, title I and IDEA, are by far the largest federal programs. Title I is the largest federal elementary and secondary education aid program. The Department of Education administers title I, which received about $7.4 billion in federal funding in fiscal year 1998. The program provides grants to school districts—or LEAs, as defined in federal statute and regulations—to help them educate disadvantaged children—those with low academic achievement attending schools serving high-poverty areas. Nationwide, the Department makes about $800 available on average to LEAs for each child counted in the title I allocation formula. Under title I, the federal government awards grants to LEAs through state educational agencies (SEA), which administer the grants and distribute the funds to LEAs. About 90 percent of the funds the Congress appropriates are distributed as basic grants; about 10 percent are distributed as concentration grants, awarded to LEAs serving relatively higher numbers or percentages of children from low-income families.
To receive title I funds, SEAs must submit title I plans to the Department of Education. SEAs may submit these plans to Education separately or as part of a consolidated plan incorporating several federal education programs. Title I plans must explain how a SEA will operate its title I programs and demonstrate that a state has established or is developing state content and student performance standards, as well as describe assessment systems used to measure schools’ progress in meeting state standards. Moreover, state plans must describe how the SEA will help each LEA and school affected by the title I plan develop the capacity to comply with state standards. Once the plan is approved, the SEA is eligible to receive title I funds, and the plan remains in effect as long as a state participates in the program. SEAs must periodically update plans, however, to reflect substantive changes or as required by the Department of Education.

To be eligible for the title I funds received by their state, LEAs must meet minimum poverty thresholds set by federal statutory and regulatory guidelines. To be eligible for basic grants, an LEA generally must have enrolled at least 10 children from low-income families, and low-income children must constitute more than 2 percent of its school-aged population. To be eligible for concentration grants, LEAs generally must have enrolled more than 6,500 children from low-income families, or more than 15 percent of their students must be from low-income families. (These thresholds are illustrated in the sketch following this background discussion.)

LEAs that receive title I funds and have more than one school in their district have some discretion in allocating these funds to individual schools. LEAs must rank their schools according to the percentage of children from low-income families enrolled in each school. LEAs must use the same poverty measure in ranking all their schools, but the title I statute provides four measures from which LEAs may choose. LEAs must serve, in order of poverty, their schools that have more than 75 percent of their students from low-income families. After serving these schools, LEAs may then serve additional title I-eligible schools, in order of poverty, with remaining funds. LEAs do not have to allocate the same amount per poverty student to each school in the district. LEAs must, however, allocate a higher or equal amount per poverty student to schools with higher poverty rates than they allocate to schools with lower poverty rates. (See app. II for more details about the title I program.)

The IDEA part B program is a federal grant program that helps states pay the costs of providing a free appropriate public education to all eligible children with disabilities between the ages of 3 and 21 living in the state, depending on state law or practice. The act requires, among other things, that states make such education available to all eligible children with disabilities in the least restrictive environment. The Congress appropriated approximately $4.2 billion for the program for fiscal year 1998. According to Department of Education officials, these funds are expected to provide, on average, about $639 per student for services provided to the nearly 5,951,000 eligible students aged 3 through 21, plus an additional $650 per student to provide services for approximately 575,800 eligible preschool children aged 3 through 5.
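The title I eligibility thresholds described above can be expressed as a simple test. The Python sketch below is illustrative only; it encodes the basic-grant and concentration-grant thresholds as we have described them, and it reflects our reading that concentration grants go to LEAs that also meet the basic-grant thresholds (the statute contains further conditions not shown here).

    # Illustrative title I eligibility test for an LEA, encoding the
    # statutory thresholds described above (further conditions omitted).
    def title_i_eligibility(poor_children, school_age_children):
        poverty_share = poor_children / school_age_children
        basic = poor_children >= 10 and poverty_share > 0.02
        concentration = basic and (poor_children > 6500 or poverty_share > 0.15)
        return {"basic": basic, "concentration": concentration}

    # An LEA with 120 poor children out of 1,000 (12 percent) qualifies for
    # a basic grant but not a concentration grant.
    print(title_i_eligibility(poor_children=120, school_age_children=1000))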
Under the current formula, the Department of Education annually allocates funds to SEAs on the basis of their reported numbers of eligible children receiving special education and related services for the preceding fiscal year, the national average per pupil expenditure, and the amount the Congress appropriates for the program. The maximum amount of funding that a state may receive for any fiscal year is capped at 40 percent of the national average per pupil expenditure multiplied by the number of eligible children with disabilities in the state who receive special education and related services. The IDEA Amendments of 1997 provide that each state will receive its prior fiscal year allocation when the Congress appropriates more than $4,924,672,200 for IDEA part B; 85 percent of the remaining funds will be allocated to states on the basis of each state’s relative population of children aged 3 through 21 who are the same age as children with disabilities for whom the state ensures the availability of a free appropriate public education; the remaining 15 percent of these funds will be allocated on the basis of each state’s relative population of these children living in poverty. (The sketch following this discussion illustrates the amended formula.)

To receive funds, a state must demonstrate to the satisfaction of the Secretary of Education that it has in effect policies and procedures to ensure that it meets certain specified conditions. The conditions that states must meet include, among others, the availability of a free appropriate public education to all children with disabilities living in the state. In reauthorizing IDEA, the Congress added provisions specifically for charter schools. In particular, LEAs must now demonstrate to their SEAs that they serve children with disabilities attending charter schools in the same way they serve children with disabilities in their other schools and that they provide IDEA part B funds to charter schools in the same way they do to their other schools.

Under the current formula, states must distribute at least 75 percent of the IDEA funds they receive from the Department to LEAs and may reserve the rest for state-level activities. In general, SEAs allocate IDEA funds to eligible LEAs on the basis of their relative share of the state’s total number of eligible children receiving special education and related services. When the Congress appropriates more than $4,924,672,200 for IDEA part B, allocations to LEAs are modified in the same way that allocations to states are modified under the 1997 IDEA amendments. States may allocate IDEA funds to LEAs or other agencies included in the act’s definition of LEAs. These other agencies include, for example, regional educational service agencies authorized by state law to develop, manage, and provide services or programs to LEAs. Some states allocate IDEA funds to regional educational service agencies for providing special education and related services to children with disabilities enrolled in the schools of one or more LEAs, including charter schools. Other states allocate IDEA funds directly to school districts, which then develop, manage, and provide their own such services to children with disabilities. (See app. II for more details on the IDEA program.)
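Under the amended formula, once the appropriation exceeds the $4,924,672,200 trigger, each state’s allocation can be sketched as follows in Python. The state figures are invented, and the sketch omits the act’s floors, ceilings, and other adjustments.

    # Illustrative IDEA part B state allocation under the amended formula.
    # State figures are invented; statutory floors and ceilings are omitted.
    TRIGGER = 4_924_672_200

    def allocate(appropriation, states):
        """states: dict of name -> (prior_allocation, child_population,
        child_population_in_poverty)."""
        assert appropriation > TRIGGER
        remaining = appropriation - sum(prior for prior, _, _ in states.values())
        pop_total = sum(pop for _, pop, _ in states.values())
        pov_total = sum(pov for _, _, pov in states.values())
        return {name: prior
                      + 0.85 * remaining * pop / pop_total
                      + 0.15 * remaining * pov / pov_total
                for name, (prior, pop, pov) in states.items()}

    states = {"State A": (2_000_000_000, 30_000_000, 4_000_000),
              "State B": (3_000_000_000, 20_000_000, 6_000_000)}
    print(allocate(5_500_000_000, states))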
The first approach involves states allocating title I and IDEA funds directly to charter schools; Massachusetts and Minnesota use this approach. The second approach involves states allocating title I and IDEA funds to charter schools through existing parent LEAs; California and Colorado use this approach. Charter schools, along with other public schools in the district, then receive their share of funds or services from their parent LEAs. The third approach for allocating funds to charter schools involves a mixture of the first and second approaches. In general, a charter school in a state using this approach receives federal funds directly from the SEA (and thus is treated as an LEA) if the school was chartered by a state agency, or through a parent LEA if the school was chartered by a district or substate agency. States using this model include Arizona, Michigan, and Texas. (App. III provides more details on the seven states’ federal funding allocation procedures.)

Under these three approaches, individual charter schools are generally allocated funds on the basis of their treatment as either (1) an independent LEA or school district (called the independent model) or (2) a dependent of an LEA—that is, as a public school that is part of an existing school district (called the dependent model). Throughout our report, we refer to these two models of allocating funds to charter schools as the independent or the dependent model, respectively. Under title I and IDEA, the Department of Education allocates funds to SEAs, which then allocate funds to LEAs. LEAs, in turn, allocate funds to individual schools in their districts.

To be eligible for title I funds, LEAs—including charter schools operating under the independent model—must meet the minimum statutory eligibility criteria of enrolling at least 10 children from low-income families, with these children constituting more than 2 percent of their school-aged population. LEAs that have more than one school—including charter schools operating under the dependent model—allocate title I funds to their schools. The federal statute and regulations specify complex criteria and conditions that LEAs use in deciding how to allocate funds to their schools, which results in shifting title I funds received by LEAs to individual schools with relatively higher percentages of students from low-income families. An individual school that is part of an LEA in a high-poverty area therefore might have to enroll a higher percentage of low-income children to receive title I funds than it would need if the school were treated as an independent LEA. In this case, a charter school that would have received title I funds as an independent LEA may not receive title I funds under the dependent model because other schools in the LEA might serve higher percentages of low-income children.

The benefits that individual schools may receive from IDEA funds vary by state. Two states in our survey—California and Michigan—allocate IDEA funds to regional educational service agencies. In California, children with disabilities enrolled in charter schools receive special education services through the state’s regional agencies, known as “special education local plan areas.” Michigan’s regional educational agencies may help charter schools by providing special education services to children with disabilities enrolled in the charter school or by providing funds to reimburse charter schools for eligible expenses. Other states in our survey operated somewhat differently.
For example, Colorado allocates IDEA funds to LEAs. Charter schools in that state negotiate individually with their parent LEAs the terms under which the school will receive IDEA funds or special education and related services for children with disabilities enrolled in the school. Arizona, Massachusetts, Minnesota, and Texas, on the other hand, allocate IDEA funds directly to charter schools in those cases where the states consider charter schools independent LEAs. In Arizona and Texas, charter schools considered dependent members of a parent LEA receive IDEA funds or special education services through the parent LEA.

Although no centralized repository of data exists for determining the extent to which charter schools have received federal funds nationwide, our research suggests that charter schools in the seven states we surveyed have not been systematically denied access to title I and IDEA funds. Despite the concerns about funding issues raised during the 1997 congressional hearings and in studies conducted by the Hudson Institute, our survey revealed that most charter school operators who applied for title I and IDEA funds received them. Moreover, most charter school operators who expressed an opinion told us that they believed that these federal funds are fairly allocated to charter schools.

Overall, about two-fifths of the charter schools we surveyed received title I funds for the 1996-97 school year. Of our survey respondents, slightly more than one-third of charter schools operating under the independent model and almost one-half of the schools operating under the dependent model received title I funds. Table 1 shows the number of charter schools surveyed that received title I funds by funding model. About two-fifths of the charter schools we surveyed did not apply for title I funds. Charter school officials who did not apply cited reasons such as (1) a lack of time to do so, (2) their school was ineligible for funds and therefore did not apply, or (3) they found that applying for these funds would cost more than the funding would provide. Of those schools that applied for title I funds, two-thirds, or 16 of 25, reported receiving funds. Title I funding for these schools ranged from $96 to $941 per poverty student; the average school value was $466 per poverty student, and the median value was $413. The difference in per student funding relates to the allocation formulas, which account for the number and proportion of low-income children in the school, district, and county. Title I funds received by these schools represented between 0.5 and 10.0 percent of their total operating budgets. For all but four of these schools, funds received represented 5 percent or less of the schools’ total operating budgets.

Regarding the IDEA program, slightly more than one-half of our survey respondents received funds or IDEA-funded special education services. Of all charter schools surveyed, two-fifths operating under the independent model received funds or IDEA-funded special education services; three-quarters of those operating under the dependent model received funds or services. Table 2 shows the number of charter schools surveyed receiving IDEA funds or IDEA-funded special education services by funding model. Overall, about a third of the charter schools we surveyed did not apply for IDEA funds or IDEA-funded special education services.
Charter school officials who did not apply cited reasons similar to those given for title I funds, such as (1) a lack of time, (2) ineligibility for funds, (3) a lack of knowledge about the availability of IDEA funds, or (4) a determination that applying for these funds would cost more than the funding would provide. Four-fifths of the charter school officials who told us that they applied for IDEA funds or IDEA-funded special education services reported that they received funds or services for the 1996-97 school year. For schools that obtained IDEA funds, rather than services, amounts received ranged from $30 to $1,208 per eligible student; the average was $421 per eligible student, and the median was $206. IDEA funds received by schools represented between 0.08 and 2.50 percent of their total operating budgets. Regardless of funding model, two-thirds of the charter school operators expressing an opinion believed that they received a fair share of title I and IDEA funding. About one-fifth of the charter school operators we surveyed had no opinion or did not answer the question. (See tables 3 and 4.) Regarding IDEA funding or IDEA-funded special education services, however, about as many survey respondents under the independent funding model believed that they received a fair share as believed otherwise. For charter schools under the dependent model, on the other hand, about four times as many survey respondents believed that their schools received a fair share of IDEA funds or IDEA-funded special education services as believed otherwise. (See table 4.) On the basis of our interviews with the charter school operators we surveyed, charter schools do not appear to be disadvantaged in accessing federal funds. Nonetheless, these operators, state officials, and technical assistance providers, as well as studies conducted by the Hudson Institute and others, have identified barriers that have hindered charter schools in accessing title I and IDEA funds. Reported barriers include (1) difficulties in establishing program eligibility, (2) workload demands that prevented schools from pursuing program funds or made doing so too costly, (3) charter school operators' and district and state administrators' lack of program and administrative experience, and (4) ineffective working relationships with state or local program administrators. Charter school officials we spoke with reported barriers to establishing their eligibility for federal funds, especially regarding the title I program. A variety of factors caused these barriers, including (1) a lack of prior year's enrollment data, (2) problems collecting student eligibility data, and (3) the timing of a school's charter issuance relative to deadlines for submitting student eligibility and enrollment data. These barriers particularly troubled newly created charter schools. Charter schools converted from traditional public schools generally did not have these problems when current enrollment was at or near full capacity and title I eligibility had already been established.
In its July 1997 report, the Hudson Institute noted that states typically allotted title I funds to schools on the basis of the previous year's enrollment of title I-eligible students, which resulted in "leaving start-up charters completely stranded for their first year." In our survey of charter school officials, three officials told us that because they had no prior year's enrollment or student eligibility data, state guidelines made their schools ineligible for federal funds. Two of the three respondents that had this problem were officials of newly created schools; the third respondent represented a charter school that had been converted from a private institution. Department of Education officials told us that they believe most of the problems "start-up" charter schools had in accessing federal program funds were due to not having such enrollment data to submit to state officials. Other start-up eligibility problems also presented barriers to schools. For example, some officials noted that their schools are incrementally increasing the number of grades served as the original student body progresses. One school official told us that, while the school now serves grades 9 and 10, it will eventually serve grades 9 through 12. In addition, officials we surveyed at other schools were implementing a similar growth strategy. In these cases, a 1-year lag in reported enrollment data—reflecting past rather than current enrollment—may significantly affect the amount of federal funding for which a school may be eligible. For example, one charter school official we spoke with told us that next year she will receive title I funds on the basis of this year's enrollment of about 100 students. She anticipates, however, that enrollment will increase almost 50 percent next year and that the school will be eligible for additional title I funding for about 40 newly enrolled students. But because of the time lag in reporting data, the school will have to wait until the following year for the additional funds. Over time, as enrollment stabilizes, these problems will lessen. In addition, charter school officials reported difficulty in collecting student eligibility data required to receive title I funds. In some states, school officials must collect data on students' family incomes to establish eligibility for federal funds. Some officials told us that because of privacy concerns, some families hesitate to return surveys sent home with students that ask for household income levels. One official told us that he believed parents may not understand that such data are used to establish the school's eligibility for federal grant funds. In another case, a charter school official told us that verifying student eligibility data was a barrier in accessing funds because the process was time consuming. In this case, charter school officials had to manually match their student enrollment records with state and local Temporary Assistance for Needy Families (TANF) records to verify student eligibility. The business administrator for the school told us that it took him and another staff person approximately 2 full days to manually match the records for the approximately 1,000 students enrolled in his charter school. Another charter school official told us that timing issues prevented her from accessing federal funds: She said that her school's charter was approved after the deadline had passed for the state allocation of title I funds to LEAs.
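The TANF verification described above amounts to matching two sets of records. The following minimal sketch, using hypothetical record layouts and names, illustrates the kind of match the school performed by hand.

```python
# A minimal sketch of the eligibility verification described above:
# matching a school's enrollment records against TANF records to count
# title I-eligible students. The records and names are hypothetical.

enrollment = [
    {"name": "Ana Diaz", "dob": "1986-04-02"},
    {"name": "Ben Lee", "dob": "1987-11-19"},
    {"name": "Cara Roy", "dob": "1985-07-30"},
]

# TANF records keyed by (name, date of birth) for exact matching.
tanf_records = {("Ana Diaz", "1986-04-02"), ("Cara Roy", "1985-07-30")}

eligible = [s for s in enrollment if (s["name"], s["dob"]) in tanf_records]
print(f"{len(eligible)} of {len(enrollment)} students matched TANF records")
```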
Even though researchers at the Hudson Institute and the Department of Education found that financing issues were a significant concern for charter schools, several charter school officials told us that the time and cost involved in accessing federal funds and complying with program requirements exceeded the benefits that could be obtained; therefore, they did not pursue these funds. Regarding federal funds, the Hudson Institute noted, "schools themselves are seldom equipped—in human terms—to maximize their aid." In our survey of charter schools, several school officials emphasized that they had little time and few resources to devote to accessing title I and IDEA funds given their other administrative and educational responsibilities. These officials often played multiple roles at their schools, including principal, office manager, nurse, and janitor. One operator told us that even if all he had to do was to sign on a dotted line and stuff an envelope, he would not have time to do so. Another said that if she receives anything in the mail with the words "title I" on it, she throws it away because she has so little time to attend to such matters. Although a majority of the charter school operators who expressed an opinion in our survey believed that the title I and IDEA application processes were only somewhat or not at all difficult, some operators told us that, nonetheless, it was not worth their while to pursue these funds. Two operators, for example, told us that the amount of title I funds their schools would be eligible for was simply not worth the effort to obtain them. In addition, charter school officials in four states told us that IDEA program requirements were cumbersome and involved too much paperwork. Technical assistance providers and consultants who had worked with charter schools told us that charter school operators are often dedicated educators but lack business and administrative experience in general or experience with federal categorical programs in particular. They told us that such inexperience may discourage operators from pursuing federal funding available to their schools. In addition, according to the Hudson Institute, charter school operators were often unaccustomed to the business and administrative aspects of running a charter school and to filling out forms for state and federal categorical programs. Moreover, according to state and district officials, because charter schools represent new and additional responsibilities for the districts and state agencies that oversee and administer federal programs, developing policies and procedures to accommodate charter schools has taken time; such policies and procedures therefore may not have been in place when charter schools were first authorized. In our interviews with charter school operators, some cited their lack of experience with the title I and IDEA programs as a barrier to accessing these funds. One operator told us that she did not know that IDEA funds may be available to her school to help pay for the costs of educating the school's students with disabilities. According to another operator, although the state had mailed her information and application materials for the title I program, the amount of information was overwhelming and appeared designed for large, traditional school districts and thus discouraged her from reviewing the materials and applying for funds.
She told us that, eventually, her school accessed these funds because a friend who operated his own charter school convinced her that she was forgoing a significant amount of funding. Other operators we spoke with found title I and IDEA application procedures difficult but said that, having completed the process once, they expected to encounter fewer difficulties when they applied for such funding again. One of our site visits revealed that a lack of established allocation policies and procedures created barriers for charter schools. For example, the business administrator at a charter school we visited told us that accessing funds required many visits and phone calls to district officials to understand the allocation processes and procedures as well as to negotiate a fair share of federal funding for his school. According to district officials we spoke with, because their school district had approved and issued several charters to individual schools with varying degrees of fiscal autonomy, working out allocation issues has taken some time. These officials noted that they have limited time and resources for developing new policies and procedures for charter schools, especially because the number of charter schools and their student populations constitute a small portion of their overall operations. In addition, some state officials said that charter schools presented them with new administrative responsibilities and that they had to reexamine title I laws and regulations to determine the extent of their administrative flexibility under the program. According to one state official, for example, she was uncertain whether a state could reserve title I funds from her state's allotment specifically to provide funding for charter schools during their first year of operation. An education official in Arizona told us that because most charter schools in that state are considered independent districts, the state education department's workload has significantly increased. He noted that for over 50 years, the department had worked with about 200 traditional school districts. Now that Arizona has authorized about 200 charter schools, the department is essentially working with over 400 school districts. Therefore, the department has had to change its focus, which this official called "conceptually challenging." The department is now spending proportionally more time with charter schools than with traditional school districts, according to this official. In adapting to these changes, the department has revised its policies as it has gained experience with charter schools. As a result, he said, state application and allocation procedures for charter schools differ from procedures used only 1 year ago. Schools operating under the dependent funding model may face more barriers than schools operating under the independent funding model because the former must go through an intermediary—their school district—to access federal funds rather than receive funds directly from the state. According to one charter school operator, her school's parent LEA unfairly used its discretion in allocating funds to schools in its district. She said that all of the district's federal title I funding went to one school.
Even though state officials told her that it was within the LEA's discretion to allocate funds the way it did, she believes that her charter school and other district public schools eligible for funds should have shared in at least some of the funding. According to another charter school operator, uncooperative district officials hindered her school's access to federal funds because they provided no assistance in obtaining funding for her school. In the Department of Education's 1997 report on charter schools, the Department found that charter schools' relationships with local district staff, local boards, and state boards or departments varied widely. The Department noted that in conducting field visits to charter schools, it found examples of local district boards or superintendents playing an active role in initiating and supporting the development of charter schools. In other cases, however, it found that local district staff or boards resisted charter schools, and the school developers often had to face intense or hostile discussions and negotiations. In some of these cases, according to the study, the relationship between the school and the district has remained sour; in others, such differences have dissipated over time. Charter school operators reported that outreach and technical assistance were key factors that helped them access federal funds. In addition, according to other operators, state and local program officials' flexibility helped them access funds. Other factors cited by school officials include the use of consolidated program applications, the use of computerized application forms and processes, and the ability to rely on sponsoring district offices for grants administration. The factor charter school officials most often cited as helping them access title I and IDEA funds was receiving information about the availability of federal funds and the amounts for which their schools would be eligible. Officials cited several sources from which they had obtained such information, including their own states' departments of education and local school district officials. Receiving information about federal programs addresses the lack of awareness cited by some operators as a barrier. Moreover, receiving information on the possible funding amount for which a charter school may qualify enables operators to make better judgments about whether pursuing such funding is worth their time and effort and enables them to better prioritize their administrative responsibilities. Charter school officials also credited training and technical assistance provided by states, school districts, and consultants with helping them access federal funds. Charter school operators in Arizona were particularly pleased with the amount and availability of assistance that the state's department of education offered them. They noted that the state informed them of funding opportunities and offered them technical assistance on many occasions. According to another survey respondent, being able to rely on his charter school's parent LEA for federal grants administration relieved him of having to apply for and administer the grant funds, which helped his school access these funds. Finally, some respondents told us that their schools employed consultants to help in applying for federal and state funds, which enabled them to focus their time and effort elsewhere. A respondent in another state cited the use of consolidated applications as helpful in accessing funds.
As discussed earlier, SEAs may submit consolidated applications for several federal education programs. In turn, SEAs may also allow LEAs to submit one application for these same programs. One respondent told us that her state's use of technology helped her access federal funds: Her state used the Internet to allow schools to obtain and submit title I applications. Many of the factors that helped or hindered charter schools in accessing federal funds, according to our work, had no relation to whether schools received their funds directly from the state or indirectly through a parent school district. For example, both independent and dependent charter schools can have difficulty demonstrating title I eligibility. Dependent charter schools required to submit student eligibility data to their parent LEAs may find it just as difficult to collect such data as independent charter schools, which must submit the same data to SEAs. Similarly, both independent and dependent charter school operators we interviewed frequently cited a lack of time and inexperience with administrative program requirements as barriers to accessing funds. One factor, however, that hindered dependent but not independent charter schools in accessing funds was the working relationship between a charter school and its sponsoring district. Because LEAs have some discretion in allocating title I funds to schools in their districts, an ineffective working relationship can hinder the allocation of funds to dependent charter schools. In addition, factors that helped charter schools access funds had no relation to the path that funding took. Officials of both independent and dependent charter schools said that notification of program eligibility helped them access funds. Although officials of both types of schools also said that training and technical assistance provided by states, local districts, or consultants were helpful, independent charter school operators said so more frequently. On the other hand, several charter school operators in California and Colorado—states that consider most charter schools dependent members of existing school districts—reported that receiving IDEA-funded special education services, rather than funds, from their local school districts helped them access federal funds. Several states and the Department of Education have begun initiatives to help charter schools access federal funds. Some states, for example, are revising or developing alternative allocation policies and procedures to better accommodate charter schools' access to federal funds and are providing training and technical assistance to charter school operators. The Department of Education recently issued guidance to states and school districts about allocations of title I funds to charter schools. The Department is also using funds provided to it under the ESEA Public Charter School Grant Program to study and support the establishment of charter schools. Some states in our review had developed or were devising strategies to support charter schools as part of their overall education reform efforts. Chief among these strategies were efforts to reduce the barriers charter schools face in demonstrating their eligibility for federal programs and to address schools' inexperience with federal programs by offering training and technical assistance. Some states had used their administrative flexibility under the title I program to develop creative solutions to overcome some charter schools' barriers to accessing federal funds.
Some states, for example, have decided to allow charter schools to use comparable—and more easily obtainable—data to establish the income levels of students' families. One state has developed a proxy for estimating the number of title I-eligible students attending charter schools. This has allowed newly created charter schools in the state to demonstrate eligibility for title I funds without having a prior year's enrollment history. Once these charter schools have established eligibility for title I funds, states have provided funds to these schools in their first year of operation. To do so, states have used their own title I administrative reserve funds and funds available to the SEA for reallocation to LEAs. In some cases, states have continually refined their allocation policies and procedures as the states and charter schools have gained more experience. For example, Arizona officials reported they have significantly changed their state's title I allocation procedures for the third time in as many years to better accommodate charter schools in distributing federal funds. According to state officials, their policies and procedures have been evolving as the number of charter schools in the state has increased and as the state and charter schools have gained administrative experience. In developing their most recent allocation policies and procedures, Arizona officials reported they used the state's "title I committee of practitioners." This committee, required by federal statute, advises the state and reviews proposed or final state title I rules or regulations. By law, these committees consist of school district officials, administrators, teachers, parents, board of education members, pupil services personnel, and representatives of private school children. According to Arizona education officials, they added charter school representatives (a charter school teacher as well as a parent of a charter school student) to their state committee. The committee spent 6 to 8 months developing and considering alternative methods for allocating title I funds to charter schools before deciding on the current procedures. As a result, state officials said, they believe the state has developed a better approach to allocating title I funds to charter schools. These officials reported that adding charter school representation to the title I committee of practitioners was not only important for ensuring charter schools' fair consideration in developing allocation procedures, but also underscored the state's commitment to charter schools as a part of its overall education reform efforts. Of the seven states in our review, only Arizona had added charter school representation to its state title I committee of practitioners. Officials in other states in our review acknowledged that adding charter school representation to title I committees of practitioners was a practical approach for ensuring that charter schools' needs were considered when developing or changing state regulations and procedures. Under the IDEA program, state advisory panels serve purposes similar to those of title I committees of practitioners. In reauthorizing the program, the Congress required that states include charter school representatives on these panels. The title I program has no similar requirement.
Besides developing alternative allocation policies and procedures, some states have actively sought to inform charter school operators about their possible eligibility for federal funding and have provided them with training and technical assistance in applying for and administering federal funds. For example, Minnesota and California officials reported they send the same information to charter school officials as to officials of traditional school districts. In addition, Colorado officials have developed guidance for charter school officials and have posted it on the Internet. Arizona officials have developed cross-programmatic teams of state department officials and assigned specific charter schools to each of the teams. In doing so, the state has provided charter schools with a single point of contact for obtaining information about and technical assistance for all federal and state programs. Although our study was not designed to compare states, Arizona appeared to be making the most comprehensive effort to help charter schools access federal funds. (Arizona also has, by far, more charter schools than any other state.) Arizona state officials attributed the overwhelmingly positive responses we received from charter school officials there to the state's extensive planning efforts and the technical assistance they provide. Arizona officials noted that planning was a difficult and time-consuming process yet crucial in carrying out the state's education reform initiatives. In its title I application to the Department of Education, Arizona recognized that charter schools would require such training and technical assistance if all schoolchildren in the state were expected to attain the state's academic standards and goals. In addition, other state officials recognized that charter schools require training and technical assistance to, among other things, access federal funding. A Massachusetts official told us that because charter schools there are brand-new school districts, most operators would need help in applying for funding and complying with program requirements. Although the state did not address its strategy for helping charter schools in its title I application to Education, this official reported that doing so would be appropriate because charter schools typically have little experience with federal programs. According to a Colorado charter school official, state title I applications and plans could also help charter schools access federal funds, even though charter schools in Colorado are authorized by and receive funding through traditional school districts. He noted that a state could demonstrate its commitment to charter schools as part of its overall education reform initiatives within its plan. By doing so, he said, the state would build the expectation that districts authorizing charter schools would serve eligible students enrolled in charter schools with available federal resources. He believed that such an effort would effectively address the barriers charter schools face because of ineffective working relationships with district officials. The Department of Education does not currently require states to address in their plans the strategies they have developed to ensure that eligible students enrolled in charter schools are served by federal program resources.
In providing guidance to states in preparing their title I applications and plans, however, the Department told states that their plans could provide a framework for demonstrating the use of federal program resources within the context of states' school reform initiatives. In addition, the Department noted that state plans should provide information on serving children intended to benefit from federal programs. During our study, the Department of Education developed guidance for states and LEAs on allocating title I funds to charter schools. This guidance was completed and published in March 1998. The guidance clarifies that SEAs and LEAs must take all reasonable steps to ensure that charter schools receive their full title I allocations. The guidance strongly encourages SEAs and LEAs to be appropriately flexible in accommodating charter schools by, among other things, (1) allowing more convenient times for collecting eligibility data, (2) allowing substitution of comparable poverty measures when appropriate, and (3) using available reallocation funds to serve new charter schools unable to demonstrate eligibility in time for initial funding allocations. In creating the Public Charter School Grant program under ESEA, the Congress provided funding to the Department of Education for financial assistance for designing and initially implementing public charter schools nationwide and for evaluating the effects of such schools, including their effects on students, student achievement, staff, and parents. Under the national activities provision of the statute, the Department may reserve up to 10 percent of the funds appropriated in any fiscal year for (1) peer review of applications for funding, (2) an evaluation of charter schools' impact on student achievement, and (3) other activities designed to enhance the success of federal program activities. According to Education officials, the Department has organized its national activities into three broad areas: (1) engaging the public, (2) research and development, and (3) outreach. The Department is engaging the public by sponsoring national, state, and regional meetings to improve public awareness of charter schools. In November 1997, for example, the Department of Education sponsored a national conference for charter schools in Washington, D.C. The Department invited state officials and charter school operators from across the country and conducted many workshops on topics including federal categorical education grant programs, new requirements under the recently reauthorized IDEA, and the development and implementation of charter schools. The Department also funded the development of an Internet web site with general information on federal programs and charter school operational issues, a charter school resource directory, and profiles of states that have authorized charter schools as well as of individual charter schools. As already noted, the Department published in May 1997 the first-year results of its 4-year study of charter schools. As currently planned, the 4-year study will include an annual survey of all charter schools, a longitudinal study of a stratified random sample of 72 charter schools, and information collected from site visits and testing at 28 matched comparison schools.
The Department is also conducting a charter school teacher fellowship program and three targeted research studies of charter schools involving (1) the education of children with disabilities, (2) school finance, and (3) assessment and accountability issues. The Department's community outreach efforts include developing models for charter school operator leadership training programs, fostering cooperative relationships between charter schools and other public schools, and involving community organizations in operating charter schools. Barriers that charter schools face in accessing federal funds appear to have no relation to charter schools' treatment as school districts or as members of school districts. Rather, other barriers, many of which have no relation to the path federal funds take, have more significantly affected charter schools' ability to access title I and IDEA funds. These barriers include state systems that base funding allocations on the prior year's enrollment and student eligibility data, the costs of accessing funds compared with the amounts that schools would receive, and the significant time constraints that prevent charter school operators from pursuing funds. Despite these barriers, most charter school operators who expressed an opinion believe that title I and IDEA funds are fairly allocated to charter schools. Although a variety of factors help charter schools access federal funds, according to our review, training and technical assistance appear to be critical to ensuring that charter school operators can access these funds. To this end, effective state and district planning would help ensure that federal program resources are directed to eligible students enrolled in charter schools. In addition, involving charter school operators or representatives in such planning efforts would provide additional assurance that charter schools and their students are appropriately considered. We recommend that the Secretary of Education direct states to include in their title I plans information on the strategies, activities, and resources that the SEAs will use to ensure that title I program resources serve eligible charter school students. We further recommend that the Secretary take the steps necessary to direct states to include charter school representation on states' title I committees of practitioners that advise states on implementing their title I program responsibilities. The Department of Education provided written comments on a draft of our report. (See app. IV.) The Department noted that our report helps to allay concerns about charter schools being systematically denied the opportunity to receive title I and IDEA funds. The Department also noted that in addition to its efforts discussed in our report, it is developing a "Charter School Operators' Guide to the Department of Education" to provide charter school operators with information on its programs. The Department also commented that it has stressed the importance of involving charter schools in federal programs in its meetings with state, local, and school-level administrators and that it provides technical assistance to charter school operators, school districts, and states. In addition, the Department noted other efforts it has made to help charter schools access federal funds.
Regarding our recommendation that the Secretary direct states to address charter schools in their title I plans, the Department noted that it will include this requirement in its instructions to states for title I or other program or consolidated state plans when appropriate. Regarding the recommendation in our draft report that the Secretary direct states to include charter school representation on states' title I committees of practitioners, the Department noted that while it strongly encourages states to include charter school representatives on these committees, it lacks the legal authority to require states to do so. We revised our recommendation to include the Secretary's taking any additional steps that may be necessary to implement it. The Department also provided editorial and technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days after its issue date. At that time, we will send copies of this report to the appropriate House and Senate committees, the Secretary of Education, and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please call me at (202) 512-7014 or Jeff Appel, senior evaluator, at (202) 512-9915. This report was prepared under the direction of Harriet C. Ganson, Assistant Director. Other major contributors to this report are listed in appendix V. Charter schools were also operating in Alaska, Delaware, the District of Columbia, Florida, Georgia, Hawaii, Illinois, Louisiana, New Mexico, and Wisconsin during the 1996-97 school year. This appendix augments the report's information on statutory, application, and allocation requirements for both the title I and Individuals With Disabilities Education Act (IDEA) programs. Title I part A, the largest federal aid program for public elementary and secondary schools, provides funds to local educational agencies (LEA) through the states to enable schools to improve the academic achievement of eligible children either by providing additional or more intensive instruction or by upgrading the entire instructional program of the school. The federal government awards grants to state educational agencies (SEA), which administer and distribute these funds to LEAs. The statute authorizes three types of grants: basic grants, concentration grants, and targeted grants. Most LEAs nationwide receive basic grants; fewer LEAs receive concentration grants, which go to LEAs with high numbers or percentages of children from low-income families. The Department of Education determines title I part A allocations for each county in the country through a statutory formula based primarily on (1) the number of children aged 5 through 17 from low-income families using updated census poverty counts, (2) state per pupil public expenditures, and (3) the amount appropriated in a given fiscal year. Under the statute, 10 percent of title I LEA appropriations are distributed as concentration grants and the remainder as basic grants. In 1994, the Congress amended title I through the Improving America's Schools Act to provide for targeted assistance grants. These grants would allocate more funds to LEAs with either more poor children or a greater percentage of such children. If the Congress appropriates funds for these grants in the future, eligible LEAs will receive the funds.
Although the 1994 law stipulates that future title I funds appropriated in excess of those for fiscal year 1995 are to be distributed as targeted assistance grants, appropriation provisions in fiscal years 1996, 1997, and 1998 have overridden this stipulation. Thus, no targeted assistance grants have yet been distributed. The Department of Education allocated title I funds for school year 1997-98 to counties using updated poverty estimates provided by the Bureau of the Census. Each county's allocation is determined by multiplying the number of children counted in the formula by 40 percent of the respective state's per pupil education expenditure, with the results then ratably reduced so that total allocations match the amount appropriated. LEAs with high numbers or percentages of low-income students receive additional funds as concentration grants. Generally, awards to states cover July 1 to September 30 of the following year. These funds remain available at the state and local level for an additional fiscal year for obligation and expenditure. An SEA may reserve up to 1 percent or $400,000 (whichever is greater) from the state's title I part A and certain other title I allocations for administration. In addition, an SEA must reserve 0.5 percent or at least $200,000 of these funds to carry out school improvement activities, including providing technical assistance, incentives, and other strategies to help title I schools and LEAs meet state education standards. The rest of the funding goes to LEAs. Under the statute, the SEA suballocates county aggregate amounts determined by the Department of Education for basic and concentration grants (after adjusting for funds reserved for state administration and school improvement activities) to eligible LEAs in each county on the basis of their number of formula children. In states where the counties and LEAs are the same, the SEA adjusts the county allocation by reserving funds for administration and school improvement. In states where many LEAs overlap county boundaries, an SEA may apply to the Department for permission to allocate its total state basic grant allocation directly to LEAs regardless of individual county allocations. (Concentration grants do not have this provision.) Formula children include (1) children aged 5 through 17 from low-income families and (2) children who live in local institutions for neglected children. Under the title I regulations, in determining the number of children from low-income families, an SEA must use the best available data on the number of such children and the same measure of low income statewide for basic and concentration grants. The SEA has broad discretion in choosing the poverty data it will use for determining LEA eligibility and for allocating funds. The poverty data used must further the purposes of title I part A by directing funds to high-poverty areas. An eligible LEA receives basic and concentration grant funds on the basis of its relative share of its county's total formula population.
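The following sketch, using entirely hypothetical county figures and appropriation amount, illustrates the county-level arithmetic just described: each county's preliminary amount is its formula-child count multiplied by 40 percent of the state's per pupil expenditure, with all amounts then ratably reduced to fit the appropriation. It illustrates only this arithmetic, not the Department's full computation.

```python
# Illustrative sketch of the title I basic grant county allocation
# described above. All county figures and the appropriation amount are
# hypothetical.

counties = {
    # county: (formula children, state per pupil expenditure)
    "County A": (12_000, 6_000.0),
    "County B": (3_500, 6_000.0),
    "County C": (800, 5_400.0),
}
appropriation = 30_000_000.0

# Step 1: each county's preliminary entitlement.
preliminary = {
    county: children * 0.40 * per_pupil
    for county, (children, per_pupil) in counties.items()
}

# Step 2: ratable reduction so that total allocations match the
# amount appropriated.
ratio = min(1.0, appropriation / sum(preliminary.values()))
for county, amount in preliminary.items():
    print(f"{county}: ${amount * ratio:,.0f}")
```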
The statute guarantees that an LEA eligible for basic grants receives a "hold-harmless" or minimum amount based on a percentage of the amount allocated to it in the preceding year. Beginning in school year 1997-98, the hold-harmless amount to which each LEA is entitled varies according to what percentage its formula count is of its school-aged population. LEAs in which formula children make up 30 percent or more of the total 5- through 17-year-old population receive 95 percent of their prior year allocations; LEAs in which formula children make up between 15 and 30 percent of the school-aged population receive at least 90 percent of their prior year allocations; and LEAs in which formula children make up less than 15 percent receive 85 percent of their prior year allocations. Concentration grants in school year 1997-98 have no hold-harmless provisions. LEA officials have discretion in allocating title I funds to individual schools in their districts. Within LEAs, officials target funds to schools with the greatest percentages of poor children. Although SEAs allocate basic and concentration grants to LEAs through different formulas, school districts combine these funds for use as a single program. An LEA must first rank individual schools by poverty, using the same poverty measure for all schools. Allowable measures include (1) children aged 5 to 17 in poverty counted in the most recent census data approved by the Secretary of Education, (2) children eligible for free and reduced-price lunches under the National School Lunch Act, (3) children in families receiving assistance under Temporary Assistance for Needy Families, (4) children eligible to receive medical assistance under the Medicaid program, or (5) a composite of any of the above measures. LEA officials must rank schools on the basis of the percentage (not the number) of low-income children counted. All schools ranking above 75 percent must be served in order of poverty. After serving these schools, the LEA may serve lower ranked title I-eligible schools. The LEA may continue distributing funds using the districtwide ranking for all schools or rank the remaining areas by grade span groupings. If an LEA has no areas ranking above 75 percent, it may rank all schools by grade span. To the extent that it has schools overlapping grade spans, the LEA may include a school in the grade span in which it is most appropriate. An LEA may designate as eligible any school in which at least 35 percent of the children are from low-income families. It may use part A funds in a school that does not serve an eligible school attendance area if the percentage of children from low-income families enrolled in the school is equal to or greater than the percentage of such children in a participating school attendance area of the LEA. If remaining funds are not sufficient to fully fund the next ranked eligible school, the LEA may distribute these funds to the school if the LEA believes the amount will be sufficient to have an impact. An LEA with an enrollment of less than 1,000 students or with only one school per grade span does not have to allocate funds to areas or schools in rank order. If an LEA serves any areas or schools below a 35-percent poverty ranking, the LEA must allocate to all its participating schools or areas an amount per low-income child that is at least 125 percent of the LEA's allocation per low-income child.
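The following minimal sketch, with hypothetical names and figures, illustrates two of the rules just described: the hold-harmless percentage tiers and the requirement that an LEA rank schools by the percentage of low-income children, serving all schools above 75 percent poverty in rank order before other eligible schools (those at 35 percent or more).

```python
# Illustrative sketch of the hold-harmless tiers and within-LEA school
# ranking described above. All names and figures are hypothetical.

def hold_harmless_floor(prior_allocation: float,
                        formula_children: int,
                        school_age_population: int) -> float:
    """Minimum basic grant guaranteed to an LEA under the 1997-98 tiers."""
    share = formula_children / school_age_population
    if share >= 0.30:
        return 0.95 * prior_allocation
    if share >= 0.15:
        return 0.90 * prior_allocation
    return 0.85 * prior_allocation

def schools_in_serving_order(poverty_pct: dict[str, float]) -> list[str]:
    """Rank schools by poverty percentage: schools above 75 percent must
    be served first, followed by other eligible (35 percent or higher)
    schools."""
    ranked = sorted(poverty_pct, key=poverty_pct.get, reverse=True)
    must_serve = [s for s in ranked if poverty_pct[s] > 0.75]
    may_serve = [s for s in ranked if 0.35 <= poverty_pct[s] <= 0.75]
    return must_serve + may_serve

# An LEA with 400 formula children among 2,000 school-aged children
# (a 20 percent share) is guaranteed 90 percent of last year's grant.
print(hold_harmless_floor(250_000.0, 400, 2_000))  # 225000.0

# A hypothetical district with four schools, one of them a charter school.
poverty = {"School A": 0.80, "Charter School B": 0.50,
           "School C": 0.40, "School D": 0.20}
print(schools_in_serving_order(poverty))
# ['School A', 'Charter School B', 'School C']; School D is below
# 35 percent and is not designated eligible here.
```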
To receive title I funds, an SEA must submit a state plan to the Department of Education for approval. Once approved, this plan remains in effect for as long as a state participates in title I part A, but the plan must be updated to reflect substantive changes. An SEA may choose to submit the plan separately or as part of a consolidated plan incorporating many of its federal education programs. A consolidated plan provides required information on the state's management of federal programs and how state and local reform efforts will serve the children intended to benefit from these programs. The consolidated plan is to provide a specific framework for determining how federal program resources, along with state and local resources, will be used in the context of the state's own school reform plan and other reform initiatives. The consolidated state plan is intended to help the state focus on coordinating and integrating different programs as well as state and local activities to improve the academic achievement of all children. In addition, each state is expected to establish and maintain a state committee of title I practitioners required to be substantially involved in developing the state plan. The committee advises the state on the education of its disadvantaged children and on proposed state rules or regulations regarding title I. The committee consists of LEA representatives, title I administrators, teachers, parents, members of local boards of education, representatives of private school children, and pupil services personnel. Although charter school representatives may serve on the committee, no statute requires that they be included. Although no specific federal statute requires individual schools to file plans or apply for title I part A funds, LEAs must have on file with their SEAs an approved plan that includes descriptions of the general services to be provided; coordination activities with the LEAs' regular programs of instruction; additional LEA assessments, if any, used to gauge program outcomes; and strategies to be used for providing professional development. States vary widely regarding requirements for plans. If the SEA plan for title I part A is part of a consolidated plan, the state may require LEAs to submit their title I part A plan as part of a consolidated application to the state. IDEA part B authorizes formula grants to states to help them make a free appropriate public education available to children with disabilities. Such children are those identified as having one or more physical or mental disabilities, ranging from hearing impairments to learning disabilities, who, because of these disabilities, need special education and related services. Under the current formula, Education allocates funds to SEAs annually on the basis of their reported number of eligible children receiving special education and related services for the preceding fiscal year, the national average per pupil expenditure, and the amount appropriated by the Congress for the program. Under this formula, states must distribute at least 75 percent of the IDEA funds they receive from the Department to LEAs and may reserve the rest for state-level activities. In general, SEAs allocate IDEA funds to eligible LEAs on the basis of their relative share of their state's total number of eligible children receiving special education and related services.
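The following minimal sketch, with hypothetical figures, illustrates the state-level IDEA distribution just described: the state reserves at most 25 percent for state-level activities and allocates the remainder to LEAs in proportion to each LEA's share of the state's count of eligible children.

```python
# Illustrative sketch of the IDEA part B pass-through described above.
# The grant amount, reservation rate, and child counts are hypothetical.

state_grant = 10_000_000.0
state_reserve_rate = 0.25  # states must pass at least 75 percent to LEAs

lea_child_counts = {"LEA 1": 1_200, "LEA 2": 300, "Charter LEA": 25}

lea_pool = state_grant * (1.0 - state_reserve_rate)
total_children = sum(lea_child_counts.values())

# Each LEA receives the pool in proportion to its eligible child count.
for lea, count in lea_child_counts.items():
    print(f"{lea}: ${lea_pool * count / total_children:,.0f}")
```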
IDEA requires that SEAs, LEAs, or other state agencies identify and evaluate children with disabilities. Once a child is determined eligible for special education services, a written individualized education program (IEP) must be developed to establish learning goals for the child and to specify the instruction and services that an LEA will provide. An IEP team, including LEA representatives, regular and special education teachers, the parents of the child for whom the IEP is developed and, whenever appropriate, the child with a disability, develops the IEP. LEAs have responsibility for providing the child with the special education and related services specified by the IEP at no cost to the child's parents. To receive funds, a state must demonstrate to the satisfaction of the Secretary of Education that it has in effect policies and procedures to ensure that it meets certain specified conditions. Such demonstration replaces the state IDEA plans required before the 1997 IDEA amendments. The conditions that states must meet include, among others, that (1) a free appropriate public education is available to all children with disabilities residing in the state; (2) all children with disabilities residing in the state are identified, located, and evaluated, and a practical method is developed and implemented to determine which children with disabilities are receiving needed special education and related services; (3) an IEP is developed, reviewed annually, and revised appropriately for each child with a disability; (4) to the maximum extent appropriate, children with disabilities are educated with children who are not disabled, and special classes, separate schooling, or other removal of children with disabilities from the regular educational environment occurs only when the severity of the disability is such that education in regular classes with the use of supplementary aids and services cannot be achieved satisfactorily; and (5) children with disabilities and their parents are afforded the procedural safeguards required by the act. A state that has on file with the Secretary of Education policies and procedures that demonstrate it meets any of the above conditions, including information filed before the effective date of the 1997 IDEA amendments, is deemed to have met that condition. In addition, as was the case before the 1997 IDEA amendments, states must establish state advisory panels on the education of children with disabilities. The 1997 IDEA amendments, however, specify that representatives of public charter schools must be included on these panels. Advisory panels consist of parents of children with disabilities, individuals with disabilities, teachers, state and local education officials, administrators of programs for children with disabilities, representatives of other state agencies, representatives of private schools and public charter schools, at least one representative concerned with the provision of transition services to children with disabilities, representatives of state juvenile and adult corrections agencies, and representatives of institutions of higher education that prepare special education and related services personnel. These panels advise the state on educating children with disabilities and comment on any proposed state rules or regulations regarding the education of these children. In general, states allocate title I and Individuals With Disabilities Education Act (IDEA) funds to charter schools on the basis of schools' local educational agency (LEA) status. Charter schools considered LEAs typically receive funding directly from state educational agencies (SEA); charter schools considered to be part of an LEA receive funding through the LEA that granted the school its charter. The seven states in our review generally used one or both of these approaches.
As this appendix shows, some states have variations to these funding schemes. Charter schools' LEA status, for funding purposes, is generally determined by the states' charter school laws or the agencies in the state that grant charters to schools. We obtained information for this appendix from state officials in each of the seven states in our review. Agencies authorized to grant charters to Arizona schools include the state board of education, the state board for charter schools, and local school boards. Schools that receive their charters from one of the state boards are considered independent LEAs for title I purposes. Each LEA charter school determines its number of eligible students on the basis of student eligibility for the free lunch program. In the first year that charter schools operated, Arizona allocated these funds to charter school LEAs using projections of eligible student enrollments. The state adjusted the allocations when it received actual information on eligible students. Arizona abandoned the use of projections to fund first-year schools because of objections that arose when funds were reallocated on the basis of the actual information. The state now reserves 1 percent of its federal title I allocation and uses these funds as well as any funds available for reallocation to serve new school districts (including charter school LEAs). The amount of title I funding granted per eligible student varies by the student's county of residence. Schools chartered by a local district are considered member schools of that district and, until recently, had to receive title I funds through this district. On the basis of a recent ruling by the state's Attorney General, Arizona has decided to allow schools chartered by local school districts to apply directly to the state for title I funds; eligible schools will receive funds directly from the state. The state uses a similar approach for allocating IDEA funds. Schools chartered by one of the state boards are considered LEAs; schools chartered by a local school district are considered members of that district. LEA charter schools determine the number of eligible students on the basis of students with an individualized education program (IEP). Schools may either apply to the state directly or apply jointly with other LEAs. Eligible expenses are billed to the state and reimbursed up to a school's allocation. Schools chartered by a local district are considered member schools of that district and receive IDEA funds and services through the district. Agencies authorized to grant charters to California schools include local school boards and county boards of education. Almost all charter schools in California are considered dependent members of a parent school district. Title I funds are granted to districts on the basis of the number of children attending district schools from families receiving Temporary Assistance for Needy Families (TANF). Charter schools may receive part of their parent LEA's allocation, depending on the number of eligible children attending the charter school and the poverty ranking process used by the LEA to distribute its allocation. Some California LEAs use TANF information to rank schools and allocate funds; other LEAs use free and reduced-price lunch eligibility data. Newly created charter schools receive no title I funds in their first year of operation. Charter schools that have converted from a public school receive title I funds on the basis of information previously collected on eligible children attending the school.
Parent LEAs may reserve title I funds from a charter school's allotment to administer the title I part A program. The amount that an LEA may reserve for these purposes has no statutory limit. Most California public schools—including charter schools—are considered dependent members of a special education local plan area (SELPA) for IDEA purposes. The SEA has established SELPAs to serve as the LEAs. SELPAs receive IDEA funds and provide all necessary services required to serve children with disabilities. In most cases, eligible children attending charter schools receive services provided by the SELPA. In Colorado, only local school boards may grant charters to schools. All charter schools are considered dependent members of a parent school district. Title I funds have been granted to districts on the basis of 1990 census poverty data updated using Aid to Families With Dependent Children information. Future LEA allocations will use TANF and free and reduced-price lunch counts for LEAs to update the census poverty data used for distributing these funds. Districts then distribute title I funds to dependent schools on the basis of poverty rankings derived primarily from free and reduced-price lunch eligibility data. Newly created charter schools have not received title I funds in their first year of operation. Parent LEAs may reserve title I funds from allotments made to charter schools to administer the title I part A program. The amount that an LEA may reserve for these purposes has no statutory limit. Colorado's charter schools are considered dependent members of a parent school district for IDEA purposes. Charter schools must negotiate with their parent districts the terms under which IDEA funds or services are provided to them. Charter schools' particular arrangements, therefore, vary by school. In some cases, for example, the parent district receives IDEA funds and provides all necessary services for serving children with disabilities. In exchange, charter schools pay the parent district an amount equal to the average unfunded additional cost of serving children with disabilities. In other cases, charter schools and parent districts negotiate an amount of IDEA funds that will be used by a charter school for serving children with disabilities. The charter school, however, must absorb any costs in excess of the negotiated funding amounts. Any particular charter school and its parent district may have another, unique arrangement. In Massachusetts, only the state board of education may grant charters to schools. All charter schools in Massachusetts are considered independent LEAs. Charter schools determine the number of eligible students for title I on the basis of enrolled students from families receiving TANF. Massachusetts uses title I funds available for reallocation to serve some charter schools in their first year. In addition, charter schools may agree to share funds with the school from which eligible students transferred. The SEA allocates to charter school LEAs the same amount it allocates to other LEAs in the same county. Charter schools in Massachusetts are also considered independent LEAs for IDEA purposes. The schools determine the number of eligible students on the basis of students with IEPs. Schools may either apply to the state directly or jointly with other LEAs. LEAs submit quarterly statements of eligible expenditures that they have incurred to the state. The schools receive their IDEA allocation in quarterly distributions.
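Several states in this appendix (Arizona above and, as noted below, Minnesota and Texas) reimburse billed eligible expenses up to, but not beyond, a school's IDEA allocation. The following minimal sketch, with hypothetical figures, illustrates how that cap works across a series of quarterly statements.

```python
# Illustrative sketch of the expense reimbursement model described in
# this appendix: bills are reimbursed until the allocation is exhausted.
# The allocation and expense amounts are hypothetical.

def reimburse(allocation: float, eligible_expenses: list[float]) -> list[float]:
    """Return per-statement reimbursements, capped cumulatively at the
    allocation."""
    remaining = allocation
    payments = []
    for expense in eligible_expenses:
        payment = min(expense, remaining)
        payments.append(payment)
        remaining -= payment
    return payments

# Quarterly statements from a charter school LEA with a $20,000 allocation.
print(reimburse(20_000.0, [6_000.0, 7_500.0, 5_000.0, 4_000.0]))
# [6000.0, 7500.0, 5000.0, 1500.0] -- the final bill is only partly covered
```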
Agencies authorized to grant charters to Michigan schools include local school boards, intermediate school boards, community colleges, and state public universities. All charter schools in Michigan are considered independent LEAs for title I purposes. Charter schools determine the number of eligible students on the basis of student eligibility for the free and reduced-price lunch program. Michigan does not allocate title I funds to charter schools in their first year of operation. The amount per eligible student allocated to LEAs varies by the eligible student's county of residence. Schools chartered by state public universities receive their funds through the sponsoring university, which acts as the school's fiscal agent. Chartering authorities typically charge the charter school a fee equal to 3 percent of the funds granted.

All public schools in Michigan are considered members of an intermediate school district (ISD) for IDEA purposes. The SEA has established a series of ISDs to serve as the LEAs for this purpose. Charter and other public schools apply to the ISD for assistance. They may either apply directly or join with another school or local school district to request funds or services. ISDs may help charter schools by providing services or funds to reimburse the school for eligible expenditures.

In Minnesota, local school boards and public postsecondary institutions may grant charters to schools, subject to the approval of the state board of education. All charter schools in Minnesota are considered independent LEAs for title I purposes. They determine the number of eligible students on the basis of student eligibility for the free and reduced-price lunch program. Initially, the state did not allocate title I funds to charter schools during their first operating year. The state now reserves 1 percent of its title I allocation and uses this reserve, as well as any funds available for reallocation, to serve new school districts (including charter school LEAs). The SEA uses a statewide per pupil average to allocate title I funds to LEAs.

Charter schools in Minnesota are also considered independent LEAs for IDEA purposes. They determine the number of eligible students on the basis of students with IEPs. Schools may either apply to the state directly or jointly with other LEAs. Charter schools bill eligible expenses to the state, which reimburses the schools up to the schools' allocations. Charter schools bill eligible expenses over and above their IDEA allocations back to the LEAs where the students reside.

In Texas, both state and local boards may authorize charters for newly created schools as well as charters for schools converting to charter schools. Local school boards create "campus" charters, which are member schools of the local school district and receive funds through that district. State-authorized charter schools are termed "open enrollment" charter schools and are considered independent LEAs for title I purposes. The state allocates title I funds to LEAs on the basis of census poverty counts for the LEA's geographical attendance area. Because the students enrolling in charter schools come from the attendance areas of differing LEAs, the state proportionately redistributes the title I funds that would have been allocated to the school district where each charter school student lives. Charter schools receive this funding on the basis of enrollment, even in the first year of operation, and have received title I funds for each year that they have been operating.
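The Texas redistribution described above is essentially a per-student reassignment: an open-enrollment charter school's title I allocation is built from the funds that would otherwise have flowed to each enrolled student's home district. A minimal sketch follows, with invented per-pupil amounts standing in for amounts derived from census poverty counts.

```python
# Illustrative sketch of the Texas redistribution described above: an
# open-enrollment charter school's title I allocation is assembled from the
# funds that would otherwise have gone to each student's home district.
# District names and per-pupil amounts are hypothetical.

PER_PUPIL_TITLE_I_BY_DISTRICT = {
    "Houston ISD": 1_200,
    "Austin ISD": 1_050,
    "El Paso ISD": 1_300,
}

def charter_allocation(student_home_districts):
    """Sum, over enrolled students, the per-pupil title I amount of the
    district where each student lives."""
    return sum(PER_PUPIL_TITLE_I_BY_DISTRICT[d] for d in student_home_districts)

# A charter school enrolling students from three home districts:
enrollment = ["Houston ISD"] * 40 + ["Austin ISD"] * 25 + ["El Paso ISD"] * 10
print(charter_allocation(enrollment))  # 87,250 under these invented figures
```

Because the allocation follows individual students, this design also explains why Texas charter schools can receive title I funds even in their first year of operation.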
State-authorized charter schools in Texas are also considered independent LEAs for IDEA purposes. These schools determine the number of eligible students on the basis of students with IEPs. Schools may either apply to the state directly or jointly with other LEAs. Charter schools bill eligible expenses to the state, which reimburses the schools up to the schools' allocations. Schools chartered by a local district are considered member schools of that district and receive their funds through that district.

In addition to those named above, the following individuals made important contributions to this report: Gene G. Kuehneman, Jr., senior economist; Benjamin F. Jordan, Jr., evaluator; Erin Krasik, intern; Susan Poling and Robert Crystal, attorneys; Catherine Baltzell and Wayne Dow, methodologists; and Liz Williams, editor.
Pursuant to a congressional request, GAO provided information on: (1) the way selected states allocate Elementary and Secondary Education Act Title I and Individuals With Disabilities Education Act (IDEA) funds to charter and other public schools; (2) factors that help and hinder charter schools in accessing Title I and IDEA funds; (3) whether the factors that help or hinder charter schools' access to federal funds vary by the funding path used in selected states; and (4) state and federal efforts designed to help charter schools access federal funds. GAO noted that: (1) in general, states either allocate funds to charter schools directly, considering them to be independent school districts or local educational agencies (LEA), or indirectly through a parent school district, considering a charter school to be a member of an existing school district; (2) overall, about two-fifths of the charter schools GAO surveyed received Title I funds, and slightly more than half of them received IDEA funds or IDEA-funded special education services; (3) most charter schools that did not receive funds did not apply for them; (4) two-thirds of charter school operators whom GAO surveyed and who expressed an opinion believed that they received a fair share of Title I or IDEA funds or IDEA-funded special education services; (5) a variety of barriers, according to GAO's review, have made it difficult for charter schools to access Title I and IDEA funds; (6) these barriers include a lack of enrollment and student eligibility data to submit to states before funding allocation decisions are made and the time required and costs involved in applying for such funds; (7) charter school operators most often cited training and technical assistance and notification of their eligibility for federal funds as factors that helped them access Title I and IDEA funds; (8) many factors that helped or hindered charter schools' access to federal funds had no relation to whether the schools received their funds directly from the state or indirectly through a parent school district, but some factors did relate to the funding path; (9) for example, the working relationship between a charter school and its sponsoring district could either help or hinder the school's access to federal funds; (10) in contrast, charter schools treated as LEAs and receiving federal funds directly from the state were largely unaffected by their relationships with local school districts; (11) several states and the Department of Education have begun initiatives to help charter schools access federal funds; (12) some states are revising or developing alternative allocation policies and procedures to improve charter schools' access to federal funds and providing training and technical assistance to charter school operators; and (13) the Department recently issued guidance to states and LEAs on allocating federal Title I funds to charter schools and has funded the development of an Internet web site with information on federal programs, charter school operational issues, and a charter school resource directory, as well as profiles of charter school states and charter schools.
In pursuing its mission of assisting small businesses, SBA facilitates access to capital and federal contracting opportunities, offers entrepreneurial counseling, and provides disaster assistance. Program offices are located at SBA’s headquarters and include the offices responsible for oversight of the agency’s key program areas (see fig. 1). For example, the Office of Capital Access delivers services and programs to expand access to capital for small businesses. The Office of Entrepreneurial Development oversees a network of resource partners that offer small business counseling and technical assistance. The Office of Government Contracting and Business Development works to increase participation by small, disadvantaged, and woman-owned businesses in federal government contract awards. The programs it manages include the 8(a) business development program, which is designed to assist small disadvantaged businesses in obtaining federal contracts, and the Historically Underutilized Business Zone (HUBZone) program, which aims to stimulate economic development by providing federal contracting assistance to small firms in economically distressed areas. Finally, the Office of Disaster Assistance makes loans to businesses and families trying to rebuild and recover in the aftermath of a disaster. SBA delivers its services through a network of field offices that includes 10 regional offices and 68 district offices led by the Office of Field Operations (see fig. 2). SBA’s regional offices were established shortly after the agency was created in 1953. These offices, which are managed by politically appointed administrators, play a part in supervising the district offices and promoting the President’s and SBA Administrator’s messages throughout the region. District offices conduct marketing, outreach, and compliance reviews. Considered by officials as SBA’s “boots on the ground,” district offices serve as the point of delivery for most SBA programs and services and work with resource partners to accomplish the agency’s mission. SBA’s field structure has been revised over the years. In response to budget reductions, SBA streamlined its field structure during the 1990s by downsizing regional and district offices and shifting supervisory responsibilities to headquarters. The 10 regional offices originally acted as intermediaries between headquarters and the field and served as communication channels for critical information, policy guidance, and instructions. SBA downsized these offices and reallocated some of the regional offices’ workload to district and headquarters offices and created the Office of Field Operations to act as the field’s representative in headquarters and help facilitate the flow of information between headquarters and district offices. The Office of Field Operations also provides policy guidance and supervision to regional administrators and district directors in implementing SBA’s goals and objectives. Regional offices continue to play a supervisory role by monitoring performance against district goals and coordinating administrative priorities with the districts. Since the early 2000s, SBA has further restructured and centralized some key agency functions. For example, from 2003 through 2006, SBA completed the centralization of its 7(a) loan processing, servicing, and liquidation functions from 68 district offices to 1 loan processing center, 2 commercial loan servicing centers, and 1 loan liquidation and guaranty purchase center. 
From fiscal years 2003 to 2006, headquarters full-time equivalents (FTE) decreased from 1,154 to 1,089 (see fig. 3). District office FTEs decreased from 1,285 in fiscal year 2003 to 997 in fiscal year 2006, and regional office FTEs remained about the same. In fiscal year 2014, headquarters FTEs were 1,429, district office FTEs were 771, and regional office FTEs were 31. Despite long-standing organizational challenges affecting program oversight and human capital management that we and others have identified, SBA has not documented an assessment of its overall organizational structure, which could provide information on how best to address these challenges. Since its last major reorganization in 2004, the agency has continued to face long-standing organizational and workforce challenges, including complex overlapping responsibilities among offices, poor communication between headquarters and district offices in the administration of programs, and persistent skill gaps, especially in field offices. These challenges can affect SBA’s ability to deliver its programs consistently and effectively, especially in a climate of resource constraints. But its response has been limited to making incremental (piecemeal) changes to some of its divisions to, among other things, consolidate functions or change reporting relationships and offering employees early retirement in an attempt to address skill gaps. SBA told us that it has assessed its organizational structure but did not provide documentation of the results of the assessment. SBA continues to face program oversight and human capital challenges related to its organizational structure. In a January 2003 report on SBA’s management challenges, we found that SBA’s organizational structure created complex overlapping relationships among offices that contributed to challenges in delivering services to small businesses. In 2004, SBA centralized its loan functions by moving responsibilities from district offices to loan processing centers. However, some of the complex overlapping relationships we identified in 2003 still exist (see fig. 4). Specifically, SBA’s organizational structure often results in working relationships between headquarters and field offices that differ from reporting relationships, potentially posing programmatic challenges. District officials work with program offices at SBA headquarters to implement the agency’s programs but report to regional administrators, who themselves report to the Office of Field Operations. For example, the lender relations specialists in the district offices work with the Office of Capital Access at SBA headquarters to deliver programs but report to district office management. Similarly, the business opportunity specialists in the district offices work with the Office of Government Contracting and Business Development at SBA headquarters to assist small businesses with securing government contracts but report to district office management. Further, some officials have the same duties. The public affairs specialists at the district offices and the regional communications directors both handle media relations. In addition, district directors and regional administrators both are to conduct outreach to maintain partnerships with small business stakeholders such as chambers of commerce; lending institutions; economic development organizations; and federal, state, regional, and local governments. They also participate in media activities and speak at public events. 
In later reports, we and others—including SBA itself—identified organizational challenges that affected SBA's program oversight and human capital management. In a March 2010 report on the 8(a) business development program, we identified a breakdown in communication between SBA district offices and headquarters (due in part to the agency's organizational structure) that resulted in inconsistencies in the way district offices delivered the program. For example, in about half of the 8(a) files we reviewed, we found that district staff did not follow the required annual review procedures for determining continued eligibility for the program. We found that the headquarters office responsible for 8(a) did not provide clear guidance to district staff. In addition, we found that confusion over roles and responsibilities left district staff unaware of the types and frequency of complaints across the agency on the eligibility of firms participating in the 8(a) program. As a result, district staff lacked information that could be used to help identify issues relating to program integrity. We made six recommendations that individually and collectively could improve the procedures used in assessing and monitoring the continued eligibility of firms to participate in and benefit from the 8(a) program. SBA agreed with the six recommendations when the report was issued. As of July 2015, SBA had taken actions responsive to four of the recommendations. Specifically, it had assessed the workload of business development specialists, updated its 8(a) regulations to include more specificity on the criteria for the continuing eligibility reviews, developed a centralized process to collect and maintain data on 8(a) firms participating in the Mentor-Protégé Program, and implemented a standard process for documenting and analyzing complaint data. (Under the Mentor-Protégé Program, experienced firms mentor 8(a) firms to enhance the capabilities of the protégé, provide various forms of business developmental assistance, and improve the protégé's ability to successfully compete for contracts.) The two remaining recommendations yet to be fully implemented as of July 2015 focus on (1) procedures to ensure that appropriate actions are taken for firms subject to early graduation from the program and (2) taking actions against firms that fail to submit required documentation. We maintain that these recommendations continue to have merit and should be fully implemented.

An SBA Office of Inspector General (OIG) report on the agency's examination function similarly identified poor communication between headquarters and field offices, noting that this lack of communication could have not only inhibited the sharing of crucial information but also caused inconsistencies in the examinations across field offices. It concluded that these weaknesses in the examination process had diminished the agency's ability to identify regulatory violations and other noncompliance issues in the operation of the program. The OIG recommended that SBA create and execute a plan to improve the internal operations of the examination function, including a plan for better communication. Although SBA disagreed with the recommendation, the agency issued examination guidelines that in 2015 the OIG deemed satisfactory to close the recommendation.

In documentation requesting fiscal years 2012 and 2014 Voluntary Early Retirement Authority and Voluntary Separation Incentive Payments (VERA/VSIP) programs, SBA said that long-standing skill gaps (primarily in field offices) that had resulted from the 2004 centralization of the loan processing function still existed.
SBA determined that its organizational changes had resulted in a programmatic challenge because employees hired for a former mission did not have the skills to meet the new mission. Specifically, before the centralization, field offices had primarily needed staff with a financial background to process individual loans. But the new mission required staff who could conduct small business counseling, develop socially and economically disadvantaged businesses and perform annual financial reviews of them, engage with lenders, and conduct outreach to small businesses.

While it has made incremental (piecemeal) changes, SBA has not documented an organizational assessment that it first planned to undertake in 2012. According to federal internal control standards, organizational structure affects the agency's control environment by providing management's framework for planning, directing, and controlling operations to achieve agency objectives. A good internal control environment requires that the agency's organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. Further, internal control guidance suggests that management periodically evaluate the organizational structure and make changes as necessary in response to changing conditions. Since its last major reorganization in 2004, SBA has seen significant changes, including decreases in budget and an increase in the number of employees eligible to retire.

Despite the organizational and managerial challenges it has faced, SBA's changes to its organizational structure since fiscal year 2005 have been incremental and have largely involved program offices at headquarters rather than the field offices where we and others have identified many of the organizational challenges. For example, SBA reestablished its Office of the Chief Operating Officer to improve efficiency and restructured the Office of Human Capital Management in response to significant turnover. In addition, following a review of all position descriptions, the Office of Field Operations revamped district office positions to ensure that the positions aligned with SBA's and its district offices' strategic plans. No changes to the regional offices were made during the last 10 years. For more information on changes that SBA has made to its organizational structure since fiscal year 2005, see appendix II.

In 2012, the agency committed to assessing and revising its organizational structure to meet current and future SBA mission objectives. However, the contractor that SBA hired to assess its organizational structure did not begin its assessment until November 2014. SBA officials told us that the effort was delayed because in February 2013 SBA's Administrator announced she was leaving the agency, and the position was vacant from August 2013 until April 2014. In August 2015, SBA officials told us that after the new administrator reviewed business delivery models and became acclimated to the agency, the agency procured a contractor, and work began on the organizational assessment in November 2014. According to the statement of work, the contractor was to assist the chief human capital officer by making recommendations on an agency-wide realignment to improve service delivery models, modernize systems and processes, and realign personnel, among other things. SBA officials told us the contractor completed its assessment in March 2015 and that SBA had completed its assessment of the contractor's work.
However, SBA has not provided documentation that shows when the assessment was completed or that describes the results. Instead of conducting its planned assessment and subsequent reorganization when initially scheduled, SBA used two VERA/VSIP programs to attempt to address workforce challenges, including those related to field offices, resulting from the 2004 reorganization. As noted previously, SBA had identified ongoing skill gaps resulting from the 2004 centralization of the loan processing function. These gaps were primarily in district offices, which are supervised by regional offices. SBA determined that this organizational change had resulted in a gap between the competency mix of the employees who had been hired for one mission (loan processing) and the competency mix needed to accomplish a new mission (business development, lender relations, and outreach). SBA noted that the skill gap was particularly pronounced among 480 employees in two job series—GS-1101 and GS-1102—that included business opportunity specialists, economic development specialists, and procurement staff. In addition, SBA stated that the skill gap had been compounded by recent changes in job requirements and new initiatives that required new skill sets for its employees. SBA’s plans in the aftermath of the fiscal year 2014 VERA/VSIP program include restructuring that would address the skill gaps. Specifically, an October 2014 guidance memorandum on staffing the agency-wide vacancies after the fiscal year 2014 VERA/VSIP stated that an Administrator’s Executive Steering Committee for SBA’s Restructuring would make decisions about restructuring. The memorandum also stated that the chief human capital officer had been tasked with identifying vacant FTEs for new positions that would support any new functions or initiatives envisioned by the administrator’s restructuring efforts. For example, the memorandum noted that 82 of the 147 vacancies would be used to support the restructuring, but did not include details of how these positions would be allocated among program offices. The memorandum added that the remaining 65 vacancies would remain in their respective program offices and that the position descriptions would be modified or positions relocated to meet internal needs. According to SBA, options for restructuring and related hiring were still being considered as of May 2015. We also report on these issues in a related, soon-to-be-released report on SBA’s management and make recommendations as appropriate. Regional administrators supervise and evaluate the district offices within their regions. For example, they help to ensure that the district offices within their boundaries are consistently meeting agency goals and objectives. Field office performance is tracked and assessed by goals, measures, and metrics reports and is largely driven by district office performance. SBA headquarters officials, in consultation with regional and district officials, set the goals and measures for the district offices in part on the basis of a “capacity planner” that considers the number of staff and their positions. Regional office goals are generally the combined goals for the district offices within the region. 
The six goals are: protecting public funds and ensuring regulatory compliance; supporting lending to small businesses; expanding contracting to small businesses; supporting small business training and counseling; providing outreach to high-growth and underserved communities; and serving as a voice for the small business community. Under these goals are a total of 54 measures that cover specific areas. For example, “maintain and increase active lending” and “expand lender participation through direct outreach” are measures under the goal “supporting lending to small businesses.” According to SBA, 6 of the 10 regions met or exceeded all of their goals in fiscal year 2014. Further, regional administrators are the interface between the district offices and SBA headquarters, overseeing staff across their regions. In particular, they supervise and provide direction to the district directors in their region, who report directly to them. The regional administrators are to meet with district directors quarterly to evaluate progress on meeting critical elements tied to their job descriptions. In addition, according to officials regional administrators may assign district directors tasks to help create a team effort within the region—for example, to explain new legislation that affects small businesses or to focus on alternative financing. Officials we interviewed cited a number of internal communication channels that involved the regional offices. In general, regional administrators help to facilitate communication between headquarters and district offices, specifically concerning program implementation, and serve as the regional points of contact for the Office of Field Operations. As SBA develops proposals for new initiatives, the agency convenes panels that include field officials and are often led by regional administrators. SBA officials cited a panel that was looking at upgrading technology in district offices as an example of a panel that was co-chaired by a regional administrator. In addition, program changes are typically communicated to the Office of Field Operations, which then talks to the regional administrators to get input on how the changes could affect the field offices. Office of Field Operations officials said that they hold a weekly conference call with all 10 regional administrators. If a program is being changed or a new initiative introduced, a manager from the relevant program office would participate in this meeting to provide the information to the regional administrators. The regional administrators share best practices for carrying out their role during these meetings. The regional administrators also told us that they had weekly calls with district office management for their region to discuss agency initiatives and obtain input. Finally, during an annual management conference regional administrators meet with each SBA division to discuss how to implement the programs in the field. Most of the 60 officials we interviewed from the Office of Field Operations and regional and district offices thought that internal communication was effective and sufficient. Senior officials from the Office of Field Operations said that the presence of regional offices enhanced agency communication. All 10 regional administrators pointed to regular communication that occurs between headquarters and the field. For example, one regional administrator noted that having a field office structure fostered effective communication. 
Fourteen of 19 district managers emphasized that communication within the agency was seamless and that, in addition to scheduled calls and meetings, they communicated with program offices during the course of their work. Twelve of the 28 nonmanagement staff noted that communication was effective and sufficient. However, 5 of the 28 district office nonmanagement staff and 3 of the 19 managers who we interviewed expressed concerns about communication between headquarters and field offices. For example, one district official said that communication was inconsistent and that at times industry officials might know about a program change before district staff had been informed. Another district official said that communications came from too many different sources. For example, program changes were not always consistently communicated to the field offices, and such information could come from the Office of General Counsel instead of a program office. We and the SBA OIG have identified communication challenges that affected program oversight and made recommendations to address these challenges (as discussed earlier in this report). Externally, regional officials are responsible for interpreting, supporting, and communicating the President’s and SBA Administrator’s policies as well as for setting regional priorities. Each regional office has a regional communications director tasked with coordinating SBA’s marketing, communications, and public affairs functions throughout the assigned area. According to SBA, this responsibility includes authorizing all outgoing communication within the region to avoid duplication in communication duties conducted by district offices. Regional administrators attend public speaking engagements, are involved in press activities, and conduct outreach and coordination with small business partners and government officials. Regional administrators also regularly work with representatives from local and state governments and collaborate with economic development departments to help promote SBA’s products and services. In addition, they help manage interagency relations and maintain relationships with industry representatives and suppliers, including in geographic areas that may have unique small business needs. Because regional office costs represent a relatively small part of SBA’s overall costs, closing them would have a limited budgetary effect. However, according to SBA officials closing these offices could cause nonbudgetary challenges such as difficulties in providing supervision to 68 district offices and broadcasting the President’s and SBA administrator’s message. If such closures were to occur, other options exist that could help ensure that these functions are performed effectively. However, it would be important to assess the feasibility of these options and weigh the related costs and benefits before deciding on a course of action. In fiscal year 2013, SBA’s costs for the regional offices totaled slightly more than $4.7 million. Given that these costs constituted less than 1 percent of SBA’s approximately $1 billion appropriation for that year, closing the regional offices would have a limited budgetary effect. The bulk of regional office costs went to compensation and benefits, which totaled $4.5 million in fiscal year 2013. Other administrative costs for the 10 regional offices totaled just $234,539, with individual office budgets ranging from $11,771 to $36,692. According to officials, these funds were spent on travel, equipment, and office supplies. 
All 10 regional offices are co-located with district offices, so they are not incurring separate rental costs. Further, because (as noted previously) each regional office generally has five employees or fewer, they are not materially affecting district office rental costs. Over half of the headquarters (Office of Field Operations), regional, and district office managers (18 of 32) we interviewed cited challenges that could result if regional offices were to close and their functions were transferred to headquarters and district offices, but a few nonmanagement staff (6 of the 28) offered different views. The challenges managers cited were related to oversight, workload, advocacy, and outreach. First, as mentioned earlier, regional administrators supervise and evaluate the performance of the district offices, responsibilities that would likely have to be transferred to headquarters. The 10 regional administrators oversee between 4 and 10 district offices each. Fifteen headquarters, regional, and district managers we interviewed said that without regional supervision, all 68 district directors would likely report to two senior officials in the Office of Field Operations. Eight of these officials said that it would be difficult for these two individuals to manage all 68 districts and to understand the economic, political, and other nuances of each district. Second, four regional and district managers we interviewed noted that one of the regional administrators’ responsibilities was to help “even out” the workload among district offices to ensure that the offices could continue to carry out their responsibilities even with critical vacancies. For example, regional administrators can request that a lender relations specialist in one district office take on additional duties to help another district office that has lost staff. Thus, the managers were concerned that without regional offices, district offices would be challenged to address such workload issues. Third, according to six district managers, the district offices would lose their advocates for resources if the regional offices closed. For instance, regional administrators identify training and staffing needs across the region and emphasize these issues during their interactions with the Office of Field Operations. Officials we interviewed also noted that without regional offices, SBA would lose its knowledge of regional needs, which headquarters and district offices might not have. These officials stated that regional administrators had a broad view of the district offices in their regions and could see differences and similarities among offices. For example, a district official noted that a regional administrator might be aware of a specific issue within a particular district office, see the similarities with the challenges of another district office, and develop a solution. Fourth, six headquarters, regional, and district managers we interviewed said that SBA would experience challenges in promoting SBA’s message without the regional offices. Thirteen headquarters, regional, and district officials emphasized that as political appointees, regional administrators played a greater role than district directors, who are career officials, in explaining and amplifying the President’s and SBA Administrator’s message and priorities. For example, officials cited the role of regional administrators in informing small businesses, during the time when the Patient Protection and Affordable Care Act was pending, of how the bill might affect them. 
Conversely, six nonmanagement district staff we interviewed and union officials told us that they did not see a particular need for the regional offices. Three district officials said that they could coordinate directly with headquarters instead of coordinating with the regional offices. One of these district officials noted that SBA would be more efficient if the functions of the regional and district offices were consolidated. Another of these district officials could not identify the impact of the regional offices, despite the regional administrators' stated roles in providing guidance and supervision. In addition, union officials stated that the outreach responsibilities of the district directors and regional administrators were duplicative, pointing out that both regional and district officials did outreach to small businesses in their communities. However, as noted previously regional communications directors are expected to authorize all outgoing communication within the region to avoid duplication.

We recognize that the regional administrators and other staff in the regional offices provide a number of services for SBA. However, if closures were to occur, there are options available to address these challenges. For example, one option could involve adding career senior officials to the Office of Field Operations to address the challenge of overseeing the 68 district offices. In addition, to address the challenge of the potential loss of flexibility in managing district office workloads, district directors could coordinate with each other to help distribute the workload among their offices. Alternatively, this responsibility could be assigned to the Office of Field Operations. An option to address the challenge associated with the loss of regional administrators as advocates would be having district directors collaborate to identify the needs of the various offices and advocate directly to the Office of Field Operations. However, before deciding on whether regional offices should be closed or selecting an alternative option, it is important to carefully assess the feasibility of these options as well as any others and to weigh the costs and benefits associated with available options and closure of the regional offices.

We sent a draft of this report to SBA for review and comment. SBA provided technical comments that we incorporated into the report as appropriate. As part of these comments and in response to a GAO point that certain types of statement constitute prohibited "grassroots" lobbying, SBA clarified that "[n]o SBA employee, whether career or political, is authorized or encouraged to 'grassroot' lobby … to support or oppose pending legislation." We modified our draft report to take into account SBA's clarification.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to SBA and appropriate congressional committees. This report also will be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This report (1) examines any challenges associated with the Small Business Administration's (SBA) organizational structure; (2) describes the specific responsibilities of the regional offices; and (3) discusses the budgetary effects of closing the regional offices and SBA managers' and staff's views on other possible effects of closures.

For the background, we analyzed data on staffing levels at headquarters, regional, and district offices from fiscal years 2003 through 2014 (to include staffing levels prior to and after SBA's last major reorganization in 2004). To assess the reliability of these data, we interviewed SBA officials from the Office of Human Resource Solutions to gather information on the completeness and accuracy of the full-time equivalent database and examined the data for logical inconsistencies and completeness. We determined that the data were sufficiently reliable for the purposes of reporting on staffing levels.

For all objectives, we interviewed SBA headquarters officials in the Office of Field Operations, the 10 regional administrators, management and nonmanagement staff at 10 district offices, and union representatives. Specifically, to obtain perspectives from SBA district office officials, we selected a nonrandom, purposive sample of 10 of the 68 district offices, 1 from each SBA region, to provide national coverage. We randomly selected 7 of the 10 district offices from those offices located within the continental United States. We selected the Washington, D.C., and Georgia district offices to pre-test our interview questions because of their proximity to GAO offices. We selected the New York district office to include an additional large office to better ensure a variety of offices with both larger and smaller numbers of employees. During our visits to 9 of the 10 district offices, we interviewed the office managers (district directors and deputy district directors). At the remaining district office, the deputy district director could not attend the meeting.

For our interviews with nonmanagement staff at the 10 district offices, district office management invited any interested nonmanagement staff to meet with us. However, as a condition of meeting with nonmanagement staff, SBA's general counsel required inclusion of district counsel in these interviews. Of the approximately 120 nonmanagement district staff members invited to speak with us, 28 participated in the interviews. We generally met with the participating staff as a group. Because participation by nonmanagement staff members was limited, we provided them an additional opportunity to share their perspectives via e-mail. Specifically, we sent an e-mail to all nonmanagement staff at those 10 district offices, inviting them to share their thoughts on specific topics by sending an e-mail to a specified GAO e-mail address. Nine staff members from 6 of these offices responded to our e-mail, three of whom also attended our interviews. The e-mails were used as additional information sources and to corroborate what we heard in the interviews. The results of our interactions with the 10 district offices cannot be generalized to other SBA district offices. The group of union representatives we interviewed was from headquarters and the field.

In conducting this review, we focused on the role of regional offices. A related, soon-to-be-released GAO report addresses a range of SBA management issues.
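As a rough illustration of the sampling design described above, the sketch below draws one district office per region, holding the three purposively selected offices fixed and choosing one office at random from each remaining region. The region-to-office mapping is a hypothetical stand-in, not SBA's actual roster of district offices.

```python
import random

# Sketch of the selection approach described above: a purposive sample of
# one district office per SBA region, with three offices chosen deliberately
# (Washington, D.C. and Georgia for pretesting, New York for size) and one
# office drawn at random from continental-U.S. candidates in each remaining
# region. The candidate lists below are hypothetical.

purposive = {"Region 2": "New York", "Region 3": "Washington, D.C.", "Region 4": "Georgia"}

candidates = {
    "Region 1": ["Massachusetts", "Connecticut", "Maine"],
    "Region 5": ["Illinois", "Ohio", "Michigan"],
    "Region 6": ["Texas-Dallas", "Louisiana", "Oklahoma"],
    "Region 7": ["Missouri-St. Louis", "Iowa", "Kansas"],
    "Region 8": ["Colorado", "Utah", "Montana"],
    "Region 9": ["Arizona", "Nevada", "California-Sacramento"],
    "Region 10": ["Washington-Seattle", "Oregon", "Idaho"],
}

sample = dict(purposive)
for region, offices in candidates.items():
    sample[region] = random.choice(offices)  # one random draw per region

for region in sorted(sample, key=lambda r: int(r.split()[1])):
    print(region, "->", sample[region])
```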
To review SBA's organizational structure, we reviewed prior GAO and SBA Inspector General reports that discussed, among other things, the effect of the agency's structure on its human capital management and program oversight. We also examined documentation on changes to SBA's organizational structure from fiscal years 2005 to 2014 (the period after SBA's last major reorganization in 2004). Specifically, we requested and reviewed all of the forms that SBA used to document organizational changes that were approved during this period. We also reviewed documentation on SBA's planned efforts to assess its organizational structure—including its Strategic Human Capital Plan Fiscal Years 2013-2016, guidance implementing its fiscal year 2014 Voluntary Early Retirement Authority (VERA) and Voluntary Separation Incentive Payments (VSIP) programs, and the statement of work for a contractor's assessment of organizational structure—and compared these plans to federal internal control standards.

To determine the specific responsibilities of the regional offices, we reviewed position descriptions for the regional administrator and regional communications director and compared them to the position descriptions for the district director and public affairs specialist. In addition, we interviewed officials at headquarters, regional, and district offices. We also analyzed data on the 10 regional offices from the field office goals, measures, and metrics reports for fiscal year 2014 (the most recent available data). To assess the reliability of these data, we reviewed the goals, measures, and metrics reports for outliers and interviewed officials from the Office of Field Operations to obtain information on the completeness and accuracy of the database. We determined that the data were sufficiently reliable for the purpose of reporting on field performance.

To determine how closing SBA's regional offices could affect SBA, we analyzed fiscal year 2013 operating budgets and compensation and benefits data for the regional offices. SBA had to create a report that separated regional costs from other field office costs, and fiscal year 2013 data were the most recent data available at the time it generated the report. To assess the reliability of these data, we examined the data for logical inconsistencies and completeness and reviewed documentation on the agency's financial system. We also interviewed officials from the Office of the Chief Financial Officer to gather information on the completeness and accuracy of the budget database. We determined that the data were sufficiently reliable for the purpose of reporting on SBA's regional office costs. Additionally, we reviewed documentation on the tenure of regional administrators and acting regional administrators from fiscal years 2005 through 2014 to determine turnover. We also interviewed SBA officials about the costs of operating the regional offices and the potential effects of transferring the responsibilities of the regional offices to the district offices.

We conducted our work from June 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
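The reliability assessments described above repeatedly involve examining data "for logical inconsistencies and completeness." The sketch below shows what such checks might look like in practice; the record layout, field names, and rules are assumptions for illustration, not GAO's or SBA's actual procedures.

```python
# Minimal sketch of completeness and logical-consistency checks of the kind
# described above, applied to hypothetical regional budget records. Field
# names, rules, and figures are assumptions, not actual SBA data.

records = [
    {"office": "Region 1", "compensation": 455_000, "admin": 23_500},
    {"office": "Region 2", "compensation": 448_000, "admin": None},    # missing value
    {"office": "Region 3", "compensation": -1_000,  "admin": 18_200},  # logically invalid
]

def check(records):
    problems = []
    seen = set()
    for r in records:
        if r["office"] in seen:
            problems.append((r["office"], "duplicate record"))
        seen.add(r["office"])
        for field in ("compensation", "admin"):
            value = r[field]
            if value is None:
                problems.append((r["office"], f"missing {field}"))
            elif value < 0:
                problems.append((r["office"], f"negative {field}"))
    return problems

for office, issue in check(records):
    print(office, "-", issue)
```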
The Small Business Administration (SBA) made a number of incremental (piecemeal) changes to its organizational structure in fiscal years 2005-2014, as illustrated by the following examples.

In 2007, SBA reorganized five program offices and four administrative support functions in order to clearly delineate reporting levels, among other things. The agency also eliminated the Chief Operating Officer as a separate office and integrated its functions into the Office of the Administrator.

In 2008, the Office of Equal Employment Opportunity and Civil Rights Compliance began reporting directly to the Associate Administrator for Management and Administration to facilitate better oversight, planning, coordination, and budgeting for all of the agency's administrative management operations.

In 2010, SBA consolidated financial management by moving its procurement function to the Office of the Chief Financial Officer and transferring day-to-day procurement operations from headquarters to the agency's Denver Finance Center. This change was intended to improve the efficiency and effectiveness of SBA's acquisition programs.

In 2011, SBA restructured the Office of Human Capital Management in response to significant turnover that had a serious effect on the level and scope of services. The reorganization streamlined the office, which was renamed the Office of Human Resources Solutions, by reducing the number of branches and divisions.

In 2012, new offices were created in the Office of Capital Access to respond to, among other things, growth in small business lending programs and increased servicing and oversight responsibilities following the 2007-2009 financial crisis. The changes sought to help the agency become a better partner with lending institutions and nonprofit financial organizations to increase access to capital for small businesses.

In 2012, SBA established a new headquarters unit within the Office of Government Contracting and Business Development and made it responsible for processing the continued eligibility portion of the annual review required for participants in the 8(a) program. Prior to this change, district officials, who are also responsible for providing business development assistance to 8(a) firms, were tasked with conducting exams of continued eligibility. While district officials have continued to perform other components of the annual review, shifting the responsibility for processing continued eligibility to headquarters was designed to eliminate the conflict of interest associated with district officials performing both assistance and oversight roles.

In 2012, the Office of Field Operations revamped field office operations following a 2010 review of all position descriptions to ensure that they aligned with SBA's strategic plan and its district office strategic plans. Many position descriptions were rewritten, although there were no changes in grade or series. Before the review, district offices had two principal program delivery positions—lender relations specialist and business development specialist. As a result of the review, descriptions for both positions were rewritten, and the business development specialist position became two—economic development specialist and business opportunity specialist. The skills and competencies for the new position descriptions focused on the change in the district offices' function from loan processing to compliance and community outreach in an effort to address skill gaps.
As a result, staff were retrained for the rewritten positions. In 2013, SBA reestablished the Office of the Chief Operating Officer (formerly the Office of Management and Administration) to improve operating efficiency. Among other things, this change transferred Office of Management and Administration staff to the reestablished office, along with the Office of the Chief Information Officer and the Office of Disaster Planning, which saw its mission expanded to include enterprise risk management. In addition to the contact named above, A. Paige Smith (Assistant Director), Meredith P. Graves (Analyst-in-Charge), Jerry Ambroise, Emily Chalmers, Pamela Davidson, Carol Henn, John McGrail, Marc Molino, Erika Navarro, William Reinsberg, Deena Richart, Gloria Ross, and Jena Sinkfield made key contributions to this report.
SBA was created in 1953, and its regional offices were established shortly thereafter. In the late 1990s and early 2000s, the agency downsized the staff and responsibilities of the regional offices. These offices, which are managed by politically appointed administrators, are currently responsible for supervising SBA's district offices and promoting the President's messages throughout the region. GAO was asked to review SBA's current organizational structure, with a focus on the regional offices. Among other objectives, this report (1) examines challenges related to SBA's organizational structure and (2) discusses the budgetary effects of closing the regional offices and SBA managers' and staff's views on other possible effects of closures. GAO reviewed documentation on changes to SBA's organizational structure from fiscal years 2005-2014 (following SBA's last major reorganization in 2004); analyzed data on fiscal year 2013 regional budgets (the most recent data SBA provided); and interviewed a total of 60 SBA officials at headquarters, all 10 regional offices, and a nongeneralizable sample of 10 of the 68 district offices (one from each region reflecting a variety of sizes). While long-standing organizational challenges affected program oversight and human capital management, the Small Business Administration (SBA) has not documented an assessment of its overall organizational structure that could help determine how to address these challenges. SBA currently has a three-tiered organizational structure—headquarters offices, 10 regional offices, and 68 district offices. SBA's last major reorganization was in 2004, when it moved loan processing from district offices to specialized centers and assigned district offices new duties, such as small business counseling. But the agency has continued to face long-standing organizational and workforce challenges, including complex overlapping responsibilities among headquarters and regional offices and skill gaps in district offices (which are supervised by regional offices). These challenges can affect SBA's ability to deliver its programs consistently and effectively, especially in a climate of resource constraints. SBA's response has been limited to (1) making incremental changes to some of its divisions such as consolidating functions or changing reporting relationships and (2) offering employees early retirement. SBA committed to assessing and revising its organizational structure in 2012 but has not yet documented this effort. Although a contractor studied SBA's organizational structure in March 2015 and SBA stated it had completed its assessment of the contractor's work as of August 2015, it has not provided documentation of this assessment. In a related, soon-to-be-released report on SBA's management, GAO assesses the agency's organizational structure and makes recommendations as appropriate. Closing SBA's 10 regional offices, as some have suggested, would have a limited effect on SBA's budget, but the impact on operations is less clear. Compensation and benefits—totaling $4.5 million in fiscal 2013—were the largest costs of regional offices, which together had other administrative costs totaling about $235,000 and were co-located with district offices. Because these costs constituted less than 1 percent of SBA's approximately $1 billion appropriation in 2013, closing the regional offices would have a limited budgetary effect. 
But over half of the SBA managers GAO interviewed (18 of 32) said that closing regional offices could pose operational challenges. First, headquarters, regional, and district managers said that eliminating the 10 regional administrators would require one headquarters office to supervise 68 district directors. Second, regional and district officials were concerned that SBA would lose the overall regional perspective and ability to balance workloads within regions. Third, headquarters, regional, and district managers explained that the agency would be challenged to promote SBA's message without regional offices. They emphasized the role that regional administrators play in explaining and amplifying the President's and SBA Administrator's messages and priorities. However, a few (6 of 28) nonmanagement staff GAO interviewed disputed the importance of regional administrators, some stating that district offices could coordinate directly with headquarters offices. GAO recognizes that regional administrators and offices provide a number of services for SBA. If closures were to occur, there are options available to address these challenges. However, it would be important to carefully assess the feasibility of these options and weigh the related costs and benefits before deciding on a course of action. GAO is not making recommendations in this report. However, in a related, soon-to-be-released report examining SBA management issues, GAO assesses organizational structure and makes recommendations as appropriate.
The federal government has implemented a number of initiatives to address sexual violence or mitigate its effects. For example, the Department of Justice's (DOJ) Office on Violence Against Women (OVW), created in 1995 to help implement the Violence Against Women Act (VAWA), sponsors grant programs for local law enforcement agencies, prosecutors and judges, health care providers, and other organizations that assist victims of sexual violence by providing, for example, forensic medical services in sexual violence cases in rural areas and specialized counseling services for victims from underserved populations. Another office within DOJ, the Office for Victims of Crime (OVC), convened crime victim advocates and experts in 2013 as part of the Vision 21 Initiative and recommended in the resulting report, among other things, that federal agencies collaborate and expand the collection and analysis of data on all forms of criminal victimization.

The Department of Health and Human Services' (HHS) Family Violence Prevention and Services Program supports two national resource centers on domestic violence and special-issue and culturally specific resource centers. In addition, HHS's College Sexual Assault Policy and Prevention Initiative was launched in 2016 and is intended to provide support for organizations that are implementing policies and practices at postsecondary schools to prevent sexual assault on their campuses.

The Department of Education's Office for Civil Rights (OCR) issued two guidance documents to colleges and universities, one in 2011 and another in 2014, concerning the responsibilities of those institutions under Title IX of the Education Amendments of 1972 with regard to addressing sexual violence against students. OCR's guidance sets standards for the grievance procedures institutions must adopt and publish to promptly and equitably resolve complaints brought by students alleging sex discrimination (including acts of sexual violence and sexual harassment), and recommends preventive education and training programs designed to reduce the occurrence of sexual violence on campus and improve institutions' responses to sexual violence on campus when it does occur.

In 2004, Congress passed a law that required the Secretary of Defense to develop, among other things, a comprehensive policy for the Department of Defense (DOD) on the prevention of sexual assaults involving servicemembers. In response to that statutory requirement, DOD established its sexual-assault prevention and response program in 2005, and in 2008, DOD published its first sexual assault prevention strategy.

Several of the federal government's responses to sexual violence involve data collection on the occurrence of sexual violence. For example, the Prison Rape Elimination Act (PREA) of 2003 directed DOJ to carry out studies of the incidence and effects of prison rape. The Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, as amended by VAWA in 2013, requires that all institutions of higher education that participate in federal student financial assistance programs disclose statistics on certain crimes, including those related to sexual violence, to the Department of Education (Education). Since 2005, National Defense Authorization Acts have directed DOD to, among other things, collect and report information on sexual assaults against servicemembers.
Under the Paperwork Reduction Act (PRA), the Office of Information and Regulatory Affairs (OIRA) in the Office of Management and Budget (OMB) is charged with improving the efficiency and effectiveness of federal information resources, which includes functions relating to statistical policy and coordination. Specifically, with regard to statistics, OMB’s responsibilities include the following:

Oversight and approval of data collection: OMB reviews statistical information collections as part of its responsibility under the PRA to approve all federal agency information collections that will be administered to 10 or more people, to ensure adherence with PRA standards for minimizing information collection burdens and maximizing the practical utility of information collected by federal agencies, including eliminating unnecessary duplication.

Guidance and standards: OMB develops and oversees governmentwide policies, principles, standards, and guidelines for collecting and disseminating statistical information.

Coordination: OMB coordinates the activities of the federal statistical system, including ensuring the integrity, objectivity, and utility of federal statistics.

Oversight of budgets: OMB ensures that statistical agencies’ budget proposals are consistent with systemwide priorities for maintaining and improving the quality of federal statistics.

Other entities also provide guidance to agencies that conduct statistical work. For example, the National Academy of Sciences’ Committee on National Statistics (CNSTAT) publishes Principles and Practices for a Federal Statistical Agency for newly appointed cabinet secretaries at the beginning of each presidential administration. Principles and Practices outlines basic principles for statistical agencies to carry out their missions effectively, as well as practices designed to help implement them.

Different entities use federal data on sexual violence, including, for example, victim advocacy groups, other special interest groups, and other federal agencies. Officials at victim advocacy groups we spoke with publish reports on topics related to sexual violence and lobby Congress for laws and programs designed to address the needs of victims. Other groups include law enforcement associations and campus safety groups that provide training and educational materials for law enforcement and campus safety personnel. Federal agencies also use data on sexual violence—for example, to inform grant-making decisions regarding research and program development.

Four federal agencies manage at least 10 data collection efforts that include data on sexual violence, among other things. Some of these data collection efforts focus on a target population that the agency serves. For example, Education’s Clery Act data collection effort obtains information on the occurrence of sexual violence at institutions of higher education. DOD’s Defense Sexual Assault Incident Database (DSAID) and the Workplace and Gender Relations Survey of Active Duty Members (WGRA) collect data on sexual violence involving military servicemembers. Others, such as the FBI’s Uniform Crime Reporting Program (UCR) data collection efforts, compile data from law enforcement agencies on the general population. The data collection efforts that include information from the general population differ in terms of the ages of respondents or individuals from whom reports of sexual violence are taken.
For example, the National Intimate Partner and Sexual Violence Survey (NISVS) collects data from individuals who are 18 and older, while the National Crime Victimization Survey (NCVS) collects data on household members who are 12 and older; both the Uniform Crime Reporting Program-Summary Reporting System (UCR-SRS) and the Uniform Crime Reporting Program-National Incident-Based Reporting System (UCR-NIBRS) include data from law enforcement agencies on criminal incidents involving people of all ages. Table 1 includes information about each of the 10 data collections discussed in this report, including their respective target populations.

Data collection efforts that are focused on target populations—such as the military population, institutions of higher education, and the incarcerated population—provide information on the problem of sexual violence within those groups and thus may be helpful for informing policy affecting those groups. For example, Education officials told us that Education’s Office of Federal Student Aid administers inquiries to specific campuses if Clery Act data show unusually high incidences of certain crimes, including rape. The Bureau of Justice Statistics (BJS) reports that the National Inmate Survey (NIS) and the Survey of Sexual Victimization (SSV) data provide helpful information for understanding and addressing the problem of sexual violence in prisons, jails, and juvenile correctional facilities. In March 2015, we reported on the importance of using military data on sexual violence to inform program decisionmaking.

The 5 data collection efforts whose target population is a segment of the national population are the result of specific congressional mandates, while the 5 data collection efforts that focus on the general population are discretionary initiatives arising from broad agency missions. For example, BJS is mandated under the Prison Rape Elimination Act (PREA) of 2003 to collect data on sexual violence in prisons, jails, and other detention facilities, and it conducts both the NIS and SSV in response to that mandate. In contrast, according to officials at HHS’s Centers for Disease Control and Prevention (CDC), the agency launched NISVS as part of its public health mission—with support from the National Institute of Justice and DOD—as a result of requests from organizations in the field of sexual violence prevention.

The extent to which the data collection efforts focus on sexual violence also varies. Some of the data collection efforts collect information solely or primarily on the occurrence of sexual violence, such as the SSV or DSAID. Other data collection efforts have a larger focus. For example, the UCR-SRS, UCR-NIBRS, and NCVS include information on a broad spectrum of crimes, and the National Electronic Injury Surveillance System–All Injury Program (NEISS-AIP) includes information on a wide variety of types of injuries.

Data collection efforts use a range of terms to describe sexual violence in publicly-available agency documentation. Specifically, the 10 data collection efforts use a total of 23 different terms to describe sexual violence. Table 2 shows the terms that data collection efforts use to describe sexual violence. Given the variation in terminology, data collection efforts may characterize the same sex act using different terms.
For example, regarding sexual violence involving vaginal penetration of a victim, 6 data collection efforts include this act of sexual violence in their measurement of “rape,” 2 include it in their measurements of “nonconsensual sexual acts” or “staff sexual misconduct,” 2 include it in their measurements of “sexual assault” or “assault-sexual,” 1 includes it in its measurement of “sexual coercion,” 1 includes it in its measurement of “penetrative sexual assault,” and 1 includes it in its measurement of “sexual assault with an object.” See tables 5 through 7 in app. II for additional information on acts of sexual violence included in measurements of sexual violence by data collection effort.

A single data collection effort may also use multiple terms to characterize a particular act of sexual violence, depending on the contextual factors that may be involved, such as whether the perpetrator used physical force. For example, if a victim is penetrated vaginally, NISVS may characterize that particular act as either “rape” or “sexual coercion,” and the decision as to which term is most appropriate is based on the contextual factors surrounding the act. Specifically, NISVS characterizes vaginal penetration of a victim as “rape” if the act involves the use of physical force or threats to physically harm the victim. On the other hand, NISVS characterizes this same act as “sexual coercion” if the act occurs after the victim is verbally pressured in a nonphysical way—for example, if the perpetrator uses their influence or authority—as illustrated in the sketch below.
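To make this kind of labeling logic concrete, the following is a minimal sketch in Python. The effort names and NISVS’s physical-force versus verbal-pressure distinction come from this report; the encoding of acts and contextual factors is hypothetical and greatly simplified, and it is not drawn from any actual agency codebook.

    # Hypothetical, simplified illustration of how different federal data
    # collection efforts might label the same underlying act (vaginal
    # penetration of a victim), per the terms described in this report.

    def nisvs_term(contextual_factors):
        # Per NISVS documentation as described above: physical force or
        # threats of physical harm -> "rape"; nonphysical verbal
        # pressure -> "sexual coercion".
        if contextual_factors & {"physical force", "threat of physical harm"}:
            return "rape"
        if "verbal pressure" in contextual_factors:
            return "sexual coercion"
        return "other sexual violence"  # placeholder for other NISVS terms

    # Terms other efforts use for the same act, per tables 5 through 7 in app. II.
    other_effort_terms = {
        "UCR-SRS": "rape",
        "NEISS-AIP": "assault-sexual",
        "NIS": "nonconsensual sexual acts",
    }

    factors = {"verbal pressure"}
    print("NISVS:", nisvs_term(factors))  # prints: NISVS: sexual coercion
    for effort, term in sorted(other_effort_terms.items()):
        print(f"{effort}: {term}")

As the output suggests, a single act could enter one effort’s count of “sexual coercion” while entering other efforts’ counts of “rape,” “assault-sexual,” or “nonconsensual sexual acts,” depending on the effort’s terminology and the contextual factors involved.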
Based on our analysis, data collection efforts rarely use the same terminology to describe sexual violence; however, when they do, there are some differences in the particular acts of sexual violence and contextual factors that they include in their measurements of those terms. For example, 4 of the 6 data collection efforts that use the term “rape” consider whether actual physical force was used, and the other 2 do not. Similarly, 3 of the 6 consider whether the threat of physical force was used, and the other 3 do not. See tables 8 through 10 in app. II for additional information on contextual factors included in measurements of sexual violence by data collection effort.

In general, measurements of sexual violence closely relate to definitions of sexual violence in federal data collection efforts. For 5 of the data collection efforts we reviewed, the acts of sexual violence and contextual factors that are included in the measurements generally align with those included in the definitions. However, for the remaining 5 data collection efforts, some of the acts of sexual violence or contextual factors are included in both their measurements and definitions and others are not. Further, these data collection efforts do not have publicly-available descriptions of what is included in their respective measurements that would allow persons using the data to understand the differences. Specifically, the Clery Act data collection effort includes attempted rapes in its measurement of rape but does not include them in its definition of rape. Similarly, NEISS-AIP includes acts of sexual violence involving penetration of a victim with an object, and acts involving a victim being made to penetrate someone else with an object, in its measurement of assault-sexual, but does not explicitly include these acts in its definition of assault-sexual or in the description of assault-sexual in the NEISS-AIP coding manual. NCVS includes the contextual factors of “victim unable to consent or refuse” and “victim alcohol/drug facilitated” in its measurements of rape and sexual assault, but does not include these contextual factors in its definitions of rape and sexual assault. SSV includes attempted nonconsensual sexual acts in its measurement of nonconsensual sexual acts, but does not include attempts in its definition of nonconsensual sexual acts. And NIS includes the act of victim penetration with an object in its measurements of nonconsensual sexual acts and staff sexual misconduct, but does not include it in its definitions of those terms.

The National Academy of Sciences’ Principles and Practices for a Federal Statistical Agency states that data releases from a statistical program should include the methods and assumptions used for data collection and reporting. Similarly, OMB guidelines regarding the federal Information Quality Act call for agencies that disseminate government information to ensure its utility, objectivity, and integrity—qualities that encompass reproducibility and transparency. Additionally, federal internal control standards state that an agency’s information requirements should consider the expectations of both internal and external users and that reliable internal and external information sources should provide data that faithfully represent what they purport to represent.

Education officials told us that they are updating The Handbook for Campus Safety and Security Reporting and expect to issue the updated handbook in summer 2016. This update may provide an opportunity for Education to eliminate discrepancies between the Clery Act data collection effort’s sexual violence measurements and definitions. Regarding NEISS-AIP, CDC officials told us that the definition of “assault-sexual” is intended to include the range of sexual assault experiences that victims presenting to the emergency department may report. BJS officials told us that it is not possible to enumerate every act of sexual violence that is included under NCVS’s terms of sexual violence and that data users can make their own determinations about what acts of sexual violence are included in the measurements. BJS officials also told us that the definitions included in the NIS and SSV summary reports are intended for the general public, and they acknowledged that the reports present ambiguity regarding which acts of sexual violence and contextual factors are included in the measurements. CDC and BJS officials told us that researchers who are interested in descriptions of what is included in the measurements for NEISS-AIP, NIS, and SSV could access coding information, which is available at the University of Michigan’s National Archive of Criminal Justice Data of the Inter-university Consortium for Political and Social Research. However, a layperson who is not a researcher may not know how to access this information.
If data users are seeking to understand what sexual acts and contextual factors are included in a data collection effort’s measurement of sexual violence, they may read the definitions of terms contained in reports of those data collections. If the definitions of the terms differ from what the data collection effort includes in its measurement of those terms, data users may lack clarity about what acts of sexual violence and contextual factors the efforts are including in their measurements of sexual violence.

Federal agencies generally collect data on sexual violence within one of two contexts—criminal justice or public health. For the purposes of this report, “criminal justice” describes data collection efforts that refer to acts of sexual violence as “crimes” or “offenses.” “Public health” describes data collection efforts that seek to understand the health implications of acts of sexual violence. Of the 10 data collection efforts within our scope, 7 collect data primarily in a criminal justice context, 2 collect data primarily in a public health context, and 1 combines both contexts. Table 11 in app. II outlines which data collection efforts fall into each category.

According to agency officials, context can determine how each data collection effort is designed and how the data are collected—and context may inform what is included in the measurements of sexual violence in the data collection efforts. The data collection efforts of BJS and FBI included in this report have a criminal justice focus, intended to collect information on crimes, victims, or trends. CDC officials stated that their data collection efforts—NEISS-AIP and NISVS—which both have a public health focus, are more concerned with assessing the health impacts of victimization and informing violence prevention efforts and less concerned with categorizing incidents as crimes. In some public health surveys, interviewers begin by asking basic health and lifestyle questions to establish a rapport with the interviewee and to introduce the concepts of public health and experiences rather than crimes and criminal events. According to BJS and CDC officials, some studies with a criminal justice focus may ask questions about sexual violence as crimes that have occurred, whereas some studies with a public health focus may ask questions about violent sexual experiences by describing specific acts of sexual violence while avoiding criminal terminology. For instance, NISVS does not use the word “rape” when questioning interviewees, whereas NCVS asks respondents whether they have been attacked—for example, by rape, attempted rape, or another type of sexual attack. NISVS program documentation states that the term “rape” may carry a stigma or have different meanings to different people, so the survey poses multiple questions about behaviorally specific sexual acts without using the term.

Federal sexual violence data primarily come from two sources: information reported to authorities and information obtained from victim surveys. Some data collection efforts compile information reported to relevant authorities. Other federal data collection efforts obtain their data from surveys, in which agencies attempt to identify victims from a larger population and invite them to share information about their experiences with sexual violence. Table 12 in app. II outlines which data collections fall into each category.
Information reported to authorities may originate from situations in which a victim or observer reports an alleged act of sexual violence to law enforcement, campus, or prison authorities or to military officials. Data collection efforts vary in how and to whom the information is submitted. For DSAID, restricted and unrestricted reports of sexual violence are made by victims to sexual assault response coordinators or victim advocates, who input information on the incident into DSAID. For Clery Act data, by contrast, all institutions of higher education that receive federal student financial aid are required to report campus security data (including information on sexual violence) to the Department of Education. Some data collection efforts obtain information on sexual violence through surveys. Each survey uses different methods to collect data from its subjects. For instance, NISVS employs a random-digit-dialing telephone survey, while the NCVS uses a mix of face-to-face and telephone interviewing.

Both types of data sources involve tradeoffs. With respect to information reported to authorities, according to agency documentation and a senior official from a law enforcement special interest group, data collection efforts that provide information on crimes reported to authorities are useful for administrative and funding decisions related to law enforcement. For example, the Bureau of Justice Assistance uses UCR-SRS data, in part, to determine how much grant funding should be awarded to state, local, and tribal governments through the Edward Byrne Memorial Justice Assistance Grant program, which made $255.7 million in funding available to states, territories, and localities in fiscal year 2015. However, one limitation is that these data may underestimate the scope of the problem, since acts of sexual violence are historically underreported to authorities. As such, obtaining information through a survey may identify more instances of sexual violence than efforts that rely on information reported to authorities.

However, surveys have their own limitations. Surveys are subject to variable response rates over time, and different surveys may have different response rates, which may affect the resulting estimates and the validity of the data. For example, response rates of the data collection efforts included in our review range from 24 percent for WGRA in 2012 to 33 percent for NISVS in 2011 to 84 percent for NCVS in 2014. Survey results may also be subject to response biases—for example, the tendency of a respondent to provide untruthful but socially acceptable responses, or the tendency of individuals who either have or have not experienced the action (e.g., sexual violence) to not participate in the survey (which can lead to nonresponse bias). Also, according to BJS officials, obtaining information directly from victims creates a burden on survey respondents and interview subjects, and research, as well as officials from an entity that uses federal data on sexual violence, indicates that such surveys may face difficulties in getting people to discuss victimization experiences with strangers during interviews. Furthermore, administrative costs associated with surveys and interviews can affect the practical frequency of data collection.
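The effect of differential nonresponse noted above can be illustrated with a toy calculation. The following minimal Python sketch uses entirely hypothetical numbers—they do not correspond to any of the surveys discussed in this report—to show how a prevalence estimate can be biased downward if victims respond at a lower rate than nonvictims.

    # Toy illustration of nonresponse bias; all figures are hypothetical.
    population = 100_000
    true_victims = 10_000  # true prevalence: 10 percent

    # Assume victims are less willing to respond than nonvictims.
    victim_response_rate = 0.20
    nonvictim_response_rate = 0.35

    responding_victims = true_victims * victim_response_rate  # 2,000
    responding_nonvictims = (population - true_victims) * nonvictim_response_rate  # 31,500

    # A naive estimate based only on respondents understates prevalence.
    estimated = responding_victims / (responding_victims + responding_nonvictims)
    print(f"true: 10.0%, estimated: {estimated:.1%}")  # estimated: ~6.0%

If the two response rates were equal, the naive estimate would match the true prevalence; the bias arises only from the differential, which is why survey designers focus on minimizing nonresponse bias rather than on the overall response rate alone.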
Different federal data collection efforts measure and report different aspects of the occurrence of sexual violence. Some data collection efforts report the number of incidents that involved an act of sexual violence, some report the number of unique victims of sexual violence, and some report information about the number of times an act of sexual violence occurred. On the surface it may appear that the number of incidents that involve an act of sexual violence and the number of times an act of sexual violence occurred are synonymous, but that is not necessarily the case. Multiple offenses could occur in the same incident, and one incident could involve multiple victims. For example, a perpetrator could both rob and sexually assault someone in the same incident, or a perpetrator could carry out sexually violent acts against multiple victims in the same incident. Table 3 outlines the units of measurement that each data collection effort reports.

The 4 data collection efforts that report the number of times an act of sexual violence occurred—NCVS, NIS, UCR-SRS, and UCR-NIBRS—may not record the exact number of acts of sexual violence from each individual incident in some cases. These data collection efforts count one act of sexual violence per victim per incident, meaning they would capture acts against multiple victims in a single incident but not multiple acts against each individual victim.

The data source underlying each federal data collection effort is an important determinant of the unit of measurement used in reporting results. Data collection efforts that use information reported to authorities generally measure the number of incidents or reports that involve sexual violence. Those efforts may or may not publish a count of the number of acts of sexual violence that occurred. Such efforts are generally not set up to count the number of unique victims across incidents; they may report a count of victims, but they have no mechanism to avoid double-counting victims who experience multiple incidents and thus do not measure the same quantity as surveys that seek to report a number of unique victims. By contrast, data collection efforts that use information obtained from victim surveys generally measure and report the number of victims rather than the number of incidents that involved sexual violence. These efforts also collect data on the number of separate times each individual respondent has been a victim of sexual violence. However, the agencies operating those studies are cognizant of the challenges associated with asking respondents about multiple incidents, particularly with respect to the respondent’s ability to accurately recall multiple experiences of sexual violence. For example, a DOD official stated that WGRA asks questions about how many times a crime has been committed against the respondent, but DOD does not report a total number of times that an act of sexual violence occurred because respondents’ recall of multiple events may be subject to memory biases and confirming details of each act may be overly burdensome in a survey. A simplified sketch illustrating how these units of measurement can diverge follows.
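The following is a minimal sketch in Python of how the same underlying events yield different figures under different units of measurement. The record layout is invented for illustration and does not reflect any actual federal data file; the capping rule is the one-act-per-victim-per-incident convention described above.

    # Hypothetical incident records: three incidents, two distinct victims.
    incidents = [
        {"id": 1, "acts_per_victim": {"A": 2}},         # two acts against victim A
        {"id": 2, "acts_per_victim": {"A": 1, "B": 1}}, # two victims in one incident
        {"id": 3, "acts_per_victim": {"A": 1}},         # victim A again
    ]

    # Unit 1: number of incidents involving an act of sexual violence.
    num_incidents = len(incidents)  # 3

    # Unit 2: number of unique victims across all incidents (victim A counted once).
    unique_victims = set().union(*(i["acts_per_victim"].keys() for i in incidents))  # {A, B}

    # Unit 3: number of acts, capped at one per victim per incident, so victim A's
    # two acts in incident 1 count once.
    capped_acts = sum(len(i["acts_per_victim"]) for i in incidents)  # 1 + 2 + 1 = 4

    print(num_incidents, len(unique_victims), capped_acts)  # 3 2 4

The same three hypothetical incidents thus yield three different figures—3 incidents, 2 unique victims, and 4 capped acts—depending on which unit of measurement an effort reports.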
Federal agencies also collect sexual violence data for different periods of time and report the data at different frequencies. Table 13 in app. II outlines the time frames for each federal data collection effort. The data collection efforts cover different intervals of time during which an act of sexual violence occurred. For instance, NISVS asks whether each respondent has experienced sexual violence during the previous 12 months and during the respondent’s lifetime, while NCVS asks whether each respondent has experienced sexual violence during the previous 6 months. By contrast, the data collection efforts that compile reports from authorities may capture information at the point in time when the event was reported to those authorities.

Some data collection efforts release their results annually, whereas others do so less often. For example, NISVS collected data annually except in 2014 and issues reports periodically; it most recently reported results to the public in 2014, using data collected in 2011. Most data collection efforts that compile reports from authorities release their results annually. One expert we interviewed stated that the period of time for which data on sexual violence are collected and how often data are publicly reported affect each data collection effort’s results regarding the occurrence of sexual violence, an observation also found in academic literature and a nongovernmental report. For example, data on lifetime experience of sexual violence may yield larger estimates of rape and sexual assault than data on experiences of sexual violence in the last 6 or 12 months. According to officials from 2 entities that use federal data on sexual violence, data that are reported annually may be more useful for trend analysis than data reported less frequently.

The differences across the data collection efforts may hinder understanding of the extent of sexual violence. Agencies have taken steps to clarify differences and harmonize the data collection efforts; however, these efforts have been fragmented, and more could be done to increase understanding of the problem of sexual violence. Collectively, the differences across federal data collection efforts lead to differing estimates of sexual violence—rape, for example—in the United States, as shown in selected data collection efforts on the general population in table 4.

According to research and individuals we spoke with who are familiar with the data, differences in federal data on sexual violence may confuse the public. A National Academy of Sciences panel that studied the incidence of rape in the United States reported in 2014 that the data collection efforts’ different purposes and methodologies produce different results, which creates confusion for the public, law enforcement, policymakers, researchers, and victim advocacy groups. Additionally, officials from four entities that use federal data told us that they believe the public does not understand data from federal sources on sexual violence. Officials from three entities that use federal data on sexual violence stated that they—and the media—cite a range of sources and may not always, or adequately, explain the details of the data collection efforts. In addition, the public may not take the time to understand the differences among the data collection efforts. For example, an official from one entity told us that the entity frequently uses the results from one particular data collection effort to educate the public on sexual violence, but the official was not aware of certain methodological details and limitations of the data. Also, as previously discussed, some data collection efforts’ measurements and definitions do not align, and information on what is included in these measurements is not publicly available, which may lead to confusion for data users.
Further, officials from the federal agencies and entities we spoke with that use federal data on sexual violence emphasized that the differences across the data collection efforts are such that the results are not comparable. Officials we spoke with who use the data stated that differences in measurements, definitions, and methodology across the data collection efforts can lead to confusion. Officials at one entity stated that they found challenges in using federal data on sexual violence because varying measurements and definitions across the data collection efforts make it difficult to compare data. However, even in instances where the acts of sexual violence and contextual factors that are included in measurements and definitions are similar across data collection efforts, other differences create challenges. For example, officials at the National Center for Campus Public Safety stated that although the Clery Act data program and UCR-SRS use the same definition of rape, their methodologies are different—the Clery Act data program collects information on allegations made in “good faith,” whereas the UCR-SRS includes only information on incidents resulting in a police report—which results in different estimates of rape and may lead to confusion for users who try to compare the data.

Because there is wide variation in the results, entities that use federal data on sexual violence have a choice of which data to use, and entities reported using data that best suited their needs. For example, officials from one entity told us that they use NCVS data because the effort includes information on incidents not reported to the police and is user-friendly, and officials from another entity told us they use NCVS data because it has a larger sample size than other data collection efforts. Officials from another entity stated that they use NISVS because it includes the most expansive set of acts of sexual violence and contextual factors in its measurement of “rape” and estimates lifetime prevalence rates.

Federal agencies have acknowledged that differences exist among data collection efforts—for example, in terms of methodology, context, and data sources—which has led some agencies to take steps to identify and explain differences across the data collection efforts. In addition, some federal agencies have taken steps to lessen the differences among data collection efforts by focusing on harmonization—that is, coordination of practices to enhance data collection to achieve a shared goal. However, such efforts have been fragmented—that is, they are limited in scope and generally involve two data collection efforts at a time. Two ongoing efforts are intended to clarify the differences across two data collection efforts:

BJS and FBI coauthored a statement describing the differences between UCR and NCVS. In 1995, BJS and FBI coauthored a statement entitled “The Nation’s Two Crime Measures,” which describes similarities and differences between the UCR program and NCVS; the statement was updated in September 2014. BJS and FBI post similar but different versions of the statement on their websites. BJS’s statement provides a side-by-side description of FBI’s UCR program and BJS’s NCVS including, for example, information on historical background, data sources, and time frames for data collection and reporting.
Both statements also include a section on comparing UCR and NCVS data, which describes the data collection efforts’ similarities (e.g., both cover somewhat similar subsets of serious crimes, such as rape, robbery, aggravated assault, burglary, theft, and motor vehicle theft) and key differences (e.g., definitions of certain crimes). The statements conclude with a description of the two data collection efforts’ strengths (e.g., UCR provides data on the number of crimes reported to law enforcement, and NCVS provides data on the number and types of crimes not reported to law enforcement).

CDC and BJS have discussed publishing a statement that compares sexual violence statistics in NISVS and NCVS. At a meeting in November 2015, CDC and BJS discussed coauthoring a statement about NISVS and NCVS that would describe the differences and similarities of the two data collection efforts.

There are five efforts underway or recently implemented that are intended to increase harmonization across the data collection efforts, including:

Education adopted FBI’s UCR-SRS definition of rape for use in the Clery Act data. Education, in its 2014 rule implementing the VAWA 2013 reauthorization, changed the definition of rape that is used in the Clery Act data to match UCR-SRS’s definition. The definition used by both UCR-SRS and the Clery Act data is “Penetration, no matter how slight, of the vagina or anus with any body part or object or oral penetration by a sex organ of another person, without the consent of the victim.” Education began collecting data using the new definition of rape for calendar year 2014.

BJS sponsored the National Academy of Sciences’ CNSTAT Panel on Estimating the Incidence of Rape and Sexual Assault in BJS Household Surveys. In March 2011, BJS charged the panel to “assess the quality and relevance of statistics on rape and sexual assault from NCVS and other surveys contracted for by other federal agencies as well as surveys conducted by private organizations,” examining issues such as the “legal definitions in use by the states for these crimes, best methods for representing the definitions in survey instruments so that their meaning is clear to respondents, and best methods for obtaining as complete reporting as possible of these crimes in surveys, including methods whereby respondents may report anonymously.” The panel, which was composed of experts in research and sexual assault response, held five in-person meetings and issued 15 recommendations in a final report published in 2014. For example, the panel recommended that BJS’s definitions of sexual violence be expanded to include victimizations in which the victim does not have the capacity to consent to the sexual actions of the offender, and that this research be conducted in a coordinated manner because many of the issues to be investigated are interrelated. The panel also recommended that the survey questionnaire have a neutral context, such as a health survey. BJS officials told us that some of their current work addresses some of the panel’s recommendations. For example, BJS is currently conducting a methodological comparison of NCVS with a public health approach that includes a five-city comparison study and consultation with CDC. The five-city comparison study involves an expanded scope of sexual violence, which includes questions regarding consent, and BJS is testing a range of behavior-specific questions. BJS officials told us that they plan to issue a report on the progress of the project in spring 2016.
BJS and FBI commissioned a National Academy of Sciences’ CNSTAT Panel on Modernizing the Nation’s Crime Statistics. Commissioned in 2013, the panel will assess and make recommendations for the development of a modern set of crime measures in the United States and the best means for obtaining them. The review will focus, among other things, on full and accurate measurement of criminal victimization events and their attributes, considering types of crime (and their definitions), including the current scope of crime types covered by existing FBI and BJS data collections; gaps in knowledge of contemporary crime; development of international crime classification frameworks that should be considered in increasing international comparability; and the optimal scope of crime statistics to serve the needs of the full array of data users and stakeholders—federal agencies, other law enforcement agencies, Congress, other actors in the justice system (such as the courts and corrections officials), researchers, and the general public. Panel membership includes academics in the fields of criminal justice and statistics and stakeholders who use and provide the data that the government collects. According to the chair of the panel, as part of the first phase of work, the panel developed an initial conceptualization for classifying all types of crime, including rape and sexual assault. The recommended classification and its justification appear in the panel’s first report, which was released in May 2016. The panel has also begun the second phase of its work, which will suggest the means for gathering data for the comprehensive crime classification, including information from non-BJS or FBI sources, and recommend how crime data collection should proceed in practice. The panel plans to consider possible coordination among agencies to produce more comprehensive reports on data, instead of one agency producing one report and another agency producing a separate but related report. The panel intends to finish its second phase of work in 2016 and issue a final report in early 2017.

CDC, partnering with BJS, plans to convene a Technical Expert Panel to examine ways to improve NISVS. As part of OMB’s review of a CDC information collection proposal, OMB requested that CDC and BJS officials convene a panel of experts in survey methods to improve NISVS’s methodology, including increasing the response rate and minimizing nonresponse bias. The agencies identified and recruited a panel of experts and plan to meet in spring 2016.

CDC provided DOD with an adapted dataset from NISVS. In 2010, NISVS included two random samples of active duty women and wives of active duty men in addition to a random sample of the general U.S. population; data were collected in the first two quarters of 2010 using identical survey methods. According to DOD officials, CDC provided DOD with a subset of NISVS data covering crimes that fall under military law, enabling an “apples to apples” comparison of CDC and DOD data. CDC officials informed us that they are working on a follow-up military population study in 2016.

However, these various federal efforts to clarify and harmonize sexual violence data have been fragmented.
CNSTAT’s Principles and Practices for a Federal Statistical Agency (2013) calls for federal agencies with different missions that produce similar federal statistics to coordinate and collaborate with each other to meet current information needs and provide new or more useful data than a single system can provide. While the guidance applies primarily to the 13 federal statistical agencies, OMB officials stated that the report provides best practices for all federal data collection activities. The guidance encourages collaborative interagency efforts and highlights the importance of agencies developing standard definitions as a way to maximize the value and comparability of data. However, the coordination that has occurred across the agencies that collect data on sexual violence has been limited. Specifically, coordination has been bilateral—generally involving only 2 of the 10 data collection efforts at a time—and limited in scope. Agency officials expressed skepticism that broader harmonization efforts could benefit federal data on sexual violence, stating that each data collection effort is designed—through target population, measurements and definitions, and methodology—to fulfill a certain purpose, and that changes to any of these design elements may undermine that purpose. However, harmonization does not necessarily entail making the data collection efforts identical; instead, it could entail agencies considering how they could make their efforts more complementary and, as appropriate, more alike without compromising their programmatic needs.

The Paperwork Reduction Act (PRA), among other things, established a process for OMB to oversee agency information collection efforts in order to improve the quality and use of federal information while reducing collection burdens, including through the coordination of federal statistics. Under its PRA authority, OMB has convened interagency working groups to assess differences across data collection efforts and determine which of those differences are beneficial and which are unnecessary. For example, OMB has convened the following interagency groups:

The Interagency Working Group for Research on Race and Ethnicity was formed in 2014 to exchange research findings, identify implementation issues, and collaborate on a shared research agenda to improve federal statistics on race and ethnicity.

The Interagency Working Group on Measuring Relationships in Federal Household Surveys, established in 2010, convenes representatives from a variety of federal agencies involved in the collection, dissemination, or use of household relationship data to address the challenges in measuring household relationships, including same-sex couples.

The Federal Interagency Forum on Child and Family Statistics was formally established in 1997 to develop priorities for collecting enhanced data on children and youth, improve the reporting and dissemination of information on the status of children to the policy community and the general public, and produce more complete data on children at the state and local levels.

We asked OMB if there are plans to convene a similar group for harmonizing data on sexual violence. OMB staff stated that they did not have plans to form an interagency group on the topic; instead, they plan to invest limited resources strategically by engaging with BJS on its redesign of NCVS and with CDC on the information quality of NISVS.
OMB staff also stated that they plan to ensure that both agencies take advantage of the insights gained as each agency undergoes redesign and technical consultations in the next couple of years. However, other data collection efforts, in addition to NCVS and NISVS, also influence policy decisions on sexual violence. Depending upon the outcome of the work being conducted by BJS and CDC, OMB may encourage other data collection efforts to adapt or adopt insights gained, as appropriate to their respective programmatic missions.

In the absence of broader harmonization efforts, agency sexual violence data continue to be inconsistent and incomparable, leading to confusion about the data and a lack of clarity about the scope of the problem of sexual violence in the United States. Differences in data collection efforts—particularly in terms of what is included in measurements and definitions of sexual violence, and in methodologies—can collectively lead to confusion. Without publicly-available information on which acts of sexual violence and contextual factors are included in the measurements of sexual violence, data users may lack clarity about what each data collection effort’s results represent. Additionally, entities that use federal data may misunderstand the data and develop policies that are not based on the full extent of the problem. In the absence of collaboration among agencies that manage data collection efforts, it is unclear which differences enhance and which impair the overall understanding of sexual violence, and as a result, policy makers and the public lack coordinated information by which to address the problem.

To enhance the clarity and transparency of sexual violence data that are reported to the public, we recommend that the Secretary of Education direct the Assistant Secretary for the Office of Postsecondary Education, the Secretary of Health and Human Services direct the Director of CDC, and the Attorney General direct the Director of BJS to make information on the acts of sexual violence and contextual factors that are included in their measurements of sexual violence publicly available. This effort could entail revising their definitions of key terms used to describe sexual violence so that the definitions match the measurements of sexual violence.

To help lessen confusion among the public and policy makers regarding federal data on sexual violence, we recommend that the Director of OMB establish a federal interagency forum on sexual violence statistics. The forum should consider the broad range of differences across the data collection efforts to assess which differences enhance or hinder the overall understanding of sexual violence in the United States.

We provided a copy of our report to DOD, Education, HHS, DOJ, and OMB for their review and comment. The agencies provided technical comments, which we incorporated as appropriate. Education, HHS, and DOJ also provided written comments, which are reprinted in appendices IV, V, and VI, respectively. The OMB liaison to GAO provided us with comments via email, which are summarized below. DOJ, Education, and HHS agreed with our recommendation that, in order to enhance the clarity and transparency of sexual violence data, they should make information on the acts of sexual violence and contextual factors that are included in their measurements of sexual violence publicly available. In their written comments, DOJ and Education described actions they have recently taken or plan to take to implement the recommendation.
DOJ stated that beginning in calendar year 2017, BJS will provide the exact computer code used to construct its measures of sexual violence, as well as additional information on how sexual violence is defined and measured. Education stated that in June 2016, the department released an updated version of The Handbook for Campus Safety and Security Reporting, which provides additional details on what acts of sexual violence and contextual factors are included in the data collection effort’s sexual violence measurements. HHS stated that the department is committed to improving the quality of the data and the clarity of the descriptions and definitions of sexual violence.

In an email responding to our recommendation that OMB establish a federal interagency forum on sexual violence statistics, OMB stated that it did not believe convening a forum at this time was the most strategic use of resources. OMB stated that other interagency groups it has convened typically concerned statistical methods or measurement issues that would affect a wide swath of government and for which OMB guidance or a best practice working paper would be forthcoming. OMB noted that only four agencies are involved in collecting sexual violence data, and it regarded none as conducting research that is far enough along for OMB to develop guidance or identify best practices at this time. OMB does, however, plan to follow closely and participate in CDC’s and BJS’s ongoing technical work, and it will consider convening or sharing information across agencies when that work is further along. We understand the importance of allowing time for a data collection effort to mature before providing guidance or best practices. However, considering that 7 of the 10 data collection efforts have been in place for more than 10 years, and several have been in place for multiple decades, we disagree with OMB’s assertion that none of the data collection efforts is far enough along for OMB to provide guidance and best practices.

DOJ and Education also commented on the recommendation to OMB. DOJ stated that BJS welcomes OMB efforts to coordinate data collection and reporting on sexual violence and stands ready to participate in an interagency forum. Education stated that efforts to “harmonize” definitions should not be pursued solely for the sake of symmetry; rather, the focus should be on the needs of each individual program. We agree that the data collection efforts should continue to meet the needs of individual agencies; however, considering the number of federal data collection efforts, the range of differences across them, and the potential for causing confusion, it would be beneficial for agencies to discuss these differences and determine whether they are, in fact, necessary.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director of OMB, the Attorney General, the Secretaries of Defense, Education, and Health and Human Services, and other interested parties. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix VII.

Our objectives for this report were to address the following questions: (1) What are the federal efforts underway to collect data on sexual violence, and how, if at all, do these efforts differ? (2) How do any differences across the data collection efforts affect the understanding of sexual violence, and to what extent are federal agencies addressing any challenges posed by the differences?

To address the first question, we identified federal efforts to collect data on sexual violence for which the data: provided information on the extent to which acts of sexual violence occur in the United States in a particular year (for example, the number of times a rape or sexual assault has occurred or the number of victims of rape and sexual assault); were collected recently (i.e., in 2010 or after); were collected periodically (i.e., at least once every 2 years); were reported publicly; and were not focused primarily on minors. To identify efforts that met these criteria, we reviewed past GAO reports and a Federal Bureau of Investigation (FBI) list of federal agencies that may collect crime data, and we asked officials at those agencies if they had any data collection efforts that met our criteria. Additionally, we asked experts in the field (for example, academic researchers) and officials from victim advocacy groups and other special interest groups about any additional federal data collection efforts they were aware of that met our selection criteria. We initially identified experts and entities that use federal data on sexual violence by conducting background research. We then interviewed experts and officials from the entities identified in our research, and from these contacts we identified additional entities that use federal data on sexual violence. In all, we spoke with officials from three victim advocacy groups, five other special interest groups (for example, law enforcement associations and campus safety groups), and three other federal agencies, as well as two academic experts. Based on this work, we identified 10 data collection efforts that met our criteria across four federal agencies: the Departments of Defense, Education, Health and Human Services, and Justice.

To identify and describe differences across the data collection efforts, we obtained information on the purpose, scope, and methodology of each data collection effort. We obtained this information through a review of documents, such as user manuals and program descriptions, and through interviews with senior agency officials, senior officials at entities that use federal data on sexual violence, and academic experts. Using documentary and testimonial information, we compared the similarities and differences of the data collection efforts with respect to target population; context in which data were collected; source of the data; unit of measurement; time frames (for example, when data are collected and how often data are reported by the federal agency); and terminology and measurements of sexual violence. To compare similarities and differences in terminology of sexual violence, we used agency documents to identify, for each data collection effort, the terms used to describe sexual violence. To identify, for each data collection effort, what acts of sexual violence and contextual factors are included in measurements of sexual violence, we reviewed agency documentation and interviewed agency officials.
To identify how the differences affect understanding of sexual violence, we obtained and reviewed federal reports and interviewed, and reviewed relevant documentation from, agency officials, experts, and officials from entities that use federal data on sexual violence. We asked these officials and experts whether, in their experience, any of the differences made it difficult for people who may use the data (e.g., Congress, policy makers, academics, the general public) to understand the extent to which sexual violence occurs in the United States. We also asked them to identify any difficulties or challenges, of which they were aware, that have resulted from the differences across federal efforts to collect data on sexual violence. Because these officials and experts were not selected as a representative sample, the information obtained from these interviews applies solely to this set of officials and experts and cannot be generalized to others.

We also reviewed articles, conference papers, and government and nongovernment reports that discuss differences across federal sexual violence data collection efforts. To identify articles, a research librarian conducted a search of several bibliographic databases, such as ProQuest, Embase, and Scopus, using terms such as “rape data” or “sexual assault statistics,” among others. The search looked for peer-reviewed articles, books, and conference papers published during or after 2005. This search yielded 36 publications, 16 of which were relevant to our research objective on the impact of the differences across the data collection efforts and 20 of which were not. In reviewing the identified publications, we found and reviewed an additional 9 articles and reports that were pertinent. See app. III for a list of articles and reports that we reviewed and determined to be relevant for our analysis. The librarian-assisted literature search was conducted in August 2015, and we reviewed literature from that search and identified additional sources from August 2015 to May 2016.

To describe the extent to which federal agencies are addressing any challenges posed by differences across the data collection efforts, we interviewed senior agency officials and academic experts and obtained relevant documentation. We asked agency officials what, if any, steps their agency has taken, or planned to take, to address some of the difficulties or challenges that may have resulted from the differences across the data collection efforts. We also asked agency officials, as well as officials from entities that use federal data on sexual violence, if they were aware of any additional steps being taken by other federal agencies, state agencies, or other national entities to address some of the difficulties or challenges that may have resulted from the differences across the data collection efforts.

We conducted this performance audit from March 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Tables 5 through 10 provide information on the acts of sexual violence and the contextual factors included in the measurements of sexual violence by federal data collection efforts identified by GAO.
National Academy of Sciences, Committee on National Statistics. Estimating the Incidence of Rape and Sexual Assault. Washington, D.C.: National Academies Press, 2014.

“Overcoming Challenges Related to Data Collection and Measurement.” Forced Migration Review, no. 27 (2007): 28-29.

Addington, L. and C. Rennison. “Rape Co-Occurrence: Do Additional Crimes Affect Victim Reporting and Police Clearance of Rape?” Journal of Quantitative Criminology, vol. 24, no. 2 (2008): 205-226.

Bachman, R. “Measuring Rape and Sexual Assault: Successive Approximations to Consensus.” A paper commissioned for the National Academy of Sciences, June 6, 2012.

Campbell, R., A. E. Adams, and D. Patterson. “Methodological Challenges of Collecting Evaluation Data from Traumatized Clients/Consumers—A Comparison of Three Methods.” American Journal of Evaluation, vol. 29, no. 3 (2008): 369-381.

Chen, Y. and S. E. Ullman. “Women’s Reporting of Sexual and Physical Assaults to Police in the National Violence Against Women Survey.” Violence Against Women, vol. 16, no. 3 (2010): 262-279.

Clay-Warner, J. and J. McMahon-Howard. “Rape Reporting: ‘Classic Rape’ and the Behavior of Law.” Violence and Victims, vol. 24, no. 6 (2009): 723-743.

Cohen, M. A. and A. R. Piquero. “New Evidence on the Monetary Value of Saving a High Risk Youth.” Journal of Quantitative Criminology, vol. 25 (2008).

Cohn, A. M., H. M. Zinzow, H. S. Resnick, and D. G. Kilpatrick. “Correlates of Reasons for Not Reporting Rape to Police: Results from a National Telephone Household Probability Sample of Women with Forcible or Drug-or-Alcohol Facilitated/Incapacitated Rape.” Journal of Interpersonal Violence, vol. 28, no. 3 (2013): 455-473.

Cook, S. L., C. A. Gidycz, M. P. Koss, and M. Murphy. “Emerging Issues in the Measurement of Rape Victimization.” Violence Against Women, vol. 17, no. 2 (2011): 201-218.

Delisi, M., A. Koloski, M. Sweeny, E. Hachmeister, M. Moore, and A. Drury. “Murder by Numbers: Monetary Costs Imposed by a Sample of Homicide Offenders.” The Journal of Forensic Psychiatry & Psychology, vol. 21, no. 4 (2010).

Du Mont, J., K. Miller, and T. L. Myhr. “The Role of ‘Real Rape’ and ‘Real Victim’ Stereotypes in the Police Reporting Practices of Sexually Assaulted Women.” Violence Against Women, vol. 9, no. 4 (2003): 466-486.

Fang, H., M. T. French, and K. E. McCollister. “The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation.” Drug and Alcohol Dependence, vol. 108, no. 1-2 (2010).

Fisher, B. S. “The Effects of Survey Question Wording on Rape Estimates: Evidence from a Quasi-Experimental Design.” Violence Against Women, vol. 15, no. 2 (2009): 133-147.

Gardella, J. H., C. A. Nichols-Hadeed, J. M. Mastrocinque, J. T. Stone, C. A. Coates, C. J. Sly, and C. Cerulli. “Beyond Clery Act Statistics: A Closer Look at College Victimization Based on Self-Report Data.” Journal of Interpersonal Violence, vol. 30, no. 4 (2015).

Lynch, J. P. “Clarifying Divergent Estimates of Rape from Two National Surveys.” The Public Opinion Quarterly, vol. 60, no. 3 (1996): 410-430.

Lynch, J. P. and J. L. Lauritsen. “Modernizing the Nation’s Crime Statistics.” The Criminologist, vol. 40, no. 2 (2015).

Miller, T. R., M. A. Cohen, and B. Wiersema. “Victim Costs and Consequences: A New Look.” A final summary report presented to the National Institute of Justice, January 1996.

Murphy, S. B., K. M. Edwards, S. Bennett, S. J. Bibeau, and J. Sichelstiel. “Police Reporting Practices for Sexual Assault Cases in which ‘the Victim does Not Wish to Pursue Charges’.” Journal of Interpersonal Violence, vol. 29, no. 1 (2014): 144-156.

Palermo, T. and A. Peterman. “Undercounting, Overcounting and the Longevity of Flawed Estimates: Statistics on Sexual Violence in Conflict.” Bulletin of the World Health Organization, vol. 89, no. 12 (2011): 924-925.

Saltzman, L. E., K. C. Basile, R. R. Mahendra, M. Steenkamp, E. Ingram, and R. Ikeda. “National Estimates of Sexual Violence Treated in Emergency Departments.” Annals of Emergency Medicine, vol. 49, no. 2 (2007): 210-217.

Simon, T. R., M. Kresnow, and R. M. Bossarte. “Self-Reports of Violent Victimization among U.S. Adults.” Violence and Victims, vol. 23, no. 6 (2008): 711-726.

Violence Against Women: A Statistical Overview, Challenges and Gaps in Data Collection and Methodology and Approaches for Overcoming Them. A report of the expert group meeting organized by the UN Division for the Advancement of Women in collaboration with the Economic Commission for Europe and World Health Organization, April 11-14, 2005.

Weiss, K. G. “‘You Just Don’t Report that Kind of Stuff’: Investigating Teens’ Ambivalence Toward Peer-Perpetrated, Unwanted Sexual Incidents.” Violence and Victims, vol. 28, no. 2 (2013): 288-302.

Wolitzky-Taylor, K. B., H. S. Resnick, J. L. McCauley, A. B. Amstadter, D. G. Kilpatrick, and K. J. Ruggiero. “Is Reporting of Rape on the Rise? A Comparison of Women with Reported Versus Unreported Rape Experiences in the National Women’s Study-Replication.” Journal of Interpersonal Violence, vol. 26, no. 4 (2011): 807-832.

In addition to the contact named above, individuals making key contributions to this report were Kristy Love, Assistant Director; Meghan Squires, Analyst-in-Charge; Tim Young; Kirsten Leikem; Janelle House; David Alexander; and David Plocher. Diana Maurer, Tom Jessor, Janet Temko-Blinder, Tovah Rom, and Eric Hauswirth also provided valuable assistance.
Concerns have grown about sexual violence—in general, unwanted sexual acts—in the United States, particularly involving certain populations such as college students, incarcerated individuals, and military personnel. Data on the occurrence of sexual violence are critical to preventing, addressing, and understanding the consequences of these types of crimes. GAO was asked to identify and compare federal efforts to collect data on sexual violence. This report addresses two questions: (1) What are the federal efforts underway to collect data on sexual violence, and how, if at all, do these efforts differ? (2) How do any differences across the data collection efforts affect the understanding of sexual violence, and to what extent are federal agencies addressing any challenges posed by the differences? GAO reviewed agency documentation and academic literature, and interviewed agency officials.

Four federal agencies—the Departments of Defense, Education, Health and Human Services (HHS), and Justice (DOJ)—manage at least 10 efforts to collect data on sexual violence, which differ in target population, terminology, measurements, and methodology. Some of these data collection efforts focus on a specific population that the agency serves—for example, the incarcerated population—while others include information from the general population. These data collection efforts use 23 different terms to describe sexual violence. Data collection efforts also differ in how they categorize particular acts of sexual violence. For example, the same act of sexual violence could be categorized by one data collection effort as “rape,” whereas it could be categorized by other efforts as “assault-sexual” or “nonconsensual sexual acts,” among other terms. In addition, five data collection efforts—overseen by Education, HHS, and DOJ—reflect inconsistencies between their measurements and definitions of sexual violence. Further, these data collection efforts do not have publicly available descriptions of what is included in their respective measurements that would allow persons using the data to understand the differences, which may lead to confusion for data users. Publicly available measurement information could enhance the clarity and transparency of sexual violence data. Data collection efforts also differ in terms of the context in which data are collected, data sources, units of measurement, and time frames.

Differences in data collection efforts may hinder the understanding of the occurrence of sexual violence, and agencies' efforts to explain and lessen differences have been fragmented and limited in scope. Differences across the data collection efforts may address specific agency interests, but collectively, the differences lead to varying estimates of sexual violence. For example, in 2011 (the most recent year of available data), estimates ranged from 244,190 rape or sexual assault victimizations to 1,929,000 victims of rape or attempted rape. These differences can lead to confusion for the public. Officials GAO interviewed from federal agencies and from entities that use federal data on sexual violence emphasized that the differences across the data collection efforts are such that the results are not comparable, and entities reported using the data that best suited their needs. Agencies have taken some steps to clarify the differences between the data collection efforts. For example, two DOJ entities coauthored a statement that describes the differences between their two efforts.
In addition, agencies have taken some steps to harmonize the data collection efforts—that is, to coordinate practices to achieve a shared goal. However, actions to increase harmonization have been fragmented, generally involving only 2 of the 10 data collection efforts at a time, and limited in scope. The Office of Management and Budget (OMB), through its authority to coordinate federal statistics, has previously convened interagency working groups, such as the Interagency Working Group for Research on Race and Ethnicity, to improve federal statistics. OMB has no plans to convene a working group on sexual violence data. Additional collaboration among the agencies that manage data collection efforts, facilitated by OMB, about which differences help or hinder the overall understanding of sexual violence could help clarify the scope of the problem of sexual violence in the United States.

GAO recommends that Education, HHS, and DOJ make information about what is included in their measurements of sexual violence publicly available. GAO also recommends that OMB establish a federal interagency forum on sexual violence data. Education, HHS, and DOJ agreed with the recommendation. OMB stated that convening a forum may not be the most effective use of resources at this time, in part because the data collection efforts are not far enough along in their research. However, OMB said it will consider convening or sharing information across agencies in the future.
Social Security is one of the largest federal programs in the United States, providing about $546 billion in benefits in 2006 to over 49 million beneficiaries. Although the majority of Social Security benefits are paid to retirees, Social Security does much more than provide retirement income. Social Security Disability Insurance (DI) pays monthly cash benefits to nearly 7 million workers who, due to a severe long-term disability, can no longer remain in the workforce. Additionally, Social Security provides benefits to over 11 million dependents, including payments to widows and widowers as well as surviving parents and children under Survivors' Insurance (SI), plus benefits to dependent spouses and children of retired and disabled workers paid from the Old Age Insurance (or Old Age) and DI trust funds. Social Security benefits often represent a significant source of income for their recipients, providing an average of $1,051 a month (as of July 2007) to retired workers, $995 a month to widows and widowers, and $979 a month to disabled workers. Although disabled workers and dependents receive slightly lower average monthly benefits than retired workers, benefits could be particularly important to these individuals. These beneficiaries may face considerable hardships; for example, a disabling condition may make work and other activities of daily living more difficult. As a result, these beneficiaries may find it financially difficult to plan and prepare for death or disability in the way one might plan for retirement.

Social Security was never intended to provide an adequate income by itself, but instead serves as an income base on which to build. In fact, the Social Security program balances the goal of income adequacy, under which lower-income beneficiaries receive benefits that replace a higher share of their wages, with the goal of individual equity, under which beneficiaries with higher lifetime earnings and contributions receive higher benefits.

Although Social Security had originally been envisioned to include disability and survivors' insurance, the 1935 Social Security Act created only a retirement program. Over the next 40 years, the program expanded both the size and type of its benefits, introducing benefits for dependents and disabled workers (fig. 2). The first new type of benefits went to dependents, as the 1939 amendments offered payments to elderly dependent wives and widows, as well as dependent children. (Some husbands and widowers were allowed to receive these same benefits after 1950.) Creating these benefits was not only seen as socially desirable, but also offered workers and their families additional protection from risk and spent down surpluses the system had accumulated. Disability Insurance, which had been recommended by the 1938 and 1948 advisory councils, was established in 1956 to provide cash benefits to permanently disabled workers over the age of 50. The DI program was later expanded to include disabled workers under the age of 50 as well. In 1961, widows' benefits increased from 75 to 85 percent of their deceased spouse's benefits, and then to 100 percent in 1972. In addition, eligibility was extended to divorced spouses as well as to the spouses and children of disabled workers. Furthermore, benefit levels for retirees, dependents, and disabled worker beneficiaries grew during this time period.
However, in the face of solvency crises, Congress made legislative efforts to control the size of the Social Security program in the mid-1970s and early 1980s. In order to maintain trust fund solvency, major changes were enacted to reduce the growth of Social Security benefit levels from the mid-1970s to the early 1980s. Additionally, a number of legislative changes to the DI and dependents' programs eliminated or reduced certain benefits and tightened the eligibility standards for receiving other benefits. However, despite ongoing fiscal concerns, eligibility for a few dependents' and disability benefits has been expanded since 1975, suggesting an interest in protecting some vulnerable populations who may rely on Social Security for a significant portion of their monthly income. Although recent reform proposals have focused on elements intended to improve solvency, there continues to be some interest in protecting some or all DI and dependents' benefits from potential benefit reductions. Figure 3 shows how Old Age, Survivors, and Disability Insurance (OASDI) has grown financially and in terms of beneficiaries over time.

Although the 1935 act did not provide for disability and dependents' benefits, those benefits were later built upon the existing Social Security structure, and today all benefits continue to be calculated from a common formula. Dependents' benefit levels were set as fractions of the benefits owed to the person upon whom beneficiaries depended. For example, under the 1939 legislation, a widow would receive 75 percent of her deceased husband's benefits, and dependent children or spouses would receive 50 percent of the retired worker's benefits. When Congress created the DI program in 1956, it provided a lower retirement age (50) for those who were permanently and totally disabled. The same benefit formula used in computing OASI benefits was adopted for disability benefits because the original DI program treated disabled workers as being forced into premature retirement. In 1960, when Congress expanded the DI program by eliminating the requirement that disabled workers had to be 50 years old, the same benefit formula applied. Because benefit types shared a common formula, automatic indexing provisions implemented in 1972 and 1977 applied across the board.

The OASDI programs are tightly linked in other ways as well. These programs are financed through a common mechanism—payroll taxes; receipts from the payroll tax are deposited into the OASI and DI trust funds which, like the two programs, are separate but often combined in discussion and analysis of Social Security's solvency and sustainability. Furthermore, beneficiaries can receive multiple types of benefits over their lifetimes, moving into, out of, and among Social Security programs at different life stages. When disabled workers reach the full retirement age (FRA), for example, they begin to receive retirement benefits from the Old Age program in place of DI benefits; the common benefit formula keeps such individuals' benefit levels stable. In another case, a recent widow(er) might have her (or his) retirement or spousal benefits replaced with survivors' benefits based on the relative earnings of the deceased spouse. Because parents, children, or spouses may be eligible for dependents' benefits through the Old Age, Survivors, and Disability Insurance programs, a person can collect several types of Social Security benefits over a lifetime, although generally not simultaneously.
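Because every benefit type is keyed to the worker's primary benefit, the fraction-based relationships can be summarized in a few lines of code. The following Python sketch is purely illustrative, not SSA's computation: the fractions are the current-law maximums cited in this report (spouses and children up to 50 percent, widow(er)s up to 100 percent, surviving parents and children up to 75 percent), the names are hypothetical, and family-maximum caps, early-retirement reductions, and dual-entitlement rules are omitted.

# Illustrative sketch: dependents' benefits as fractions of a worker's
# primary benefit. Fractions are the maximums cited in this report;
# names are hypothetical. Family-maximum caps, early-retirement
# reductions, and dual-entitlement rules are omitted.
DEPENDENT_FRACTION = {
    "spouse": 0.50,
    "child": 0.50,
    "widow_or_widower": 1.00,
    "surviving_parent": 0.75,
    "surviving_child": 0.75,
}

def max_dependent_benefit(worker_benefit: float, dependent_type: str) -> float:
    """Upper bound on a dependent's monthly benefit, before any family maximum."""
    return round(worker_benefit * DEPENDENT_FRACTION[dependent_type], 2)

# Example: dependents of a worker with a $1,000 monthly benefit.
for kind in DEPENDENT_FRACTION:
    print(kind, max_dependent_benefit(1000.00, kind))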
The many linked pieces of Social Security could make developing a single, comprehensive reform package challenging because such a package would need to take into account all of these pieces. Under current law, Old Age benefits are generally calculated through a four-step process in which a progressive yet earnings-based formula is applied to an earnings history, and the result is then updated annually through a cost-of-living adjustment (COLA). For those who receive retirement benefits, this earnings history is generally based on the 40 years in which credited earnings were highest, with the 5 lowest-earning years dropped out (leaving the highest 35 years of indexed earnings to be included in the initial benefit calculation). Dependents' benefit levels are determined as a given percentage of Old Age benefit levels. Eligible children and spouses can receive up to 50 percent of a worker's benefit; widow(er)s can be given up to 100 percent; and surviving parents or children can collect up to 75 percent, subject to a family maximum. DI benefits are calculated similarly to Old Age benefits, but are generally based upon a shortened work history. (For more detail on how benefits are calculated, refer to app. II.)

To be eligible for DI benefits, individuals must have a specified number of recent work credits under Social Security when they first become disabled. Individuals must also demonstrate the inability to engage in substantial gainful activity by reason of a physical or mental impairment that has lasted or is expected to last for 12 continuous months or to result in death. If a determination cannot be made on medical grounds alone, SSA must also consider the applicant's age, education, and past work history. In particular, medical eligibility criteria for DI are less stringent for applicants over the age of 55.

Based on prior work, GAO has designated modernizing federal disability programs (including the DI program) as a high-risk area because of challenges that continue today. For example, GAO found that federal disability programs remain grounded in outmoded concepts that equate medical conditions with work incapacity. While SSA has taken some actions in response to prior GAO recommendations, GAO continues to believe that SSA should take a lead role in examining the fundamental causes of program problems and seek the regulatory and legislative solutions needed to modernize its programs so that they are aligned with the current state of science, medicine, technology, and labor market conditions. Moreover, SSA should continue to develop and implement strategies to better manage the accuracy, timeliness, and consistency of the programs' decision making.

Social Security's Financing

Social Security is currently financed primarily on a pay-as-you-go basis, in which payroll tax contributions of current workers are used primarily to pay for current benefits. Since the mid-1980s, the Social Security program has collected more in taxes than it has paid out in benefits. However, because of the retirement of the baby boomers, coupled with increases in life expectancy and decreases in the fertility rate, this situation will soon reverse itself. According to the Social Security Administration's 2007 intermediate assumptions, annual cash surpluses are predicted to turn into ever-growing cash deficits beginning in 2017. Absent changes to the program, these deficits are projected to deplete the Social Security DI trust fund in 2026 and the OASI trust fund in 2042, leaving the combined system unable to pay full benefits by 2041.
Reductions in benefits, increases in revenues, or a combination of both will likely be needed to restore long-term solvency. A number of proposals have been made to restore fiscal solvency to the program, and many include revenue enhancements, benefit reductions, or structural changes such as the introduction of individual accounts as a part of Social Security. Because many reforms to the benefit side of the equation would reduce benefits through changes in the benefit formula, they could affect DI and dependents' benefits as well as Old Age benefits. Unless accompanied by offsets or protections, these reforms might reduce the income of disabled workers and dependents. This situation could be challenging for these beneficiaries, as they may have relatively low incomes or higher health care costs and rely heavily on Social Security income. Many disabled workers and dependents may also have trouble taking on additional work and accumulating more savings and, thus, have difficulty preparing for Social Security benefit reductions.

Many reform elements could have a substantial impact on the benefits of Social Security recipients, including those of disabled workers and dependents. We considered six such elements that have been included in reform proposals to improve trust fund solvency. These reform elements take a variety of forms and would change either the initial benefit calculation or the growth of individual benefits over time. Our projections indicated that most of these elements would reduce benefits from currently scheduled levels for the majority of both disabled workers and dependents. That is, most would reduce median lifetime benefits for these beneficiary types—some more substantially than others. Many of these beneficiaries would also experience a reduction in total lifetime benefits, the extent of which would depend on the reform element and the individual.

Of the six reform elements we considered, five would change how initial benefits are calculated, and one would limit the growth of an individual's benefits over time:

Longevity indexing would lower the amount of the initial benefit in order to reflect projected increases in life expectancy. Such indexing would maintain relatively comparable levels of lifetime benefits across birth years by proportionally reducing the replacement factors in the initial benefit formula.

Price indexing would maintain purchasing power while slowing the growth of initial benefits. This would be accomplished by indexing initial benefits to the growth in prices rather than wages, as wages tend to increase faster than prices.

Progressive price indexing, a form of price indexing, would control costs while protecting the benefits of those beneficiaries at the lowest earnings levels (in terms of career average earnings). It would continue to index initial benefit levels to wages for those below a certain earnings threshold and employ a graduated combination of price indexing and wage indexing for those above this threshold.

Increasing the number of years used in the benefit calculation would also control program costs. For example, initial benefits could be based on the highest 40, rather than 35, years of indexed earnings. This could be done either by eliminating the 5 years normally excluded from the calculation or by increasing the total number of years factored in from 40 to 45 years.
In either of these cases, the initial Old Age benefit would be calculated using the highest 40 years of indexed earnings. (For more information on these reform elements and how we incorporated them into our microsimulation model, see app. I.)

Raising the age at which people are eligible for full retirement benefits could change the amount and/or the timing of initial benefits. Increasing the full retirement age would improve solvency by generally increasing the number of years worked, reducing the number of years benefits are received, and increasing revenue to the system through payroll taxes in the additional years worked. Further, those who retire early would have their benefits actuarially reduced.

Though it would not generally affect initial benefit amounts, a change to Social Security's cost-of-living adjustment (COLA) could also control costs and improve solvency by limiting the growth of an individual's benefits over time. The COLA adjusts benefits to account for inflation by indexing benefits to price growth annually, using the Consumer Price Index (CPI). Setting the COLA below the CPI would limit the nominal growth of an individual's benefits over time, and as such, those who receive benefits for a prolonged period of time would see the largest reductions.

According to our projections for the 1985 cohort, four of the five reform elements that we analyzed would reduce total lifetime benefits for more than three-quarters of disabled workers and dependents, relative to currently scheduled benefits. Table 1 shows the proportions of disabled workers and dependents affected by each of the reform elements. For three of the elements—reducing the COLA by one percentage point, price indexing, and longevity indexing—the percentage of disabled workers affected is very similar to the percentage of dependents affected. Moreover, for these three reform elements, more than 99 percent, or virtually all, disabled workers and dependents would see their benefits reduced. In contrast, progressive price indexing differs from other reform elements in its impact: fewer beneficiaries are affected, and the percentage of disabled workers affected varies from that of dependents. While an estimated 87 percent of dependents would experience a reduction in lifetime benefits under progressive price indexing, an estimated 77 percent of disabled workers would do so.

While the COLA reduction, longevity indexing, and price indexing are all designed in such a way that they affect virtually all beneficiaries, the COLA reduction, which has a greater impact on solvency than longevity indexing, affects relatively fewer disabled workers and dependents. This is because the COLA reduction would first affect benefits one year after the initial benefit payment was made, whereas both longevity indexing and price indexing affect the initial benefit amount. Our simulations indicated that 1.11 percent of disabled workers died within the first year of receiving benefits, while only 0.35 percent of dependents did so. Most such beneficiaries would not have received a COLA.

According to our simulations, each of the reform elements we selected would reduce median lifetime benefits for both disabled workers and dependents relative to currently scheduled benefits (figs. 4 and 5). However, our projections also indicated that these reductions would vary by reform element. Price indexing would have the largest impact on disabled workers and dependents, reducing median lifetime benefits by more than 25 percent.
Median lifetime benefits would fall from $473,960 to $343,350 for disabled workers and from $351,910 to $244,745 for dependents. Progressive price indexing, on the other hand, would create the smallest reduction in median lifetime benefits, with median lifetime benefits falling by 7 percent for disabled workers and 8 percent for dependents.

Additionally, increasing the full retirement age and increasing the number of computation years would likely reduce median lifetime benefits for dependents. Since dependent benefits are linked to those of the primary worker, an increase in the full retirement age could shorten the period of time over which they both receive benefits. Alternatively, some workers may decide not to adjust their retirement plans in response to the increase in the FRA. Those who maintain their original retirement plans, retiring prior to the new FRA, will also receive reduced benefits relative to current law. (See app. II for a discussion of how benefits are adjusted for early retirement.) Thus, under both scenarios, total lifetime benefits would be reduced, and so, too, would median lifetime benefits. A similar outcome results from increasing the number of computation years by which initial benefits are calculated. By increasing the number of computation years, a worker's earnings history is expanded to include years of possibly lower indexed earnings. As a result, total benefits for some retired workers, and therefore their dependents, would likely be reduced, as would median lifetime benefits.

Our projections suggest that, while lifetime benefits would be reduced for virtually all disabled workers and dependents, such reductions would not be uniform across individuals. Figures 6 and 7 compare beneficiaries' total lifetime benefit reductions under each reform element, for disabled workers and dependents, respectively. If the COLA were reduced by one percentage point, our projections show that approximately 58 percent of disabled workers would experience lifetime benefit reductions of 10 percent or less, while about 42 percent would see lifetime benefits reduced by 10 to 25 percent. Almost no disabled workers would see benefits fall by more than 25 percent.

Certain reform elements would create reductions in total lifetime benefits for the vast majority of disabled workers and dependents. These reductions may create new hardships for certain beneficiaries, such as disabled workers, who may not be able to easily replace lost income. According to our projections, price indexing would result in the greatest benefit reductions for the largest percentage of beneficiaries, with decreases in lifetime benefits of between 25 percent and 50 percent for almost 70 percent of disabled workers and about 90 percent of dependents. Both price indexing and longevity indexing have a greater effect on initial benefit amounts the longer the reform is in place. As such, people who leave the workforce early may experience a smaller reduction in lifetime benefits than those who leave at full retirement age. For example, as shown in figures 6 and 7, longevity indexing could reduce lifetime benefits for about 86 percent of disabled workers and about 96 percent of dependents by 10 to 25 percent.

Progressive price indexing may have a more moderate effect on the benefits of disabled workers and certain dependents because it is designed to protect benefit levels for low earners and gradually apply benefit reductions to beneficiaries with higher earnings. Because of shorter earnings histories, some disabled workers would be in the low end of the earnings distribution. Thus, under progressive price indexing, a greater proportion of disabled workers would be likely to have benefits adjusted by wage indexing. According to our projections, progressive price indexing would reduce total lifetime benefits by 5 percent or less for 46 percent of disabled workers and 35 percent of dependents.
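To make the graduated mechanics of progressive price indexing concrete, the following stylized Python sketch blends wage-indexed (current-law) and price-indexed initial benefits according to an earner's position in the earnings distribution. It illustrates the general approach described above, not OCACT's specification: the linear blend, the 30 percent threshold, and the dollar amounts are assumptions.

# Stylized sketch of progressive price indexing. Earners at or below a
# protection threshold keep wage-indexed (current-law) initial benefits;
# the highest earners receive fully price-indexed benefits; earners in
# between receive a graduated blend. The linear blend and all numbers
# are illustrative assumptions.
def initial_benefit(percentile: float, wage_indexed: float,
                    price_indexed: float, threshold: float = 0.30) -> float:
    """Blend wage- and price-indexed benefits by earnings percentile
    (0.0 = lowest earner, 1.0 = highest)."""
    if percentile <= threshold:
        return wage_indexed  # fully protected at or below the threshold
    share = (percentile - threshold) / (1.0 - threshold)
    return (1.0 - share) * wage_indexed + share * price_indexed

# A median earner (50th percentile) under hypothetical benefit levels:
print(round(initial_benefit(0.50, wage_indexed=1500.0, price_indexed=1200.0), 2))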
Various options are available to protect benefits in different ways, including accelerating the growth of an individual's benefits, modifying current constraints on benefit levels, and exempting certain populations from reforms. Options can also target certain types of beneficiaries. We analyzed some of these protections and found they could be structured to mitigate the effects of benefit reductions for varying lengths of time. In addition, we found that specific options to protect dependent benefits could be targeted to certain vulnerable beneficiaries, such as widows and dependent children.

We found that a wide range of options exist for protecting disabled workers and dependents from benefit-reducing reforms. Table 2 provides a summary of the options. The protection options may be very specific in terms of whom they protect and how, or broader in scope. For example, while two protection options focus specifically on disabled adult children (DAC), others, such as partial exemptions, could apply to any vulnerable population. In addition to each option having its own strengths and weaknesses, the options could interact with each other and with the various reform elements. All of these factors could influence a protection option's impact when it is implemented.

There are several protection options that could be applied to all disabled workers and dependents. Under a full exemption, beneficiaries would not be subject to a reform, and their benefits would remain unchanged. Under a partial exemption, beneficiaries would not be subject to a reform until a certain point in time. For example, disabled workers could be exempt from benefit changes until they are converted to the Old Age program at the full retirement age. At this point, their benefit amount would be recalculated to reflect the reform in proportion to the years they spent working. In addition, a super COLA could help protect the benefits of disabled workers and dependents. A super COLA would mitigate some of the effects of a benefit-reducing reform by annually increasing benefits at a rate above the consumer price index, which is currently used to index benefits.

Some protection options could cover all dependents by increasing the percentage of the worker's benefit that the dependent receives. (See app. II for more detail on how dependent benefits are calculated.) For example, a number of proposals have called for increasing the percentage of the worker's benefit that widow(er)s receive. Another option that could protect the benefits of a wide range of dependents would be to raise the maximum benefit that families can receive based on one worker's earnings record. Other protection options, such as caregiver credits, could focus on protecting particular groups of dependents. Several reform proposals have, in fact, called for providing caregiver credits to individuals who spend time out of the workforce to care for their dependents or to those with reduced or low earnings while attending to caregiving responsibilities.
Some proposals assign caregivers a specified level of earnings for each year the caregiver received zero or low earnings compared to prior years. Other proposals exclude zero-earning care years from the initial benefit calculation. Another option specific to a certain type of dependent would be to increase benefits for aged survivors, since they are more likely to rely on Social Security to stay out of poverty and could have fewer opportunities, such as returning to work, to respond to benefit-reducing reforms.

Increasing the early retirement age could offer some protection for survivors. If the early retirement age were raised—for example, from 62 to 64—then workers who take early retirement would receive actuarially adjusted benefits for a shorter period of time under the new early retirement age, and thus their monthly benefits would be relatively higher than the monthly benefits they would have received if they had retired at the current early retirement age. Since a dependent's benefit is linked to the worker's initial benefit amount, an increase in the worker's benefit would also increase the dependent's benefit, mitigating some of the negative effects of other reforms. Similarly, raising the FRA coupled with a partial exemption from a benefit reduction could offer some additional protection for disabled worker benefits. With an increase in the FRA, disabled workers would receive (exempted) DI benefits for a longer period of time because the age at which their disability benefits are converted to retiree benefits would rise with the new FRA.

In general, the reform elements we examined reduce median lifetime benefits for disabled workers and dependents. Because disabled workers may not have the financial resources—especially earnings-related income—to adjust to benefit reductions, we explored the interaction of reform elements and certain options to offset them. According to our projections, protections from a reduction in the COLA could restore benefits of disabled workers to levels close to those scheduled under current law. Reducing the COLA by one percentage point would result in about a 10 percent decrease in median lifetime benefits for workers who become disabled before age 60. To offset such a decrease, these workers could be partially or fully exempted. With a COLA reduction, a partial exemption would mean that the Social Security Administration would increase a disabled worker's benefits annually as scheduled under current law (i.e., using the full COLA) until the worker reached the full retirement age. At that point, the disabled worker's benefits would grow annually by the reduced COLA (1 percentage point lower than under current law). Our projections showed that a partial exemption as described above would raise median lifetime benefits from their reduced levels by 7 percent (up to 96 percent of scheduled levels under current law). In contrast, a full exemption would allow annual COLA adjustments in line with current law until death (fig. 8).

In addition to a decrease in the COLA, we analyzed options for protecting the benefits of disabled workers under three reform elements that have an impact on the initial benefit amount a disabled worker receives—price indexing, longevity indexing, and progressive price indexing. There are several protection options for mitigating the effects of these reform elements, including full and partial exemptions.
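A minimal Python sketch of the COLA exemption mechanics just described follows. Only the exemption logic comes from the text above (full COLA for life under a full exemption, full COLA until the FRA under a partial exemption); the starting benefit, inflation rate, and year counts are illustrative assumptions.

# Minimal sketch comparing a disabled worker's benefit stream under a
# one-percentage-point COLA reduction with no exemption, a partial
# exemption (full COLA until the FRA), and a full exemption. The 3
# percent CPI, $1,000 starting benefit, and year counts are assumptions.
def lifetime_benefits(start_benefit: float, years_to_fra: int,
                      years_after_fra: int, cola: float = 0.03,
                      cut: float = 0.01, exemption: str = "none") -> float:
    """Sum annual benefits over the beneficiary's remaining lifetime."""
    benefit, total = start_benefit, 0.0
    for year in range(years_to_fra + years_after_fra):
        total += 12 * benefit
        before_fra = year < years_to_fra
        if exemption == "full" or (exemption == "partial" and before_fra):
            benefit *= 1 + cola        # current-law COLA
        else:
            benefit *= 1 + cola - cut  # reduced COLA
    return total

for policy in ("none", "partial", "full"):
    print(policy, round(lifetime_benefits(1000.0, 20, 15, exemption=policy)))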
In the case of price indexing initial benefits, we projected that the median lifetime benefits of disabled workers would be about 75 percent of the median benefits under current law (fig. 9). A full exemption for disabled workers would raise the benefits of those disabled workers who exclusively receive DI benefits to the currently scheduled levels. However, a partial exemption from price indexing would restore the median lifetime benefit to 89 to 90 percent of scheduled levels, depending on how the partial exemption is implemented. One type of partial exemption (Type I) uses price indexing to calculate the portion of the benefits based on the years a person is out of the workforce and receiving DI benefits. In contrast, the other type of partial exemption (Type II) uses wage indexing to cover the same time period. (For more details, please see app. I.) The difference between the two partial exemptions becomes more substantial the earlier one becomes disabled, as the gap between wage growth and price growth compounds over time. While offering some protection from benefit reductions, both types of partial exemptions involve a recalculation of benefits at the full retirement age. This recalculation would result in lower benefits for the DI recipient and could create a potential problem if that individual relied on the prior benefit amount and had limited options for replacing the lost income. (See figs. 10 and 11 for longevity indexing and progressive price indexing, respectively.)

Another protection option would be to allow disability benefits to grow at a greater rate than other benefits. For example, disabled workers could be explicitly included in the scope of the reform and receive reduced initial benefits. However, instead of receiving annual increases based on the current-law COLA, disabled workers could have their benefits increased by a "super COLA"—one that is set above the Consumer Price Index. In this case, benefits for the disabled would grow at a faster rate than they would under current law and could approach or even exceed current-law levels. Variations on the super COLA could include an "age-indexed super COLA," which would be greater for those disabled at younger ages. For those workers who become disabled near the full retirement age, the COLA would be closer to that used for retirees. These protections could be particularly beneficial for disabled workers who receive benefits for a prolonged period of time.
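The following Python sketch illustrates one way an age-indexed super COLA could be graduated, consistent with the description above: the premium above the CPI is largest for those disabled at the youngest ages and shrinks toward the regular COLA near the full retirement age. The premium schedule, the assumed FRA of 67, and the function names are assumptions for illustration only, not the terms of any actual proposal.

# Illustrative super COLA and an age-indexed variant. The premium
# schedule and the assumed FRA of 67 are assumptions.
ASSUMED_FRA = 67

def super_cola(cpi: float, premium: float = 0.005) -> float:
    """A COLA set a fixed amount above the CPI."""
    return cpi + premium

def age_indexed_super_cola(cpi: float, onset_age: int,
                           max_premium: float = 0.01) -> float:
    """Larger premium for earlier disability onset; workers disabled near
    the FRA receive a COLA close to the regular CPI-based COLA."""
    years_early = max(ASSUMED_FRA - onset_age, 0)
    return cpi + max_premium * years_early / (ASSUMED_FRA - 21)

print(round(super_cola(0.03), 4))                            # flat premium
print(round(age_indexed_super_cola(0.03, onset_age=30), 4))  # disabled young
print(round(age_indexed_super_cola(0.03, onset_age=65), 4))  # disabled near FRA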
While protections for disabled workers would generally cover all such beneficiaries, the options for protecting dependent benefits could be more targeted to specific dependents and not necessarily applied to the full range of dependents, which includes spouses, divorcees, widow(er)s, and child survivors. The circumstances around which a person becomes a dependent vary greatly, as does the role of Social Security benefits in their lives. For some, Social Security may be the primary source of support; for others, it may be only a small proportion of their income. Protections could target children, who make up about 8 percent of Social Security beneficiaries, receiving benefits as the survivors or dependents of disabled or retired workers. Table 3 shows the number of children who receive benefits in each category and the average monthly benefit for these children. One way to protect the benefits of children would be to exempt them from any reform, keeping their benefit calculation tied to current law. Another way to protect their benefits to some degree would be to raise the maximum benefit a family could receive on a single worker's earnings record. The majority of experts with whom we spoke told us that increasing the maximum amount that a family could receive from one worker's earnings record could help protect child and other dependent benefits. Such an increase could help those dependents who are constrained by the family maximum. A family may have several people receiving benefits based on one worker's record, and the sum of the family members' benefits may exceed the specified maximum, which is calculated as a percentage of the worker's benefit amount. Thus, any reform that would result in a decrease in the primary benefit amount would also result in a decrease in the amount that each eligible family member would receive and a corresponding decrease in the total amount a family would receive.

Certain options, including increased allowable benefits for widows or partial exemptions, could be designed to protect the benefits of widow(er)s or others who may have fewer resources available to them. Under current law, widows and widowers can collect 100 percent of their deceased spouse's benefits (or their own benefit, whichever is greater); a "widow's boost" would allow them to receive up to 75 percent of the couple's combined benefits. Widow(er)s may rely on Social Security for a large percentage of their retirement income, in part because they may live many years beyond the exhaustion of other financial resources, may find it difficult to work, or may incur large health expenses that deplete their other resources. A reduction in the COLA may be particularly detrimental to the lifetime benefits of those who live long lives, because the effect of reducing the COLA is compounded over time. As such, it may be desirable to protect older widow(er)s—along with other individuals who receive benefits for a prolonged period of time—from the effect of a COLA reduction. For example, in our projections for the COLA reduction, we found that for the group of widows who received some benefits and who died before age 75, median lifetime benefits would be approximately 93 percent of those under current law. In contrast, for those who lived past age 95, median lifetime benefits would be only 83 percent of currently scheduled levels.

The options for protecting the benefits of disabled workers and those of dependents come at a cost to the Social Security program in terms of its solvency. In addition, some protection options may create incentives for people to apply to the Disability Insurance program if DI benefits increase while retirement benefits stay stable. Further, protection options could provide disincentives for some to return to work.

The Social Security reform elements we examined were designed primarily to improve program solvency. These reform elements would generally reduce benefits from their currently scheduled but underfunded levels. While protecting the benefits of disabled workers and dependents may be socially desirable, such protection would come at some cost to the Social Security program. In particular, the protections lessen the degree to which the potential reforms could restore solvency. One could counter these costs with further benefit reductions to beneficiaries considered less vulnerable than those recipients whose benefits are specifically protected.
That is, reform packages with certain benefit protections for vulnerable populations may necessitate further reductions in the benefits of retired workers or increases in revenues to achieve the intended solvency effect. In addition to the effects on solvency, some of the protections discussed may also have administrative costs associated with them.

Protecting the benefits of disabled workers may increase the number of people who apply for disability benefits. This may also be relevant to certain reform elements. An increase in the full retirement age coupled with the reduction in benefits for early retirement could motivate some individuals approaching the early retirement age to apply for disability benefits, if they believed that they could qualify for the now greater DI benefits. For example, before a change in the retirement age, a worker who is a year away from the full retirement age, and who would qualify for DI but is unsure of that outcome, may choose to wait and only receive Old Age benefits. Once the full retirement age is raised, this worker may choose to apply for DI, rather than waiting to receive retirement benefits. The greater the benefit disparity between the two programs, the more likely it may be that DI applications and enrollment will increase. Thus, the potential for an increase in DI program costs exists with any reform element that decreases the generosity of the Old Age component of OASDI without a corresponding decrease in that of the DI component. Under current law, there may already be an incentive for older workers to apply for DI rather than retire early.

Using individual-level data from the simulation model, we analyzed the benefits of two similar individuals under current law and under price indexing with and without full and partial exemptions. Both of the simulated individuals had similar lifetime earnings, close to the median for the simulated 1985 cohort, and both would have received initial benefits at age 62. However, they differed in two significant ways: one retired at age 62 while the other became disabled at that age, and the retiree had lower lifetime benefits under current law. The retiree, who died at age 84, had lifetime benefits of about $433,000, while the disabled worker, who died at age 82, had lifetime benefits of about $505,000—about 16 percent higher than those of the retired worker.

A full exemption for disabled workers from certain reform elements could similarly create discrepancies between the two programs, resulting in incentives to apply for the DI program. Under price indexing, the lifetime benefits of both individuals would be reduced, but the relative difference would remain at about 16 percent. However, if disabled workers were fully exempted from price indexing, the simulated disabled worker's lifetime benefits would be back to the initial amount of $505,000, or 72 percent greater than those of the retired worker. This difference in potential benefits would likely increase the incentive to apply for the DI program. Figure 12 and table 4 show the total lifetime benefits and the average monthly benefits of these two simulated individuals under current law, under price indexing, and with exemptions. However, partial rather than full exemptions, or other protections such as an age-indexed super COLA, could provide benefit protections without substantially increasing the disparity between the programs for people approaching the early or full retirement ages. Under a partial exemption, in which the disabled worker would be exempted from the reform until full retirement age, the added incentive that could be created by a full exemption would be reduced. Such a partial exemption for the disabled worker in our example would result in lifetime benefits that are about 33 percent higher than those of the retired worker under price indexing.
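The relationships in this example can be checked with simple arithmetic, as in the Python sketch below. The report does not state the price-indexing reduction rate applied to the retiree; the 32 percent figure is an assumption backed out so that the exempted disabled worker's lifetime benefits exceed the retiree's reduced benefits by roughly the 72 percent cited above.

# Arithmetic check on the simulated comparison above. The 32 percent
# reduction rate is an assumption inferred from the reported figures.
retiree_current = 433_000.0
disabled_current = 505_000.0
print(round(disabled_current / retiree_current - 1, 3))  # ~0.17: the "about 16 percent" gap

assumed_reduction = 0.32  # assumed price-indexing reduction for the retiree
retiree_reduced = retiree_current * (1 - assumed_reduction)
print(round(disabled_current / retiree_reduced - 1, 3))  # ~0.72 with a full exemption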
The family maximum limits the amount that can be received on a single worker's record, a limit that is compatible with preserving individuals' incentive to work. Changing such a limit could therefore affect beneficiaries' work decisions. For example, consider a person who, under the current family maximum and with a benefit reduction in place, chooses to work 30 hours a week. An increase in the total amount a family (or individual dependents) could receive might change this decision and decrease the person's time in the workforce: the individual may find that the increase in benefits received would allow for fewer weekly hours of work without a change in total income. In addition, protections that increase the benefits of disabled workers, such as the super COLA, can also create disincentives for such beneficiaries to return to work. As such, some individuals may continue to rely on the DI program rather than finding a way to re-enter the workforce.

Social Security's financial challenges may result in program modifications that include benefit reductions. These benefit reductions will likely affect all beneficiaries, including vulnerable individuals who may not be able to adjust to these reductions or who rely on Social Security as their primary source of income. While protecting the benefits of vulnerable populations may be desirable, such action does come at a cost: further benefit reductions or revenue increases would be needed to achieve program solvency. These offsets, in turn, may create new financial vulnerabilities among other beneficiaries who would bear the burden of these protections. Few reform proposals consider the impact that benefit reductions would have on all beneficiary types, instead treating all beneficiaries similarly. However, some special consideration should be given to the effects of reform on the benefits of the most vulnerable, especially when these individuals are disproportionately affected. If the solution to Social Security's financing problems includes benefit reductions, then the equal treatment of all beneficiaries may need to be reconsidered, and the complex interactions of benefit reductions, protections, and direct and indirect costs to the system and to other retirees will need to be weighed carefully. Benefit protections can be a part of a comprehensive reform package, and the reform debate should consider the design, inclusion, and implications of such measures to assure income adequacy. Likewise, to the extent that Social Security aligns the disability program with the current state of science, medicine, technology, and labor market conditions, such modernization should also be considered. Accordingly, in light of potential reform, Congress should consider the potential implications of reform for disability and dependent beneficiaries. Such a review might usefully be coordinated with any modernization of the Social Security disability program.

We provided a draft of this report to SSA and the Department of the Treasury, which generally agreed with our findings.
Both provided technical comments, and SSA also provided general comments, which appear in appendix III. We incorporated the comments throughout our report as appropriate. In general, SSA concurred with the methodology, overall findings, and conclusions of the report. However, SSA felt that the report could benefit from a more direct comparison of disabled beneficiaries and retired beneficiaries (and a similar construct for dependents). While such a comparison could be beneficial and give context to the reform discussion, this report was premised on the notion that certain beneficiaries would be less able to offset benefit reductions, rather than on a comparison of relative welfare. Finally, GAO agrees that one could better assess the degree to which a reform element or protection option supports the program's goal of adequacy if benefits were compared to a standard of adequacy; however, such a comparison was beyond the scope of the current study.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Social Security Administration and the Department of the Treasury, as well as other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you have any questions about this report. Other major contributors include Michael Collins, Nagla'a El-Hodiri, Jennifer Gregory, Joe Applebaum, Melinda Cordero, Mark Goldwein, Meaghan Mann, and Dan Schwimer.

To analyze the effects of individual reform elements and certain protections from these reforms on Social Security benefit levels for disabled workers and dependents, we simulated their benefits using the Policy Simulation Group's (PSG) microsimulation models. We based our analysis on projected lifetime benefits for a simulated 1985 birth cohort. In order to have a point of comparison, we also used the microsimulation models to simulate Social Security benefits of retirees who receive benefits on their own record.

To simulate longevity indexing, which links the growth of initial benefits to changes in life expectancy, we successively modified the PIA formula replacement factors (90, 32, 15) beginning in 2009, reducing them annually by multiplying them by 0.995. This specification mimics provision 1 of Model 3 of the President's Commission to Strengthen Social Security (CSSS). The CSSS solvency memorandum notes that the 0.995 successive reduction "reduces monthly benefit levels by an amount equivalent to increasing the normal retirement age (NRA) for retired workers by enough to maintain a constant life expectancy at NRA, for any fixed age of benefit entitlement." This provision as specified and scored—using the intermediate assumptions of the 2001 Trustees' report—in the CSSS memo by SSA's Office of the Chief Actuary would improve the long-range OASDI actuarial balance (reduce the actuarial deficit) by an estimated 1.17 percent of taxable payroll.
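The longevity-indexing specification just described reduces the three PIA replacement factors multiplicatively each year. The short Python sketch below implements that schedule; applying the scale by calendar year of benefit eligibility is our reading of the specification, and the function name is illustrative.

# Sketch of the longevity-indexing schedule described above: beginning
# in 2009, the PIA replacement factors (90, 32, and 15 percent) are
# multiplied by 0.995 once for each additional year.
BASE_FACTORS = (0.90, 0.32, 0.15)

def longevity_indexed_factors(year: int) -> tuple:
    """Replacement factors in effect for a given year."""
    if year < 2009:
        return BASE_FACTORS
    scale = 0.995 ** (year - 2008)
    return tuple(round(f * scale, 5) for f in BASE_FACTORS)

print(longevity_indexed_factors(2009))  # first year of the reduction
print(longevity_indexed_factors(2050))  # factors scaled by 0.995 ** 42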
We also simulated the effects of price indexing, where initial benefits would be indexed to the consumer price index (CPI) in order to limit the growth of benefits. We successively modified the primary insurance amount (PIA) formula replacement factors (90, 32, and 15) beginning in 2012, reducing them successively by real wage growth in the second prior year. This specification mimics provision B6 of the August 10, 2005, memorandum to SSA's Chief Actuary regarding the provision requested by the Social Security Advisory Board (SSAB), which is an update of provision 1 of Model 2 of the CSSS. As noted in the CSSS's solvency memorandum from SSA's Chief Actuary, "[t]his provision would result in increasing benefit levels for individuals with equivalent lifetime earnings across generations (relative to the average wage level) at the rate of price growth (increase in the CPI), rather than at the rate of growth in the average wage level as in current law." This provision as specified and scored by OCACT in the SSAB memo would improve the long-range OASDI actuarial balance (reduce the actuarial deficit) by an estimated 2.38 percent of taxable payroll.

To simulate the effects of implementing a progressive price index, we mimicked provision B7 of the August 10, 2005, memorandum to SSA's Chief Actuary. We created a new bend point at the 30th percentile of earnings, beginning in 2012. We maintained current-law benefits for earners at the 30th percentile and below. We also maintained the lower two PIA formula replacement factors (90 and 32). We reduced the upper two PIA formula replacement factors (32 and 15) so that maximum worker benefits from one generation to the next grew by inflation rather than by the growth in average wages. This provision as specified and scored by OCACT would improve the long-range OASDI actuarial balance (reduce the actuarial deficit) by an estimated 1.43 percent of taxable payroll.

In our modeling of the increase in computation years, we gradually reduced the number of drop-out years from 5 to 0, thereby extending the number of computation years from 35 to 40. The number of computation years would increase to 36 in 2007, 37 in 2008, 38 in 2010, 39 in 2012, and 40 in 2014. This specification mimics provision B2 of the August 10, 2005, memorandum to SSA's Chief Actuary. This provision as specified and scored by OCACT would improve the long-range OASDI actuarial balance (reduce the actuarial deficit) by an estimated 0.46 percent of taxable payroll.

We also simulated a reduction in the cost-of-living adjustment (COLA) of one percentage point, beginning in 2012. This specification mimics provision A2 of the August 10, 2005, memorandum to SSA's Chief Actuary. This provision as specified and scored by OCACT would improve the long-range OASDI actuarial balance (reduce the actuarial deficit) by an estimated 1.49 percent of taxable payroll. Some reform proposals have called for reducing the COLA by about 0.2 to 0.4 percentage points, in response to methodological concerns that the CPI for urban wage earners and clerical workers, the current CPI measure used to adjust benefits, overstates inflation. The intent of these proposals is to implement a COLA that may more accurately reflect inflation.

To simulate the effects of fully exempting disabled workers from the various reform elements, we modified the simulation to exclude the benefits of disabled workers from the reform elements. As such, there would be no recalculation of benefits when the exempted beneficiary reached full retirement age. We defined partial exemptions for disabled workers to mean that their benefit would be exempted from any simulated reform until the FRA and then would be recalculated. For the COLA reduction, we simply started the one-percentage-point reduction at the FRA for disabled workers.
However, for the reforms that involved a change in the initial benefit amount (longevity indexing, price indexing, and progressive price indexing), we simulated the recalculation of benefits at the FRA in two different ways. The first partial exemption, which we refer to as Partial Exemption Type I, followed the Kolbe-Stenholm model of converting benefits at the FRA. The Kolbe-Stenholm model reduces benefits in proportion to the difference between the disabled-worker PIA and the retired-worker PIA at the DI-onset age; that is, the converted benefit blends the two amounts according to the share of potential work years (ages 21 to 62) spent receiving DI benefits:

Converted benefit at FRA = (YD / 41) × DIC + (1 - YD / 41) × OASI

where DIC is the promised DI benefit level under current law; YD is the number of years (ages 21 to 62) that the disabled worker received DI benefits; and OASI is the OASI benefit level, calculated by computing the PIA under the reform using the formula applicable for newly eligible retired workers in the year the converting worker reached age 62. This OASI benefit amount would be indexed by the COLA for the years between the disability onset age and age 62. In this case, earnings from the years prior to disability would be wage indexed, and the disability freeze years would apply in computing the AIME.

To assess the reliability of simulated data from GEMINI, we reviewed PSG's published validation checks and examined the data for reasonableness and consistency. PSG has published a number of validation checks of its simulated life histories. For example, simulated life expectancy is compared with projections from the Social Security Trustees; simulated benefits at age 62 are compared with administrative data from SSA; and simulated educational attainment, labor force participation rates, and job tenure are compared with values from the Current Population Survey. We found that simulated statistics for the life histories were reasonably close to the validation targets.

Social Security offers a variety of types of benefits, and although they are all based upon the same formula, they are calculated in different ways. The methods for calculating the different types of benefits are outlined below.

Old Age benefits are calculated through a four-step process in order to provide retirees with progressive yet wage-based cash payments (see fig. 13). First, a worker's Average Indexed Monthly Earnings (AIME) is calculated by indexing the worker's past earnings to changes in average wage levels over the worker's lifetime and then averaging them. The AIME formula considers all years in which a worker earned covered earnings. It then uses the number of elapsed years from 1950 or attainment of age 21 through the age of 62 (or death) and allows for 5 "drop-out years," so that the worker's highest 35 years of covered indexed earnings are used in the calculation. Once the AIME is determined, a progressive formula is applied to the AIME to yield a worker's Primary Insurance Amount (PIA). In 2007, the PIA formula had the following bend points: 90 percent of the first $680 of AIME, plus 32 percent of the next $3,420, and 15 percent of any earnings above that level (fig. 13). For example, the PIA of a worker whose AIME was $1,000, the equivalent of a $12,000 annual salary, would be the sum of $612 (90 percent of $680) and $102.40 (32 percent of $320), yielding a total initial monthly benefit of around $715. Similarly, the PIA of a worker with an $8,000 AIME (the equivalent of a $96,000 annual salary) would be the sum of $612 (90 percent of $680), $1,094.40 (32 percent of $3,420), and $585 (15 percent of $3,900), for a total of just under $2,292.
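This bend-point arithmetic can be reproduced directly in code. The short Python sketch below uses only the 2007 bend points ($680 and $4,100) and factors (90, 32, and 15 percent) stated above; the function name is illustrative.

# The 2007 PIA formula from the worked example above.
def pia_2007(aime: float) -> float:
    """Primary Insurance Amount under the 2007 bend points."""
    first = min(aime, 680.0)
    second = min(max(aime - 680.0, 0.0), 3420.0)
    third = max(aime - 4100.0, 0.0)
    return 0.90 * first + 0.32 * second + 0.15 * third

print(round(pia_2007(1000.0), 2))  # 714.40, "around $715"
print(round(pia_2007(8000.0), 2))  # 2291.40, "just under $2,292"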
Because the formula is both wage-based and progressive, the second worker receives a much higher actual benefit than the first worker ($2,292 versus $715), but his benefits are a much lower proportion of his past earnings than the first worker’s benefits (28.6 percent versus 71.4 percent). If a worker retires at the full retirement age, which is currently between ages 65 and 66 and legislated to reach 67 in 2027, this PIA represents the first year’s benefit (although it is adjusted for inflation through a cost-of-living adjustment (COLA)). However, workers can begin receiving reduced benefits at 62; benefits are progressively larger for each month workers postpone drawing them, up to age 70. In general, benefits are actuarially neutral to the Social Security program; that is, the reduction for starting benefits before full retirement age and the credit for starting after full retirement age are such that the total value of benefits received over one’s lifetime is approximately equivalent for the average individual. Those receiving benefits before the full retirement age will also be subject to an earnings test. If earned income is above a certain threshold, Social Security withholds one dollar of benefits for every two dollars of earnings above the threshold. Each year, benefits receive a COLA to keep pace with inflation. As with Old Age benefits, disability benefits are determined by calculating a worker’s AIME, applying the progressive PIA formula to it, and then adjusting benefit levels through yearly COLAs (fig. 14). However, because disabled workers are likely to have shorter work histories, their benefit calculation relies on fewer years of earnings. In general, the number of years of earnings used to calculate the AIME is based on the total number of years between when a worker turns 21 and when he applies for DI. If this number of years is 25 or more, a worker’s 5 lowest (or zero) earnings years will be dropped from the calculation. The number of drop-out years gradually declines as a worker applies for disability earlier in life. If the disabled worker is 60 at the time of application, for example, 38 years would have elapsed since age 21. He will receive 5 drop-out years, and his AIME will be calculated based upon his 33 highest-earning years. In contrast, if a worker applies for DI at 32, he would have had only 10 elapsed years since age 21 and would be eligible for only 2 drop-out years; his AIME would be calculated based upon his top 8 years. At the full retirement age, disabled workers begin receiving retirement benefits instead of disability benefits; however, benefit levels remain the same and continue to grow through annual COLAs.
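The drop-out rule in these examples follows the current-law pattern of allowing one drop-out year for every five elapsed years, up to a maximum of five. The sketch below restates that rule and reproduces the two examples; it is an illustration of the rule as described above, not of the GEMINI implementation.

    # Disabled-worker computation years: one drop-out year is allowed for every
    # five years elapsed after age 21, up to a maximum of five.

    def di_computation_years(elapsed_years):
        """Return (drop-out years, computation years) for a DI applicant."""
        dropout = min(5, elapsed_years // 5)
        return dropout, elapsed_years - dropout

    print(di_computation_years(38))  # applicant around age 60: (5, 33)
    print(di_computation_years(10))  # applicant at age 32: (2, 8)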
Spouses: In addition to being eligible to receive retirement benefits on their own earnings records as early as age 62, individuals can also receive dependents’ benefits at age 62, based on their spouse’s benefit amount or, in some cases, that of an ex-spouse (table 5). These individuals can collect these benefits regardless of whether their spouse is concurrently receiving retired or disabled worker benefits. If collection begins at full retirement age, these individuals are eligible for either one-half of their spouse’s benefit amount or the benefits based on their own earnings record, whichever is greater. As with Old Age benefits, adjustments are made if these individuals choose to take early retirement. Dependent Children: Dependent children may also qualify for one-half of their retired or disabled parent’s benefit amount. This benefit is available for disabled adult children who are not working on a regular basis, children under age 18, or children still in high school and under age 19. Like other benefits, dependents’ benefits receive annual COLAs. Dependents’ benefits are subject to a family maximum, whereby a family is limited in the total amount of benefits that can be received from a single individual’s earnings record. The size of the family maximum is currently between 150 percent and 188 percent of the primary beneficiary’s benefit. Widow(er)s may be eligible to receive a one-time death benefit of $255. In addition, widow(er)s, surviving parents, children under the age of 18 (19 if the child is still in school), and disabled adult children can collect benefits based on the deceased person’s earnings record. A widow(er) at full retirement age will receive 100 percent of his or her spouse’s benefits, unless his or her own benefit is higher. Younger widow(er)s (those between age 60 and the full retirement age) can receive between 71 and 99 percent of their deceased spouses’ benefits, depending on how close they are to the full retirement age. Furthermore, regardless of age, a widow(er) with young children can receive 75 percent of the deceased spouse’s benefit. Surviving parents and children can also collect up to 75 percent of their deceased family members’ benefits. All of these benefits receive annual COLA adjustments and are subject to the family maximum.
Retirement Security: Women Face Challenges in Ensuring Financial Security in Retirement. GAO-08-105. Washington, D.C.: October 11, 2007.
Retirement Decisions: Federal Policies Offer Mixed Signals about When to Retire. GAO-07-753. Washington, D.C.: July 11, 2007.
Social Security Reform: Implications of Different Indexing Choices. GAO-06-804. Washington, D.C.: September 14, 2006.
Social Security: Societal Changes Add Challenges to Program Protections. GAO-05-706T. Washington, D.C.: May 17, 2005.
Options for Social Security Reform. GAO-05-649R. Washington, D.C.: May 6, 2005.
Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2, 2005.
Social Security Reform: Early Action Would Be Prudent. GAO-05-397T. Washington, D.C.: March 9, 2005.
Long Term Fiscal Issues: The Need for Social Security Reform. GAO-05-318T. Washington, D.C.: February 9, 2005.
Social Security: Distribution of Benefits and Taxes Relative to Earnings Level. GAO-04-747. Washington, D.C.: June 15, 2004.
Social Security: Program’s Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: November 30, 2001.
Social Security Reform: Potential Effects on SSA’s Disability Programs and Beneficiaries. GAO-01-35. Washington, D.C.: January 24, 2001.
Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: November 4, 1999.
Social Security Reform: Implications of Raising the Retirement Age. GAO/HEHS-99-112. Washington, D.C.: August 27, 1999.
Social Security: Issues in Comparing Rates of Return with Market Investments. GAO/HEHS-99-110. Washington, D.C.: August 5, 1999.
Social Security: Criteria for Evaluating Social Security Reform Proposals. GAO/T-HEHS-99-94. Washington, D.C.: March 25, 1999.
Many recent Social Security reform proposals to improve program solvency include elements that would reduce benefits currently scheduled for future recipients. To date, debate has focused primarily on the potential impact on retirees, with less attention to the effects on other Social Security recipients, such as disabled workers and dependents. As these beneficiaries may have fewer alternative sources of income than traditional retirees, there has been interest in considering various options to protect the benefits of disabled workers and certain dependents. This report examines (1) how certain elements of Social Security reform proposals could affect disability and dependent benefits, (2) options for protecting these benefits and how they might affect disabled workers and dependents, and (3) how protecting benefits could affect the Social Security program. To conduct this study, GAO used a microsimulation model to simulate benefits under various reform scenarios. GAO also interviewed experts and reviewed various reform plans, current literature, and GAO's past work. GAO considered several reform elements that could improve Social Security Trust Fund solvency by reducing the initial benefits received or the growth of individual benefits over time. According to GAO's simulations, these reform elements would reduce median lifetime benefits for disabled workers by up to 27 percent and for dependents by up to 30 percent of currently scheduled levels. While the size of the benefit reduction could vary across individuals, it could be substantial for the vast majority of these beneficiaries, depending upon the reform element. Options for protecting the benefits of disabled workers and dependents from the impact of reform elements include, among others, a partial exemption, whereby currently scheduled benefits are maintained until retirement age. For example, while simulations showed that one reform element could decrease median lifetime benefits of disabled workers to about 89 percent of currently scheduled levels, a partial exemption could restore them to about 96 percent. Further, these protections could be more targeted. For example, a larger cost-of-living adjustment would result in more rapid benefit growth for those disabled workers who receive benefits for a prolonged period of time. Some protections for dependent benefits could be targeted to a single group of dependents, such as widows, while others could affect multiple groups. For example, increasing the maximum benefit a family can receive could protect a wider group of beneficiaries, including children and spouses of disabled workers, and disabled adult children. While it may be desirable to protect the benefits of disabled workers and certain dependents, such protections would come at a cost to Social Security. Protecting benefits could lessen the impact that a reform element would have on solvency. In addition, such protections could create incentives to apply for Disability Insurance if disability benefits remained stable while retirement benefits were reduced.
The Postal Service’s goal is to deliver at least 95 percent of local First-Class Mail overnight and to achieve 100-percent customer satisfaction. Delivery performance is measured in 96 metropolitan areas across the nation and results are published quarterly. This measurement system, known as the External First-Class Measurement System (EXFC), is based on test mailings done by Price Waterhouse. For the Washington, D.C., metropolitan area, EXFC results are available separately for Washington, D.C.; Northern Virginia; and Southern Maryland. Nationwide averages are also available for comparison purposes. Customer satisfaction is measured in 170 metropolitan areas across the nation, and results are also published quarterly. This measurement system, known as the Customer Satisfaction Index (CSI), is administered by the Opinion Research Corporation. Each quarter it mails a questionnaire to thousands of households asking them how they would rate their overall satisfaction with the Postal Service’s mail service. For the Washington, D.C., metropolitan area, CSI results are available separately for Washington, D.C.; Northern Virginia; Southern Maryland; and Suburban Maryland. The processing and distribution facility for Southern Maryland is located in Prince George’s County. The facility for Suburban Maryland is located in Montgomery County. Nationwide averages are also available for comparison purposes. The Postal Service said that its delivery service and customer satisfaction goals—nationwide and locally—are ambitious, and attaining those goals will require a high level of employee commitment. For example, the quarter 4, 1994, EXFC nationwide average was 12 percentage points below the established goal. To gauge employee attitudes and satisfaction levels, the Service has administered a questionnaire to all employees in each of the last 3 years. This questionnaire is commonly known as the Employee Opinion Survey (EOS), and survey results are available for the nation, broken down by local postal facility. In conducting our review, we (1) obtained and analyzed numerous Postal Service reports containing data on factors affecting mail processing and delivery; (2) obtained and analyzed numerous types of performance data for both the local Washington, D.C., area and the nation, as well as for other selected locations; (3) interviewed various postal and union officials; (4) observed mail processing operations at local processing and distribution centers and local postal stations; and (5) examined recent reports on mail service issued by the Postal Service’s Inspection Service and the Surveys and Investigations Staff of the House Committee on Appropriations. (Additional background information and more details on our objectives, scope, and methodology are presented in appendix I.) Mail service and customer satisfaction in the Washington, D.C., metropolitan area have consistently been below stated goals; generally below the national average; and, in 1994, substantially below the levels attained in 1993. Specifically, service in the Washington metropolitan area, as measured quarterly by EXFC, has been below the national average in 16 of the 17 quarters since EXFC was first established in 1990. The national average ranged between 79 and 84 percent in that time period but has always been below the 95-percent on-time delivery goal. Figure 1 compares mail delivery service in the Washington metropolitan area, over time, with the national average and delivery service goal. 
Further analysis of EXFC data showed that delivery scores in the Washington, D.C., metropolitan area have been among the worst in the nation. For example, 88 percent of the time, service in Northern Virginia and Southern Maryland was in the bottom 25 percent of all locations where service was measured; 76 percent of the time, service in Washington, D.C., was in the bottom 25 percent. Additionally, delivery service scores in the Washington, D.C., metropolitan area for quarter 4, 1994, were significantly below the scores attained for quarter 4 the previous year. Southern Maryland’s score, for example, dropped 8 percentage points. Residential customer satisfaction in much of the Washington, D.C., metropolitan area, as measured by CSI, has generally been below the national average. (See figure 2.) Since 1991, the Opinion Research Corporation has sent CSI questionnaires to postal customers on a quarterly basis asking them how satisfied they were with mail service. Information collected during these 16 quarters shows that in each quarter between 85 and 89 percent of customers nationwide rated their satisfaction with the Service’s overall performance as excellent, very good, or good. In 12 of 16 quarters, Northern Virginia customers reported being as satisfied as, or more satisfied than, the nation as a whole. Customer satisfaction in the other locations that make up the metropolitan area—Southern Maryland; Washington, D.C.; and Suburban Maryland—was lower. For example, Washington, D.C., customers rated the Postal Service lower than the national average in all 16 quarters. Further analysis of CSI scores showed that customer satisfaction was lower in all parts of the Washington, D.C., metropolitan area during quarter 4 of fiscal year 1994 than during comparable periods in 1991, 1992, and 1993. (A detailed discussion of mail service conditions in the Washington, D.C., metropolitan area is presented in appendix II.) Mail service in the Washington, D.C., metropolitan area is poor for a number of reasons, including (1) the Postal Service’s inability to effectively deal with the unexpected growth in mail volume, (2) mail handling process problems, and (3) labor-management problems. Over the past few months, the Postal Service has initiated additional actions in each of these areas in an effort to improve mail service. In 1994, the percentage increase in the amount of mail delivered in the Washington, D.C., metropolitan area was twice the national average. According to Postal Service officials, the Postal Service had not anticipated this growth and was unprepared to process and deliver the increased volume of mail. Complicating the situation were several factors that worked against the Postal Service. First, according to Postal officials, local processing and delivery units experienced staffing problems because more craft people than expected accepted a retirement incentive (buyout) of up to 6 months’ salary and left the Service during the 1992 restructuring. Also, staffing ceilings were put into place in anticipation of more automation equipment. These events, according to Postal officials, left the delivery units with too few people to handle the increased volume of mail. Additionally, the processing units were operating with too many unskilled, temporary employees who had been hired to replace more costly career employees who retired in 1992. Training also became an issue when some new supervisors were placed in jobs where they were not familiar with the work of the employees they were supervising.
After considerable attention was focused on these problems in the spring of 1994, the Postal Service took steps to hire new, permanent employees and strengthen training for supervisors and craft personnel. Second, to focus additional attention on customer service, separate lines of reporting authority were established for mail processing and mail delivery functions under the Executive Vice President/Chief Operating Officer during the 1992 restructuring. This realignment of responsibilities was done as part of the Postmaster General’s broad strategy to make the Postal Service more competitive, accountable, and credible. This action left no single individual with the responsibility and authority to coordinate and integrate the mail processing and delivery functions at the operating levels of the organization. The primary focus of each of the function managers was to fulfill the responsibilities of his or her function. Working with the other function managers became a secondary concern. Consequently, because critical decisions affecting both mail processing and customer services could not be made by one individual at the operating level of the organization, coordination problems developed. In June 1994, the Postmaster General moved responsibility for processing and delivery down to the Area Vice President level, and on January 10, 1995, postal officials announced plans for establishing a position under the Mid-Atlantic Area Vice President that would be responsible for overseeing all processing and delivery functions in the Washington, D.C., metropolitan area and the Baltimore area. Time slippages in the automation program were another factor that affected the Postal Service’s ability to handle the increased volume of mail. More mail than planned had to be processed manually or on mechanical letter-sorting machines. The Postal Service had expected that by 1995 almost all letter mail would be barcoded by either the Postal Service or mailers and be processed on automated equipment. However, automation fell behind schedule in 1993-1994. The new projected date for barcoding all letter mail has slipped to the end of 1997. (A detailed discussion of the Postal Service’s inability to respond effectively to the unexpected mail volume growth in the Washington, D.C., metropolitan area is presented in appendix III.) Delivery service in the Washington metropolitan area was also adversely influenced by various mail handling process problems, including (1) the unnecessary duplicative handling of much mail addressed to Northern Virginia, (2) overnight service areas that managers believed were geographically too large, (3) mail arriving too late for normal processing, (4) the absence of a control system for routinely pinpointing the specific causes of delays in specific pieces or batches of mail, and (5) failure of employees to follow prescribed processing procedures. The Postal Service has taken action to address, at least in part, each of these problems. Some of the more significant actions taken include (1) reducing the amount of mail handled by more than one processing facility in Northern Virginia, (2) processing more mail at local facilities rather than transporting it to distant processing and distribution centers, (3) working with large mailers to get them to mail earlier in the day and give advance notice when mailing unusually large volumes, (4) taking the first steps to develop a system that can pinpoint causes of delayed mail, and (5) requiring greater adherence to established operating procedures.
Additionally, a number of service improvement teams are continuing to examine mail handling processes in an effort to identify other areas needing improvement. Examples provided by local postal officials that most clearly illustrate problems affecting the local area are discussed below. Duplicative mail handling: Much mail sent to the Northern Virginia area was delayed because it was processed by both the Dulles and Merrifield facilities. Further delays also occurred because of the time lost transporting mail between the two facilities. Duplicative mail handling occurred because the Dulles and Merrifield facilities are jointly responsible for certain ZIP Code service areas and most facilities sending mail to Northern Virginia did not separate the mail between the two facilities. There is no easy way to split up the service areas between the two facilities geographically—it would require realigning and changing some ZIP Codes. That option had not been vigorously pursued because of the adverse reaction from customers anticipated by the Service. However, the Postal Service recently began working with major feeders of overnight mail to work out an interim solution—i.e., the feeder facilities are to sort mail more completely before sending it to the Merrifield and Dulles facilities. Additionally, the Postal Service, in commenting on a draft of this report, said that it will be installing a Remote Bar Coding System site at the Dulles processing and distribution center (P&DC) that, along with other processing changes, will virtually eliminate the need for duplicative handling of mail for some Northern Virginia ZIP Codes. Overnight service areas that are too large: Consistent overnight delivery service in some parts of the Washington, D.C., metropolitan area is difficult to achieve because some service areas may be too large for the current collection, transportation, and delivery network. For example, mail from some of the outlying areas in the service area—e.g., Leonardtown and California, Maryland—does not arrive at the Southern Maryland processing facility until 10:00 or 11:00 p.m. This severely compresses the amount of time available for processing the mail and getting it back out to the post offices in time for delivery the next day. To address this problem, the Postal Service plans to process mail from Leonardtown and California, in addition to other Southern Maryland areas, at a closer facility in Waldorf (Charles County), Maryland. Additionally, the Postal Service is installing more “local only” collection boxes, which should reduce the amount of mail that has to be transported to distant processing and distribution centers. Mail arriving too late for timely processing: Large quantities of mail are frequently entered into the mail stream significantly past the times established for normal processing. This would not be a problem, however, were it not for the expectation that deliveries would be made the next day. Managers told us they have few options other than to accept late-arriving mail and then rush to meet dispatch times. They said that to do otherwise would upset the delicate balance between providing customer service and meeting established time schedules. To help establish a more orderly workflow, the Postal Service has been actively working with large mailers in the area to get them to mail earlier in the day and also to notify the Postal Service ahead of time when large mailings are expected to arrive. 
(A detailed discussion of all five mail handling process problems and corrective actions taken is presented in appendix IV.) In addition to academic studies, EOS, EXFC, and CSI survey results indicated that a relationship exists between employee attitudes and service performance. Employee attitudes about postal management in most of the facilities in the Washington, D.C., area, like employee attitudes in many other big cities, were in the bottom 25 percent of units nationwide. Similarly, EXFC and CSI scores for Washington, D.C., and other big cities were also relatively low compared to other areas of the country. Disruptive workforce management problems were more prevalent in the Washington, D.C., metropolitan area than in most other parts of the country. Postal Service data showed that employees in the Washington, D.C., metropolitan area experienced greater than average use of sick leave and a higher-than-normal use of work assignments with limited/light duties for employees who, due to physical restrictions, are unable to perform normal duties. Managers told us that excessive use of sick leave and limited/light duty assignments indicate possible abuse and result in lower productivity. Those managers believed, and EOS tended to support the view, that excessive employee absences and unavailability for regular duties were often the result of substance abuse and poor employee attitudes. EOS data suggested that employees in the Washington, D.C., metropolitan area perceived a greater than average level of substance abuse and had more negative attitudes about postal management than employees in most other locations nationwide. Postal management recognizes that improving employee attitudes and attendance is critical to improving delivery performance and customer satisfaction. However, the Postal Service cannot improve employee attitudes and attendance unilaterally. Successful change will require the support and cooperation of employees and their unions. The need for joint cooperation was pointed out in our recent report on Postal Service labor-management relations. The Postmaster General has initiated a number of actions to improve this relationship. For example, he recently invited all the parties representing postal employees to attend a national summit and commit to reaching, within 120 days, a framework agreement for addressing labor-management problems. The rural carriers union and the three management associations accepted the invitation. However, the leaders of the three largest postal unions had not accepted as of December 31, 1994. They said they would wait until the current round of contract negotiations is completed before making a decision on the summit. (A detailed discussion of labor-management relations is presented in appendix V.) The Postal Service provided written comments on a draft of this report. It recognized the need to improve service and highlighted its continuing efforts to produce significant improvements in customers’ satisfaction with their mail service. The Postal Service said that it was continuing to move ahead with numerous improvements in the area’s mail processing and distribution centers. For example, it cited the installation of the Remote Bar Coding System site at the Dulles P&DC to help resolve the duplicative handling of some mail addressed to Northern Virginia. It also cited efforts to begin processing more mail at the Waldorf (Charles County), Maryland facility in order to improve service in Southern Maryland. 
Additionally, the Postal Service said that it was looking into diagnostic technologies as a means of improving its ability to identify underlying causes of delayed mail. The Postal Service said that new supervisors are receiving the training they need, and that the Service is continuing to hire more letter carriers and mail handlers and to place them where they are most needed. The Postal Service further said that through the outstanding work of thousands of dedicated employees, it was turning the corner in providing quality service in the Washington, D.C., metropolitan area. It said that the actions taken are beginning to produce results and cited, as an example, the improved EXFC scores attained during the first quarter of 1995. The Postal Service agreed with our conclusion that improving labor-management relations is a key element in any long-term solution to mail service problems. It said that efforts in this area must include correcting problems that arise from a collective bargaining process that is not working. Further, it said that postal unions and postal management must work together to change this process. Where appropriate, the Postal Service’s comments have been incorporated into the text of this report. Its comments, in total, are included as appendix VI. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will distribute copies of the report to the Postmaster General, other House and Senate postal oversight committees, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VII. If you have any questions about the report, please call me on (202) 512-8387. In fiscal year 1994, the Postal Service delivered about 177 billion pieces of mail nationwide. About 94 billion, or 53 percent, was First-Class Mail. Revenues for all classes of mail totaled about $50 billion in fiscal year 1994. Revenue from First-Class Mail totaled about $29.4 billion—approximately 59 percent of total revenue. The Postal Service field organization comprises 10 service areas. The Mid-Atlantic Area provides service to the Washington, D.C., metropolitan area and surrounding states. (See figure I.1.) The Mid-Atlantic Area is subdivided into nine performance clusters. The Northern Virginia and Capital performance clusters provide mail service for the Washington, D.C., metropolitan area. The Northern Virginia cluster consists of two mail processing and distribution centers (P&DC), one of which is at Merrifield, Virginia, and one at Dulles International Airport; and the Northern Virginia customer service district. The Capital cluster consists of three P&DCs, with one each in Capitol Heights, Maryland (Southern Maryland); Gaithersburg, Maryland (Suburban Maryland); and Brentwood (Washington, D.C.); as well as the Capital customer service district. The Mid-Atlantic Area Vice President is responsible for day-to-day management of the Mid-Atlantic Area. Efficient collection, processing, and transportation of mail are critical to timely mail delivery and customer satisfaction. Most processing is done at P&DCs, which (1) distribute most local mail to post offices for delivery and (2) dispatch nonlocal mail to other postal facilities for further sorting and distribution.
The types of mail processing operations include (1) high-speed processing on automated equipment, (2) mechanized processing on letter sorting machines, and (3) manual sorting. Automated processing is the most efficient of the three methods, and its use is increasing as more automated equipment is installed. The Postal Service’s goal is to deliver at least 95 percent of its First-Class Mail within the following timeframes: (1) overnight for First-Class Mail originating (being sent) and destinating (being received) within the local delivery area defined by the Postal Service; (2) 2 days (generally) for First-Class Mail traveling outside the local area, but within 600 miles; and (3) 3 days for all other domestic First-Class Mail. Nationwide, during the fourth quarter of fiscal year 1994, the Postal Service delivered about 83 percent of its overnight mail, 74 percent of its 2-day mail, and 79 percent of its 3-day mail within established delivery standards. The Postal Service has for several years sponsored measurement systems—the External First-Class Measurement System (EXFC), the Customer Satisfaction Index (CSI), and the Employee Opinion Survey (EOS)—that have allowed assessments of its delivery performance, as well as of customer and employee satisfaction. The Service uses information from these systems to identify areas needing improvement and also publishes summary data that the Service and public can use to hold management and employees accountable for Postal Service performance. Our objectives were to (1) document the recent history of on-time mail delivery service problems for overnight First-Class Mail in the Washington, D.C., metropolitan area; (2) determine the reasons why mail service was below the desired level; and (3) identify any Postal Service actions to improve service. We did not review the Postal Service’s delivery performance for First-Class Mail outside the local service area or for other mail classes (i.e., Express, second-, third-, and fourth-class). The Washington, D.C., metropolitan area, as used in this report, includes the Northern Virginia and Capital clusters. To accomplish our objectives, we obtained and analyzed numerous Postal Service reports containing data on factors affecting mail processing and delivery. We examined national and local Postal Service workhour reports, financial reports, and “FLASH” reports. FLASH reports provide, among other things, detailed information on overtime, mail volume, the number of addresses where mail can be delivered, sick leave usage, limited duty workhours, and the number of hours spent on training. The reports generally covered 4-week accounting periods for fiscal years 1991 through 1994. They included information for the nation, as well as for the Northern Virginia cluster, the Capital cluster, and the units included in these two clusters. Because of changes in accounting and reporting in fiscal year 1993, we did not use 1993 data below the cluster level. We also obtained and analyzed numerous types of performance data for the local Washington, D.C., area and for the nation, as well as for other judgmentally selected locations. These data included delivery service scores as measured by the Postal Service’s EXFC measurement system, customer satisfaction scores as measured by CSI, and employee opinions as determined by EOS. These data covered fiscal years 1991 through 1994, except for EOS, which was conducted in 1992, 1993, and 1994. 
In 1992, we reported that CSI was a statistically valid survey of residential customer satisfaction with the quality of service provided by the Postal Service. We have not evaluated the validity of the EXFC and EOS surveys. We interviewed (1) the Chief Operating Officer/Executive Vice President of the Postal Service; (2) the Vice President of the Mid-Atlantic Service Area; (3) the customer service managers for the Northern Virginia and Capital clusters; (4) the plant managers at Merrifield, Brentwood, and Capitol Heights; (5) Inspection Service officials responsible for audits of postal operations; and (6) various other program and operations officials at headquarters, the Mid-Atlantic area office, local P&DCs, and local delivery units. We also discussed the causes of mail delivery problems with representatives from the National Association of Letter Carriers and the American Postal Workers Union. Additionally, we observed mail processing operations at local P&DCs and local postal delivery units. We also obtained and analyzed documentation on initiatives to improve service in the Washington, D.C., metropolitan area, although we did not evaluate the effectiveness of those initiatives. We also reviewed recent reports on mail service issued by the Inspection Service and the Surveys and Investigations Staff of the House Committee on Appropriations. We requested comments on a draft of this report from the Postal Service. Written comments were received and are discussed on page 11 and included as appendix VI. We did our work from September 1994 to December 1994 in the Washington, D.C., metropolitan area in accordance with generally accepted government auditing standards. The Postal Service’s goal is to deliver 95 percent of the mail on time as measured by EXFC and to achieve 100-percent customer satisfaction as measured by CSI. To date, however, the Postal Service has fallen considerably short of those goals, both nationally and in the Washington, D.C., metropolitan area. EXFC data show that mail delivery service in the Washington, D.C., area has consistently been among the worst in the nation. EXFC is administered under contract by Price Waterhouse and measures delivery time between the scheduled pickup of mail at collection boxes or post offices and the receipt of that mail in the home or business. EXFC test mailings are done in 96 metropolitan areas across the country. Results are published quarterly for overnight First-Class Mail. Within the Washington metropolitan area, EXFC delivery scores are available for Northern Virginia, Southern Maryland, and Washington, D.C. Since EXFC was first established in 1990, delivery scores for overnight First-Class Mail in the Washington, D.C., metropolitan area have, except for the first quarter reported (fourth quarter of 1990), been below the national average, and the national average has always been below the performance goal established by the Postal Service. (See figure 1.) Our further analysis of EXFC scores showed that mail service in the Washington, D.C., metropolitan area was not only below the national average, but also was generally among the worst in the nation. As shown in table II.1, Northern Virginia, Southern Maryland, and Washington, D.C., frequently ranked in the bottom 25 percent of the metropolitan areas where delivery performance was measured. Often, these locations were in the bottom 10 percent.
EXFC data also showed that Washington metropolitan area delivery service in fiscal year 1994 was generally below the levels of service provided in fiscal years 1991 through 1993. (See figure II.1.) Northern Virginia was the exception. Delivery service in Northern Virginia was better in fiscal year 1994 than it was in 1991 and 1992, but not as good as it was in fiscal year 1993. EXFC scores can be affected by the performance of neighboring P&DCs. For example, mail originating in Southern Maryland and going to the District of Columbia passes through the Southern Maryland P&DC and the Washington, D.C., P&DC (the destinating facility). The time taken is reflected in Washington, D.C.’s EXFC score, even though the mail may have been delayed because of a problem at the Southern Maryland P&DC. Because of the impact other locations may have on individual EXFC scores, we obtained and compared the test scores for “turnaround” mail in Northern Virginia, Southern Maryland, and Washington, D.C., with the published EXFC scores for each of the three locations where service is measured in the Washington area. Table II.2 shows that delivery scores for turnaround mail were higher than the published EXFC scores, but still below the 95-percent delivery performance standard. Customer satisfaction with mail service, as measured by CSI, varied among residents in Northern Virginia, Suburban Maryland, Southern Maryland, and Washington, D.C. In fiscal year 1991, the Postal Service developed and implemented CSI to track residential customer satisfaction. CSI is administered under contract by Opinion Research Corporation. Each quarter since it was implemented, the contractor has mailed a questionnaire to thousands of households throughout the nation asking them how they would rate their overall satisfaction with the Postal Service’s performance (poor/fair/good/very good/excellent). The Postal Service publicly discloses quarterly overall satisfaction ratings for 170 metropolitan areas, as well as the nationwide average. The Postal Service began reporting quarterly CSI scores in the first quarter of fiscal year 1991 for 40 metropolitan areas. Since then, the survey has been expanded to 170 locations. Results from the first survey showed that, nationally, 87 percent of customers thought the Postal Service’s overall performance was excellent, very good, or good. Since then, quarterly scores have ranged between 85 and 89 percent. The CSI score for quarter 4, 1994, was 85 percent. Among the 170 locations surveyed, customer satisfaction scores are reported for four locations in the Washington, D.C., metropolitan area: Northern Virginia, Suburban Maryland, Southern Maryland, and Washington, D.C. Of these locations, as shown in figure 2, residents of Northern Virginia gave the highest satisfaction rating on the overall performance of the Postal Service. In 12 of the 16 quarters since the Postal Service began reporting CSI scores, Northern Virginia’s scores equaled or exceeded the national average. However, in 3 of the last 4 quarters reported, satisfaction decreased, with scores falling 1 to 3 percentage points below the national average. Suburban Maryland’s postal customers were less satisfied. In 9 of the 16 quarters since the Postal Service began reporting CSI scores, Suburban Maryland’s scores fell below the national average. Customer satisfaction in Suburban Maryland decreased in the last 4 quarters—dropping from 90 percent in quarter 4, 1993, to 80 percent in quarter 4, 1994.
Southern Maryland postal customers have been less satisfied than Northern Virginia and Suburban Maryland customers. In fact, Southern Maryland’s score fell below the national average in 13 of the 16 quarters since quarter 1, 1991. Of the four local areas with CSI scores comprising the Washington, D.C., metropolitan area, Washington, D.C., itself has been rated lowest on overall performance. In all 16 quarters since the Postal Service began reporting CSI scores, Washington, D.C.’s scores were lower than the national average. In addition, its scores, like most others, began to drop in quarter 4, 1993. Further analysis of CSI data showed that customer satisfaction in Washington, D.C.; Southern Maryland; Suburban Maryland; and Northern Virginia was lower in quarter 4, 1994, than it was in quarter 4 of any of the preceding 3 fiscal years. (See figure II.2.) Postal officials cited the unexpected growth in mail volume in 1994 as one of the principal causes of the breakdown of delivery service in the Washington, D.C., metropolitan area. They said the Postal Service was unable to respond to the unanticipated growth in volume because (1) local delivery units had numerous unfilled vacancies and the workforce at the processing and distribution centers comprised many unskilled, temporary employees; and (2) an organizational change had weakened management control over the span of processing and delivery activities. Timely processing and delivery of the mail were further complicated because employee complement ceilings had been put into place in anticipation of automation. However, automation fell behind schedule in 1993 and 1994. Postal officials cited an unanticipated heavy mail volume in 1994 as one of the principal causes for the slip in service performance, both nationally and locally. Nationally, mail volume grew by about 6 billion pieces between 1993 and 1994—a 3.5-percent increase. Mail volume data, in number of pieces, were not available below the national level. At the local delivery unit level, mail volume is measured in feet. This measure, referred to as city delivery volume feet (CDVF), reflects the amount of mail delivered by carriers. The data showed that the rate of increase in the amount of mail delivered by carriers in the Northern Virginia and Capital performance clusters was about twice the rate of increase experienced nationwide. (See table III.1.) Postal Service officials said they had not anticipated that much growth in volume either nationally or locally. Furthermore, they believed that any 1994 increase in volume could be handled without increasing the workforce size because the deployment of additional automated equipment would make processing and delivery more efficient. In retrospect, however, the Postal Service officials said that staffing was inadequate and that automation was able to handle only about half of the volume increase. According to Postal Service officials, a shortage of trained employees contributed to poor mail service in the Washington, D.C., metropolitan area. The shortage resulted from the loss of skilled employees during the restructuring and buyout, hiring decisions based on an unrealistic automation schedule, and some inadequately trained supervisors. The Postal Service lost many skilled craft employees as a result of the 1992 restructuring and buyout. Nationally, 16,882 clerks, 11,933 city carriers, and 2,346 mail handlers took the buyout—about 5.8 percent of all employees in this group.
Additionally, more than 16,000 other employees also left the Service. In the Washington, D.C., area, 1,165 craft employees took the buyout—about 6.6 percent of the craft employees in the local area. Employees in the Washington, D.C., area who took the buyout had an average length of service of about 27 years. In testimony before the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations, the Postmaster General said that in looking back at the 1992 restructuring, the Postal Service “let a few too many people go, and . . . cut too deeply in some functional areas.” In planning the 1992 restructuring, the Postal Service had intended to eliminate approximately 30,000 overhead positions that were not involved in mail processing or delivery. However, the Postmaster General wanted to avoid a reduction-in-force, so he extended the buyout offer to clerks, carriers, mail handlers, postmasters, and others in order to open up vacancies for employees whose overhead positions were eliminated but who were either not eligible or did not want to retire. Consequently, more than 47,000 employees opted for the special retirement incentives offered in the Fall of 1992. This number was greater than the Postal Service had expected. However, officials viewed the loss as an opportunity to hire less costly noncareer employees—who could later be terminated more easily than career employees as more automation was moved into place. As the downsizing/restructuring got under way in the fall of 1992, Members of Congress, mailers, and employee groups expressed considerable concern about a possible adverse impact on mail delivery service. However, when compared to the same periods the previous year, service nationwide and in Southern Maryland remained stable and even showed signs of improvement immediately following the restructuring. EXFC scores for Washington, D.C., and Northern Virginia, on the other hand, fell immediately following the restructuring in comparison to the scores received during the same period the previous year. By quarter 2, 1994, nationwide scores and scores for Washington, D.C.; Southern Maryland; and Northern Virginia were below the scores received for quarter 2, 1993, and in July 1994, the Vice President of the Mid-Atlantic Area said that staffing had become a significant problem in the Washington, D.C., area. He noted that in December 1993, in preparation for additional automated sorting systems, the Postal Service had put in place employee complement ceilings. As a result of this action, he said, delivery units struggled with unfilled vacancies, and the processing and distribution centers had to rely on a workforce with many unskilled, temporary employees. These problems were confirmed by local Washington, D.C., area postal officials. They said that because of the departure of many experienced carriers, clerks, and supervisors during the restructuring, the Postal Service’s ability to quickly and accurately sort and deliver mail in the Washington, D.C., area was adversely affected. They also agreed with the Vice President of the Mid-Atlantic area that the shortage of career employees resulting from the employee complement ceilings put in place in late 1993, combined with the large number of unskilled, temporary employees, adversely affected their ability to provide accurate, on-time delivery service. 
Reacting to the staffing problems, the Vice President for the Mid-Atlantic Area said that the Postal Service was placing emphasis on obtaining adequate numbers of employees and making sure they were in the right places at the right time. As of July 1994, the Postal Service had approximately 18,000 craft employees in the Washington, D.C., metropolitan area. Between that time and October 1994, 130 new staff had been hired in Southern Maryland, including 55 letter carriers, 40 clerks, and 35 mail handlers. In Suburban Maryland, the Postal Service had hired 62 new letter carriers and 34 clerks. In Northern Virginia, 300 new employees had been hired, half of whom were letter carriers. In Washington, D.C., 168 letter carriers, 30 clerks, and 31 mail handlers had been hired. Another staffing issue that arose from the restructuring involved a management decision that placed some employees into supervisory positions when they were not familiar with the work of the employees they were supervising. The Postal Service said it did this to avoid relocating employees outside the Washington, D.C., metropolitan area. However, this action raised additional congressional concerns about the adequacy of training for new supervisors. The Postal Service began making changes to its training program after the restructuring and believes that its ability to train people properly, quickly, and economically is being strengthened. For example, postal officials said that the supervisory training program was being revised and a curriculum based on needs assessment was being developed. In commenting on a draft of this report, the Postal Service said that new supervisors are getting the training they need, and that the Service is continuing to hire more letter carriers and mail handlers and to place them where they are most needed. Compounding staffing problems was the delay in expected benefits from automation. The Postal Service had expected that by 1995 virtually all letter mail would be barcoded by either the Postal Service or the mailer. However, in April 1994, it announced that the barcoding goal date had slipped to the end of 1997. Automation increases the efficiency of mail processing by decreasing the volume that has to be sorted by relatively slower and more costly mechanized or manual processing—potentially leading to higher EXFC scores. Mechanized sorting on letter sorting machines, on the other hand, requires operators to memorize difficult sort schemes and key in ZIP Code information. This human intervention results in higher potential for mishandling mail, causing delays. With automated processing, barcoded letters are sorted in high-speed barcode sorters, often to the level of the street address, with limited human intervention. As automation becomes fully deployed, the Postal Service expects most mail to be already sorted by the time it gets to a carrier for delivery. Shortly after taking office in 1992, Postmaster General Runyon began a top-down restructuring of the Postal Service. This was part of a broad strategy to make the Service more competitive, accountable, and credible. One key component of the restructuring was the separation of mail processing and mail delivery at all levels of the organization below the Executive Vice President/Chief Operating Officer of the Postal Service. This action resulted in splitting accountability for processes critical to mail delivery service. 
The value of separating responsibility for the mail processing function (which takes place primarily at processing and distribution centers) from the mail delivery function (which takes place primarily at local post offices) has been controversial. The separation left no single manager with the responsibility and authority to coordinate and integrate the mail processing and delivery functions in the Washington, D.C., metropolitan area. Each manager’s primary focus became the fulfillment of his or her own individual responsibilities. Working with managers of other functions became secondary. Consequently, critical decisions affecting both mail processing and customer services in the Washington, D.C., area were not being made by one manager at the operating level of the organization. For example, when we visited one post office in Northern Virginia, local postal officials complained that too much unsorted and misrouted mail was routinely sent to local post offices in order to keep the Merrifield P&DC from having a backlog of unprocessed mail. On the day of our visit, these officials showed us a container of misrouted mail from Merrifield that included not only overnight First-Class Mail but also Priority Mail. The Postmaster noted that by the time this mail could be sent back to Merrifield to be correctly sorted, it would be at least 1 day late. Since there was no one manager with jurisdiction over processing and delivery functions in the Washington, D.C., metropolitan area, resolution of conflicts between the two functions could be accomplished only through the direct involvement of the area vice president, who had responsibility for six states and Washington, D.C. The Inspection Service also identified excessive misrouted mail as a significant problem in the Washington, D.C., area in its May 1994 report on mail conditions in the Mid-Atlantic Area. In a December 1994 report, the Inspection Service also cited the split in responsibilities between processing and delivery as a significant problem in the Washington, D.C., area. The report cited the absence of teamwork and cohesiveness among managers. The Inspection Service said that there needs to be a “glue” to hold the managers of the processing and delivery functions together in the Washington, D.C., metropolitan area. Additionally, representatives from the National Association of Letter Carriers and the American Postal Workers Union told us that the split in responsibilities between processing and delivery was a significant contributing factor to poor mail delivery service in the Washington, D.C., metropolitan area. In June 1994, the Postmaster General changed the management structure to increase the levels of teamwork and accountability in the Postal Service. He took this action in response to feedback from Members of Congress, postal customers, and employees regarding the separation of the customer service function and the processing and distribution function that followed the 1992 restructuring. The Postmaster General combined the responsibility for customer service and mail processing and distribution at a lower level in the organization—from the Chief Operating Officer/Executive Vice President to the area office level. Instead of each of the 10 areas having a manager for customer service and another for mail processing and distribution, one overall manager with the rank of Vice President was put in charge of both customer service and mail processing and distribution.
On January 10, 1995, the Postal Service made an additional change designed to push accountability farther down in the organization. On that date, postal officials announced plans for establishing a position under the Mid-Atlantic Area Vice President that would oversee all processing and delivery functions in the Washington/Baltimore area. Several mail handling process problems contributed to the poor delivery service in the Washington, D.C., metropolitan area. These problems included (1) the unnecessary duplicative handling of much mail addressed to Northern Virginia, (2) the difficulty of meeting delivery standards in some outlying areas, (3) the arrival of mail too late for processing and delivery the next day, (4) the lack of a system for routinely pinpointing the causes of delays in specific pieces or batches of mail, and (5) the failure to follow established procedures. Mail addressed to two of the seven ZIP Code service areas in Northern Virginia is often processed by both the Merrifield and Dulles processing and distribution centers and is sometimes delayed by the unnecessary additional processing. This duplicative handling occurs because the Merrifield and Dulles centers are jointly responsible for processing mail addressed to the 220 and 221 ZIP Code service areas. This is partly a result of the way ZIP Codes were first assigned within the 220 and 221 delivery service areas. In 1963, when the ZIP Code service areas were first established, the Dulles facility did not exist; therefore, Merrifield was responsible for all of 220 and 221. At that time, postal officials at Merrifield assigned ZIP Codes using an alphabetic listing of all post offices in these two service areas. Because the assignments were made alphabetically, there was no clear geographic distinction between the 220 and 221 service areas. Subsequently, in 1992, when the Dulles facility became operational, there was no good way of isolating either the 220 or 221 service area for processing at Dulles. Therefore, both facilities assumed joint responsibility for processing mail addressed to 220 and 221. In 1991, a plan to restructure the ZIP Codes in these two service areas was developed at the headquarters staff level, but top management did not approve the plan because of concerns over reactions from postal customers about ZIP Code changes. Depending on the originating point and predetermined routing schedules, mail addressed to 220 or 221 is to go to either the Merrifield or Dulles centers for processing. The receiving center is to sort the mail to identify the mail that is to be delivered within its service area and then dispatch the remaining mail to the other center for further processing. Postal officials said this procedure results in excessive transportation between the two facilities and duplicative sorting, which can also translate into delayed mail. Postal officials were unable to say precisely how much mail was subjected to this duplicative processing but said it involved substantial quantities. As a partial solution to the problem of duplicative mail handling in the Northern Virginia area, the Postal Service has begun asking the primary feeders of overnight mail to Northern Virginia to sort that mail to a 5-digit level and transport it to the appropriate center in Northern Virginia for further processing. The Postal Service expects this change to reduce the duplicative handling of mail between the two centers, but it places more processing work on the other facilities.
The Postal Service, in commenting on a draft of this report, said that it will be installing a Remote Bar Coding System site at the Dulles P&DC, which it expects will virtually eliminate the need for duplicative handling of mail for some Northern Virginia ZIP Codes.

Plant managers at the Southern Maryland and Northern Virginia P&DCs believe that consistent overnight delivery is difficult to achieve in certain outlying areas. They believe an extensive 1990 effort to revise delivery standards and establish more realistic overnight delivery service areas did not go far enough. The plant manager at the Southern Maryland P&DC, in particular, believes that he has an excessively large overnight delivery service area, which he says has an adverse impact on his EXFC scores. In 1990, in an effort to provide better mail delivery service by improving the Postal Service’s ability to consistently deliver mail within the standards, the Postal Service changed 6,389 (44 percent) of its 14,578 overnight delivery areas nationwide to 2-day service areas. Although this change relaxed the delivery standards for some areas, standards for other areas were unchanged. The plant manager at the Southern Maryland P&DC cited Leonardtown and California, Maryland, as examples of outlying locations where overnight delivery standards were not relaxed and are, at best, challenging to meet. Mail from both of these locations is processed at the Southern Maryland processing and distribution center. The plant manager at Southern Maryland said mail from Leonardtown and California often does not arrive at the Southern Maryland center for processing until 10:00 or 11:00 p.m. He said the post offices were unable to get the mail to him earlier in the day because the carriers were often making deliveries and picking up mail until late in the evening. He said that because of the time required to process the mail through the facility, it is difficult to get the mail back out to Leonardtown and California in time for delivery the next day. Partly to address the delivery problem to outlying areas, the Postal Service is planning to process mail from Leonardtown and California, in addition to other Southern Maryland areas, at a facility in Waldorf (Charles County), Maryland, which is closer to Leonardtown and California. The Postal Service believes that by decentralizing processing it will be better able to serve the Southern Maryland mailing public and provide more reliable, consistent service. In addition, to improve mail flow, the Postal Service is installing more “local only” collection boxes in high-traffic locations throughout the Washington, D.C., area. The ZIP Codes covered by that service are to be clearly displayed on the collection boxes. Customers using these boxes should receive overnight service because that mail will not leave the local area for processing.

Mail also arrived late at area P&DCs for reasons other than the size of the service area. Each P&DC has established an operating plan specifying critical entry times for receipt of mail in order to meet established clearance and dispatch times at the P&DC. However, area plant managers told us that large quantities of mail, from mailers and other postal facilities, frequently arrived past the critical entry times. This compressed the amount of time that P&DCs had available for processing the mail. The area managers said they have few options other than to accept the mail and then rush to meet their clearance and dispatch times.
They feel that to do otherwise would upset the delicate balance between providing customer service and meeting established time schedules. The Inspection Service identified mail arriving late at P&DCs as one of the major contributors to delayed mail. The Inspection Service also reported that other delays occurred because bulk business mail was sometimes worked out of sequence—i.e., the latest arriving mail was being worked first instead of last. Postal officials at the Southern Maryland P&DC said local mailers routinely deposited large amounts of bulk business mail on their docks late in the day and expected deliveries to be made the next day. Postal Service officials said that, to better plan for and manage the workload, customer service representatives were more actively working with major mailers in the area to get them to mail earlier in the day and to notify the Postal Service ahead of time when large mailings are expected to arrive. Additionally, some of the mail processing that was being done at P&DCs is now being shifted to local post offices. Postal Service officials believe this will expedite mail distribution to carriers and improve service to customers.

As of December 31, 1994, the Postal Service did not have a system that could be used to examine delayed mail and pinpoint where, in the processing and delivery stream, the mail fell behind schedule. Without being able to pinpoint problems in the mailstream, the Postal Service is forced to react to the effects of delivery problems on customer service instead of taking timely steps to avoid or reduce late deliveries. The Postal Service has nearly 40,000 post offices, stations, and branches that collect and deliver over 570 million pieces of mail daily. Between collection and delivery, mail is transported, sorted, and delivered by over 700,000 employees working in or out of over 349 mail processing and distribution facilities. A First-Class letter traveling from coast to coast passes through a myriad of mail processing, transportation, and delivery operations. Mail typically moves between processing steps in a distribution facility, or among facilities, in batches carried in large mail containers. The Postal Service has systems that use barcoding or other forms of automated identification of containers to assist in the control and movement of containers. However, these systems are not designed to provide operational data on a comprehensive basis that allow the Postal Service to track each mail container through the entire processing and distribution cycle. Consequently, postal management cannot track First-Class Mail that was delayed and gather related data to promptly determine when, where, and why it fell behind schedule. One floor supervisor at the Brentwood processing facility in Washington, D.C., explained the implications of this weakness. He said that any postal employee can examine a container of mail at any point in the processing and delivery cycle and determine whether that mail is on schedule. This is possible because each P&DC has an operating plan establishing “windows” for receiving, processing, and dispatching mail. Therefore, a mail handler can examine the postmark on a mailpiece, compare it to the mail processing timetable (operating plan), and determine whether or not the mailpiece is delayed. However, if the mailpiece is delayed, the critical factors that cannot be determined are when, where, and why the mailpiece fell behind schedule.
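The two halves of the supervisor's observation can be made concrete with a small sketch: the first function reproduces the check any mail handler can perform today (postmark against the operating plan), while the second shows what a container-level scan history (which does not exist today) would add. All facilities, times, and plan windows here are hypothetical.

```python
from datetime import date, datetime, timedelta

# The check any employee can make today: postmark vs. the operating plan.
def is_delayed(postmark: date, today: date, allowed_days: int = 1) -> bool:
    """Overnight First-Class Mail postmarked on day D should be
    delivered by D + 1; anything later is delayed."""
    return today > postmark + timedelta(days=allowed_days)

# What a scan history would add: hypothetical per-operation plan windows
# (hours allowed since the previous scan) and scan events for one container.
PLAN_HOURS = {"primary sort": 3, "dispatch": 4}
scans = [
    ("Merrifield P&DC", "arrival dock", datetime(1995, 1, 9, 18, 0)),
    ("Merrifield P&DC", "primary sort", datetime(1995, 1, 9, 20, 30)),
    ("Merrifield P&DC", "dispatch", datetime(1995, 1, 10, 9, 0)),
]

def find_slow_step(scans):
    """Report the first operation that exceeded its planned window --
    the 'when, where, and why' that a postmark alone cannot supply."""
    for (_, _, t_prev), (facility, op, t_cur) in zip(scans, scans[1:]):
        elapsed = (t_cur - t_prev).total_seconds() / 3600
        if op in PLAN_HOURS and elapsed > PLAN_HOURS[op]:
            return f"{op} at {facility}: {elapsed:.1f}h against a {PLAN_HOURS[op]}h window"
    return "container on schedule"

print(is_delayed(date(1995, 1, 9), date(1995, 1, 11)))  # True: the piece is late
print(find_slow_step(scans))  # dispatch at Merrifield P&DC: 12.5h against a 4h window
```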
No such “history” of the mailpiece (or container of mailpieces) exists today to pinpoint breakdowns in the mailstream and allow the Service to take corrective actions to prevent future slowdowns. For example, at Southern Maryland, we noticed mail waiting to be processed that should already have been delivered. The supervisor in charge was unable to tell us whether that mail was delayed before it arrived at Southern Maryland or became delayed somewhere within the plant, nor could he tell us why it was delayed. Without a diagnostic tool for tracking delayed mail to the source of the problem, corrective actions can be taken only to the extent that breakdowns in the mailstream are significant enough to either become conspicuous to postal managers—such as large volumes of mail being consistently late from a particular facility—or cause EXFC or CSI ratings to drop. Although the Postal Service has not yet developed a system that can review the history of delayed mailpieces to identify points and causes of delays, it has taken steps to try to identify systemwide problems that could cause mail delays. For example, Postal Headquarters has set up a National Operations Management Center that allows officials to monitor mail flow across the nation and respond to performance problems and changing customer needs. Management also reports that it is identifying “pinch points,” which slow mail in the postal network, and rerouting mail when the need arises. Postal officials recognize the need for a capability to track delayed mail. They said that since most letters and flats are now barcoded, a logical next step would be the handling of batches of mail under some form of computer-assisted tracking and control system. According to Postal technicians, since all mail moves between processing steps in a distribution center, or among centers, in batches carried in some form of container, it is possible to identify those containers and their contents with a machine-readable code that would enable computer-based systems to monitor their movements. Accordingly, the Postal Service is developing a program for the automated identification and tracking of single high-value mailpieces or batches of mail in containers. This program, known as the Unit-Load Tracking Architecture (ULTRA), is still in an early formative stage and may take years to develop and implement. Under the ULTRA system, unique codes would be applied to letters, parcels, sacks, trays, and containers that would allow the Postal Service to track the units through the postal system. This comprehensive system could allow definitive identification of the points and causes of processing and delivery delays. In commenting on a draft of this report, the Postal Service said that it was also looking into other diagnostic technologies as means of improving its ability to identify underlying causes of delayed mail.

Over the past few months, the Inspection Service reported many instances where failure to follow established mail processing procedures contributed to delays. In many of these instances, mail was not picked up from collection boxes; various types of mail were commingled in the same container, causing double handling and reduced cancelling efficiency; color codes designating delivery dates were not used or were used improperly; and inaccurate reports were prepared on mail conditions. For example, the Washington, D.C., P&DC was not placing color codes on a large volume of its mail.
This led to mail being worked out of sequence and sometimes delayed. The Inspection Service also identified improper color coding as a significant problem in the delivery units. The Inspection Service reported that significant progress has been made in following established procedures for collecting, separating, color coding, and properly reporting on mail conditions. According to Postal officials, these actions are being accomplished primarily through increased training and reminders to employees of the need to adhere to established procedures. In December 1994, several service improvement teams were in place. These teams comprised both craft and management employees from a variety of functions. A major part of the teams’ work is to examine mail flow processes and identify other weaknesses that may be contributing to late mail.

Despite the potential benefits of operational changes, long-term improvements in delivery service will require labor and management to work together toward a common goal of continually improving customer service. Fundamental changes must occur in labor relations in order to increase employee commitment and reduce the conflicts between labor and management that currently exist. This is particularly true in the Washington, D.C., metropolitan area. Workforce management problems that were disruptive to mail handling operations have occurred more frequently in the Washington, D.C., metropolitan area than in most other parts of the country. Improving employee commitment is one of the Postmaster General’s corporate goals. In a recent study of labor relations, we found a negative labor climate that did not foster employee commitment. Our report disclosed that labor-management relations problems persist on the factory floor of postal facilities. A negative labor climate can impair both productivity and product quality. A number of studies have documented that there is a relationship between employees’ attitudes and performance. One of the most prevalent workforce management problems in the Washington, D.C., metropolitan area was running mail handling operations without a full complement of workers. Often, employees were unexpectedly absent or otherwise unavailable to do their normal work assignments. Unexpected absences often involved the use of sick leave. Employees can also be unavailable for their regular work if they have been injured or are otherwise considered by their physician to be medically incapable of performing normal duties. Some managers said that unusually high usage of sick leave and limited/light duty indicated possible abuse. Managers also said, and the Employee Opinion Survey (EOS) tends to support, that excessive employee absences and unavailability for regular duties are often brought about by substance abuse or poor employee attitudes. Postal Service data showed that employees in the Washington, D.C., metropolitan area experienced greater than average use of sick leave and a higher than normal use of limited duty and light duty work assignments. The EOS also suggested a greater than average level of perceived substance abuse. In addition, the EOS Index suggested that Washington, D.C., area employee attitudes about postal management ranked among the lowest in the country. Figure V.1 shows that sick leave usage from 1992 through 1994 for the Northern Virginia and Capital clusters was higher than the national average.
The Northern Virginia sick leave usage rates, expressed as a percentage of total workhours, were 3.27, 3.11, and 3.29 during the period, while the Capital cluster rates were 3.56, 3.31, and 3.62, respectively. These usage rates were greater than the national averages, which were 3.22, 3.01, and 3.13 for the period. As figure V.2 shows, limited/light duty hours as a percent of total workhours were about twice the national average in the Capital cluster and about one and one-quarter times the national average in the Northern Virginia cluster. The EOS responses suggested that many employees believed there were substance abuse problems (alcohol and drugs) in the Postal Service, which could have caused attendance problems and poor employee performance. Locally, as shown in figure V.3, a higher than average percentage of employees in the Southern Maryland; Washington, D.C.; Merrifield, Virginia; and Suburban Maryland P&DCs believed alcohol abuse was a problem where they work. Postal Service employees also perceived drug abuse as a problem in the Washington, D.C., area, as shown in figure V.4. None of the local P&DCs reported lower than average perceptions of drug abuse. Employees in delivery units generally perceived that substance abuse was much less of a problem than did employees in the P&DCs. Employee attitudes can be a factor in the level of employee commitment. One measure of employee attitudes is the EOS Index—the average favorable response on 20 employee opinion survey questions. These questions deal with how managers and supervisors treat employees; respond to their problems, complaints, and ideas; and deal with poor performance and recognize good performance. As table V.1 shows, the postal workforce in the Washington, D.C., metropolitan area gave local management relatively low marks, placing most of the units in the area in the bottom 25 percent of all units nationwide. [Table V.1, not reproduced here, ranks the EOS Index scores for Washington, D.C., area units: the customer service (post office) units and the Washington, D.C., Southern Maryland, and Suburban Maryland P&DCs.] The Washington, D.C., area was not unlike other large, urban areas with regard to the relationship between low employee morale and low service scores. As table V.2 shows, the EOS Index scores for most units in nine other large urban areas that we judgmentally selected for comparison purposes ranked in the bottom half of all units nationwide. Like the EOS Index scores, the EXFC and CSI scores for these nine big cities also were relatively low compared to scores in other areas of the country. Figures V.5 through V.7 show that EXFC scores for most of the nine cities have usually fallen below the national average. Figures V.8 through V.10 show that CSI scores for eight of the nine cities have also usually fallen below the national average.

We recently reported, and the Postal Service has acknowledged, that improving labor-management relations is a long-term proposition. In our recently issued report on labor-management relations, we recommended that the Postal Service, the unions, and management associations develop a long-term agreement (at least 10 years) for changing the workroom climate for both processing and delivery functions. Postal Service efforts to address problems in Chicago illustrate that breakthrough improvements require a long-term effort. Responding to our 1990 letter highlighting our observations on the need for mail delivery service improvements in Chicago, the Postmaster General developed a plan for improving service. Four years later, service in Chicago remained poor.
Chicago has a long history of low EXFC scores, and in early 1994 attention was again focused on its mail delivery service problems. About 40,000 pieces of undelivered mail were found in a letter carrier’s truck parked outside a post office in Chicago. The oldest envelopes bore postmarks from December 1993. A month later the Chicago police discovered more than 100 pounds of burning mail beneath a viaduct on the Chicago South Side. That same day, another 20,000 pieces of undelivered mail—some up to 15 years old—were found behind the home of a retired carrier in southwest Chicago. When CSI quantified the level of customer dissatisfaction, Chicago ranked last 15 of the 16 times the survey has been conducted. The Postmaster General reacted by creating a 27-member Chicago Improvement Task Force to identify and correct service problems. The Postal Service reported a number of corrective actions instituted by the task force that were designed to improve mail delivery service. Similar to the situation in Washington, D.C., the task force found operations problems as well as problems with the attitudes of employees. Despite the task force’s corrective actions, Chicago has not made breakthrough improvement. Although there has been greater on-time performance, reduced delayed mail, fewer complaints, and less waiting time in line, Chicago’s EXFC performance for quarter 4, 1994, remained 6 points below its score in the same quarter in the prior year and 12 points below the national average. Customer satisfaction also remained poor at 51 percent. Operations improvements are vital, but they will not solve all delivery service problems. Short-term gains through operational improvements may eventually succumb to the obstacle to permanent improvement—namely, a negative labor climate. Long-term improvements require substantive improvements in labor-management relations. Since taking office in July 1992, the Postmaster General has been working to forge a labor-management partnership to change the culture in the Postal Service. His goal is to shift the Postal Service culture from one that is “operation driven, cost driven, authoritarian, and risk averse” to one that is “success-oriented, people oriented, and customer driven.” We previously reported that the Postmaster General developed a labor-management partnership through the National Leadership Team structure, held regular leadership meetings that included all Postal Service officers and the national presidents of the unions and management associations, and changed the management reward systems to encourage teamwork and organizational success. However, as we also previously reported, there is no overall agreement among the unions and management for change at the field operations level. They have been unable to come to terms on a clear framework or long-term strategy for ensuring that first-line supervisors and employees at processing plants and post offices buy into renewed organizational values and principles. In his November 30, 1994, statement before the Subcommittee on Federal Service, Post Office, and Civil Service, Senate Committee on Governmental Affairs, the Postmaster General testified that the Postal Service supports our September 1994 report recommendations calling for the Service, unions, and management associations to develop a long-term agreement on objectives and approaches for demonstrating improvements in the work climate of both processing and delivery operations. 
At the hearing, he proposed that the Leadership Team form a task force made up of leaders of the unions and management associations and key postal vice presidents. Mr. Runyon said the task force should have a 120-day agenda “to explore [GAO’s] recommendations, set up pilot projects, and move forward now to accelerate change in our corporate attitudes and culture.” While his labor-management summit proposal received the support of the rural carriers and the three management associations, the leaders of the three largest postal unions have not yet agreed to the summit. They said they are waiting until the current round of contract negotiations is completed before making a decision on the summit.

Michael E. Motley, Associate Director
James T. Campbell, Assistant Director
Lawrence R. Keller, Evaluator-in-Charge
Roger L. Lively, Senior Evaluator
Charles F. Wicker, Senior Evaluator
Lillie J. Collins, Evaluator
Kenneth E. John, Senior Social Science Analyst
Pursuant to a congressional request, GAO reviewed mail delivery service in the Washington, D.C. metropolitan area, focusing on: (1) recent on-time delivery service problems for overnight first-class mail; (2) reasons why mail service is below the desired level; and (3) United States Postal Service's (USPS) actions to improve service in the area. GAO found that: (1) mail service in the Washington, D.C. area has declined in part due to an unexpected increase in mail volume; (2) labor-management relations in the Washington, D.C. area are among the worst in the country; (3) customer satisfaction in the area declined dramatically in 1994 because local units were unable to maintain mail service at previous levels due to employee shortages, poor labor-management relations, and the recent organizational change; (4) USPS has taken action to address unnecessary duplicate mail handling; (5) postal managers believe that metropolitan overnight delivery areas are too large for the current delivery network; and (6) USPS agreed with the conclusion that the key element in improving mail service is to improve labor-management relations.
Providers and beneficiaries may appeal any denied claim. Claims are denied for a variety of reasons. In fiscal year 2001, the most common reason for denying claims was that the services provided were determined not to have been medically necessary for the beneficiaries. Other reasons for denials include that Medicare did not cover the services, or that the beneficiary was not eligible for services. Claims that do not meet the requirements outlined in Medicare statutes and federal regulations may be denied. In addition, denials may be issued for claims that are inconsistent with CMS’s national coverage determinations (NCD) and carrier-based policies, including local medical review policies (LMRP), local coverage determinations (LCD), and other carrier instructions. Relatively few denied claims are ever appealed, and only a small fraction is appealed to the highest level. (App. II contains more information regarding the denial of claims, including common reasons for denials.) The Medicare Part B appeals process consists of four levels of administrative appeals performed by three appeals bodies. Medicare carriers are responsible for the first two levels of appeal—the carrier review and the carrier hearing. Through a memorandum of understanding (MOU) implemented in March 1995—when SSA was separated from HHS and became an independent agency—OHA’s administrative law judges (ALJ) within SSA continue to hear the third level of appeal. OHA’s continued role in Medicare appeals is uncertain, as SSA officials have indicated that they plan to discontinue adjudicating Medicare appeals and expect to transfer the workload to HHS. However, until an agreement between SSA and HHS is reached, OHA will continue to adjudicate Medicare appeals. The MAC adjudicates appeals at the fourth level of the administrative appeals process. In addition, appellants who have had their appeals denied at all four levels of the administrative appeals process have the option of filing their appeals in federal court. Section 521 of BIPA requires numerous administrative and structural changes to the appeals process, including moving the second level of appeals—the carrier hearing—from the Medicare carriers to a group of yet-to-be-established contractors, known as qualified independent contractors (QIC). Figure 1 outlines the steps of the existing appeals process and the process BIPA requires. BIPA’s changes to the appeals process were to apply with respect to initial determinations—that is, claims denials—made on or after October 1, 2002. Although CMS published a rule on October 7, 2002, the ruling implemented only two of BIPA’s provisions—revising the deadline for filing an appeal to the carrier review level and reducing the dollar threshold for filing an appeal at the OHA level. The October 7th rule outlines the criteria used to select the changes that would be immediately implemented; among the criteria is that the provision can be implemented using existing CMS resources. CMS published a proposed rule for complete implementation of BIPA-mandated changes on November 15, 2002, but the final rule has not been issued. As of June 2003, the appeals process is generally operating in accordance with regulations established prior to BIPA’s passage. (See app. III for a comprehensive list of BIPA’s changes.)

As figure 1 also notes, CMS oversees the carriers, which perform initial claim determinations and the carrier appeals functions (hearing and review); BIPA requires that the Secretary of HHS contract with QICs for the new second-level review. SSA oversees OHA, and HHS (but not CMS) oversees the MAC.
Beneficiaries and providers have the right to appeal denied claims if appeals are filed within the deadline. CMS’s October 2002 ruling implemented the BIPA-mandated deadline for filing an appeal at the carrier review level, shortening it from 180 to 120 days—one of two BIPA provisions implemented thus far. Appeals at the carrier hearing level must be submitted within 180 days of the denial or unfavorable determination. Appellants who are dissatisfied with decisions reached at the carrier hearing level may appeal to OHA and then to the MAC, and their appeals must be filed within 60 days of receiving an unfavorable determination at the previous level. There is no dollar minimum required to file an appeal at the carrier review level. However, an appeal at the carrier hearing and OHA levels must meet specific dollar thresholds of $100 and $500, respectively. To meet the thresholds, multiple denied claims may be aggregated into a single appeals “case.” The MAC does not have a dollar threshold for considering appeals. Finally, appellants who receive unfavorable determinations from the MAC may appeal the decisions in federal court if the amount in dispute is at least $1,000. BIPA provisions change the threshold amounts at the second level of appeal and OHA. When QICs replace carrier hearings as the second level of appeal, the dollar threshold for submitting an appeal at that level will be eliminated. Further, CMS’s October 2002 ruling implemented BIPA’s reduced dollar threshold for filing an appeal at OHA—the second of two BIPA provisions to be implemented thus far—by dropping the threshold from $500 to $100. BIPA also shortened the time frames the appeals bodies have for adjudicating appeals at the first two levels and established time frames for the first time at the higher levels. BIPA’s provisions that revise the timelines for processing appeals have not been implemented, and the appeals bodies are following previously issued performance standards specifying that 95 percent of carrier reviews be completed within 45 days and 90 percent of carrier hearings be completed within 120 days. BIPA required that carrier reviews be completed in 30 days and that the QICs issue their decisions in 30 days. While OHA and the MAC have not previously been bound by time limits, BIPA required that they issue decisions within 90 days of the date an appeal was filed. BIPA also gave appellants the right to escalate their appeals to the next level in the process for adjudication when a decision is not issued within the specified time frame. Escalation is available from any level of appeal except the first—carrier review. However, CMS’s November 2002 proposed rule regarding BIPA’s implementation provides that appellants who escalate their appeals to the next level will, in essence, be waiving their right to a decision within the statutory time frame governing that level. For example, an appeal that is escalated from the OHA to the MAC would not be subject to the 90-day limit that applies to appeals received by the MAC that have not been escalated.
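The filing rules described above (no dollar minimum but a 120-day deadline at carrier review; $100 within 180 days at the carrier hearing; $100, after the October 2002 ruling, within 60 days at OHA; no minimum within 60 days at the MAC; and aggregation of denied claims allowed to reach a threshold) can be expressed compactly. The sketch below is illustrative only and simplifies the actual regulations.

```python
# (minimum amount in controversy, filing deadline in days), as described
# in this report; simplified for illustration.
FILING_RULES = {
    "carrier review": (0, 120),   # deadline shortened from 180 days by the October 2002 ruling
    "carrier hearing": (100, 180),
    "OHA": (100, 60),             # threshold dropped from $500 by the same ruling
    "MAC": (0, 60),
}

def can_file(level, claim_amounts, days_since_determination):
    """Check the dollar threshold (denied claims may be aggregated into
    a single appeals 'case') and the filing deadline for a level."""
    minimum, deadline = FILING_RULES[level]
    return sum(claim_amounts) >= minimum and days_since_determination <= deadline

# Three $40 denials individually fall below OHA's $100 threshold but
# can be aggregated into one case and filed on day 45.
print(can_file("OHA", [40, 40, 40], 45))  # True
print(can_file("OHA", [40], 45))          # False: below the threshold
```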
The first three levels of appeal share a protocol for adjudication, called de novo review, which permits adjudicators to consider results from earlier decisions but requires them to independently evaluate evidence and issue original decisions. The appeals bodies reexamine the initial claim to determine whether it should be paid and consider any new documentation or information supporting the claim that the appellant submitted. The fourth level of review, the MAC, does not share this protocol. Rather than performing de novo review of evidence, it evaluates the appropriateness of OHA decisions and considers whether new evidence submitted will alter the decision. BIPA changes require that the MAC perform de novo review in all cases. The appeals bodies reach decisions through either a review of the file for the initial claim or through hearings. At the first level of appeal, a carrier review officer who was not involved in the initial denial reexamines the initial claim and any new supporting documentation provided by the appellant but does not conduct a hearing. The second level of appeal—the carrier hearing—provides the appellant with an opportunity to participate in a hearing at the carrier’s facility or by telephone. OHA conducts hearings at the third level of review. OHA’s hearings are held at its central office in Falls Church, Virginia, or at one of its 140 local hearing offices nationwide. The MAC’s adjudication is based on a review of OHA’s decisions, and it does not conduct hearings. Appeals bodies have several options when deciding a case. The case may be decided fully or partially in favor of the appellant and payment awarded for all or part of the claim or claims in dispute. Alternatively, the decision may be unfavorable to the appellant and the initial denial of payment upheld. The MAC has an additional option of remanding the appeal—returning it to the OHA judge who issued the original decision—for a variety of reasons. For example, the MAC may determine that more evidence is needed, additional action by OHA is warranted, or that OHA should issue a modified decision based on the MAC’s instructions. Finally, the MAC may deny an appellant’s request for review if it finds that OHA’s decision is factually and legally adequate. In making a determination regarding whether the claim is payable or will continue to be denied, the first two levels of appeal are bound by the same guidance used in the initial denial determination—Medicare statutes, federal regulations, CMS’s NCDs, the carrier’s own LMRPs and LCDs, and, pursuant to carriers’ contracts with CMS, CMS’s general instructions, such as manuals and program memoranda. The statutes, regulations, and NCDs also bind OHA and the MAC—and the QICs, when they are established. But QICs, OHA, and the MAC only need to consider—rather than definitively follow—the carrier-based LMRPs and LCDs in rendering their decisions.

Management of the Medicare appeals process is currently divided among CMS, SSA, and the MAC. CMS is charged with establishing procedures for carriers to follow in considering appeals—including developing guidelines for timeliness and quality of communications with the appellant—and is also responsible for ensuring that the carrier review and carrier hearing processes comply with statutory and regulatory requirements. SSA establishes its own requirements and procedures, with input from CMS, for OHA’s review of third-level appeals. CMS reimburses OHA for its appeals work. The MAC independently establishes its own procedures and guidelines for completing Medicare appeals.

Carriers generally meet CMS’s existing time frames for processing appeals, but all appeals bodies—the carriers, OHA, and the MAC—fall far short of meeting BIPA’s time frames.
The large backlog of pending cases at OHA and the MAC, combined with BIPA’s escalation provision and the requirement for de novo review at the MAC, will demand a level of performance that the appeals bodies have not demonstrated they can meet. Administrative delays, caused by inefficiencies such as difficulties in transferring and locating files and outdated technology, constitute a large portion of time spent in the appeals process—especially at OHA and the MAC. QICs have not yet been implemented, and there is insufficient information to predict their ability to meet BIPA’s performance measures.

There is a substantial gap between carriers’ current performance and that required by BIPA’s standards. For example, at the first level of appeals—the carrier review—while carriers completed about 91 percent of their reviews within CMS’s current 45-day time frame, this is insufficient by BIPA’s standards. Only about 43 percent of the carrier reviews completed in fiscal year 2001 met BIPA’s mandated 30-day deadline. At the carrier hearing level—eventually to be replaced by the appeals to the QICs—whether BIPA’s time frames can be met remains largely an open question because the QICs have not yet been established. Although the carriers exceeded CMS’s performance standards in fiscal year 2001 by completing more than 90 percent of the carrier hearings within 120 days, this standard is much less stringent than the one imposed by BIPA, which requires the QICs to complete all appeals within 30 days. Similarly, OHA and the MAC fall far short of BIPA’s required 90-day time frame for completing 100 percent of their cases. For example, in fiscal year 2001, OHA took an average of 14 months from the date an appeal was filed to complete adjudication. The MAC took even longer to process appeals during the same year, with cases taking an average of 21 months to adjudicate. As of September 2003, OHA and the MAC had not implemented BIPA-mandated time frames and continued to operate without time frames for rendering decisions. Although officials at both appeals bodies told us that they are concerned with meeting BIPA time frames, neither body has developed strategies for doing so. Instead, the officials stated that they would take action once regulations implementing BIPA are finalized and they are more certain how the new regulations will affect them.

Existing backlogs of unprocessed cases may also interfere with the appeals bodies’ compliance with BIPA’s mandated time frames for appeals of claims denied after October 2002. While backlogs at the carrier review and carrier hearing levels are relatively small, OHA and the MAC have been unable to meet workload demands. For example, OHA’s backlog at the end of fiscal year 2001 included nearly 35,000 Part B cases—equal to about the average number of cases processed in 7 months. At the end of that same year, the MAC had a backlog of 15,000 cases—twice the number of cases it adjudicated in 2001. The MAC has been making strides to improve its efficiency and, near the end of fiscal year 2003, reported reducing its backlog to 10,100 cases. According to OHA and MAC representatives, BIPA-governed cases—appeals of claims denied after October 1, 2002—will have higher priority than cases filed earlier, virtually ensuring that pre-BIPA cases experience even longer delays. However, as of July 2003, none of the appeals bodies had determined how they would prioritize the processing of BIPA appeals while completing their pre-BIPA workloads.
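The gap between the two standards is easiest to see as a computation over per-case processing times. The sketch below uses made-up durations; applied to the actual fiscal year 2001 carrier review data, the same computation yields the 91 percent (45-day) and 43 percent (30-day) figures cited above.

```python
# Made-up processing times (days) for a batch of carrier reviews.
processing_days = [12, 25, 28, 33, 41, 44, 47, 58, 62, 90]

def share_within(times, deadline_days):
    """Fraction of appeals completed within a deadline."""
    return sum(t <= deadline_days for t in times) / len(times)

print(f"within CMS's 45-day standard: {share_within(processing_days, 45):.0%}")  # 60%
print(f"within BIPA's 30-day standard: {share_within(processing_days, 30):.0%}")  # 30%
```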
At OHA, protocols for assigning appeals to ALJs may contribute to delays. Although OHA plays a critical role in resolving Medicare appeals, its primary focus is disability appeals for SSA, which constitute 85 percent of its total caseload. While they are a smaller workload, Medicare appeals are often more complex than disability appeals. Some local OHA hearing offices take advantage of their ALJs’ Medicare expertise by assigning all Medicare cases to a single judge. However, other offices assign cases randomly, requiring judges to refamiliarize themselves with basic Medicare statutes each time they hear a Medicare case—potentially prolonging the process. While all of the appeals bodies are subject to BIPA’s processing time frames, the MAC is uniquely challenged in meeting these deadlines because the requirement for de novo review expands the scope of the MAC’s work. MAC officials pointed out that shifting from ensuring that OHA interprets policy correctly to becoming a fact-finding body requires a substantial amount of additional resources and more time to gather and evaluate evidence. MAC officials report that they do not have a strategy to address the expansion in the scope of their work and the contraction in time to render decisions.

The bulk of time at OHA and the MAC is spent on assembling files and completing other administrative tasks rather than in performing legal analyses of appeals and adjudicating cases. Each body takes more than a year, on average, to complete an appeal. For example, OHA spent 14 months, on average, to complete a case in fiscal year 2001, and an average of 10 months of that was consumed in obtaining case files from the lower-level appeals bodies and performing related processing tasks. In that same year, the MAC adjudicated nearly 7,100 Part B cases and spent about 17 months, on average, performing administrative tasks. As shown in figure 2, on average, over 70 percent of the time to resolve OHA and MAC cases was spent on administrative activities, rather than on substantive legal analysis of the appeals. Officials from both OHA and the MAC report that it may take months to receive appellants’ case files from the previous level of review or the appropriate storage facility. Case files—which are all paper documents—are a critical component of the adjudication process as they contain all evidence submitted by the appellant in previous appeals. The MAC, in particular, requires OHA’s case files to assess the evidence, the hearing tapes, and the letter of decision so that it may determine whether OHA’s decision was appropriate. OHA and the MAC are dependent on the Medicare carriers to forward the appropriate files to their hearing offices for review. CMS allows carriers 21 to 45 days to forward case files to OHA, depending on the number of appellants and dollar value of the case. However, locating files is further complicated by the fact that appellants are required to include little information in their appeal requests. Therefore, OHA and the MAC may receive appeals that do not identify the carrier that originally denied the claim. Locating files can also be hindered if the appeal has been in process for several years and the carrier that initially denied the claim is no longer a Medicare contractor. Although the defunct carrier should have transferred all of its files, including its appeals records, to the replacement carrier, such transitions are not always smooth.
Instead, files are often difficult to locate, causing delays in forwarding specific requested cases. The MAC faces an additional challenge in locating case files. OHA-completed cases are routed to a special clearinghouse contractor for temporary storage. If OHA determines that the appellant is due a full or partial payment, the clearinghouse returns the files to the carrier that initially denied the claim so that payment may be processed. If OHA continues to deny payment, the clearinghouse holds the accompanying file for 120 days to expedite the MAC’s retrieval should the appellant continue to appeal. However, the MAC may not know whether to approach the clearinghouse contractor or the relevant carrier to request needed files. And, like the carriers, the clearinghouse does not always provide files in a timely manner. In fiscal year 2001, the MAC waited an average of nearly 3 months—the entire time allowed for the MAC to adjudicate appeals under the BIPA amendments—to receive case files. The MAC is empowered to remand, or return, cases to OHA when there is insufficient information in the existing record to issue a decision. In fiscal year 2001, the MAC remanded 1,708 cases—nearly a quarter of the cases it adjudicated that year—to OHA because needed files were either missing or incomplete. Although CMS has not performed a comprehensive evaluation of the clearinghouse’s accuracy in routing appeals files, it recently determined that the clearinghouse had a 10 percent error rate in routing case files to particular carriers for payment.

Inadequate technology and the need for manual processing also indicate that the appeals bodies are not prepared to address BIPA’s requirements. For example, providers often aggregate groups of claims for different beneficiaries to meet the dollar threshold for filing an OHA appeal. To maintain beneficiary confidentiality, a separate electronic file—containing the same provider information—is created for each beneficiary. While widely available technology allows the creation of multiple data files by entering the information one time and then quickly duplicating it, OHA’s system requires administrative staff to separately enter repetitive information pertaining to each denied claim that constitutes the appeal. For example, if a provider is appealing a similar group of claims in a single appeal, OHA must nonetheless create a separate case file and data record for each beneficiary.

BIPA provides that appellants may escalate their appeals from the QIC or OHA to the next level in the administrative appeals process when they are not resolved within the mandated time frames. MAC cases not meeting the time frame may be escalated to the federal district court. More than 95 percent of OHA appeals and about 85 percent of MAC appeals did not meet BIPA time frames in fiscal year 2001, suggesting that a number of cases would be eligible for escalation. However, escalation may not ensure that appellants secure timely adjudication. Escalated cases will lack comprehensive records because the prior level of appeal did not complete the cases and may not have the full collection of case documentation. OHA and MAC officials report that cases without complete records from earlier levels of appeal will require the next level to perform time-consuming research. The MAC may remand cases with incomplete files, causing additional time to be spent locating and transferring files between the appeals bodies.
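BIPA's escalation right can be sketched as a simple eligibility check against the mandated adjudication windows (30 days at the QICs, 90 days at OHA and the MAC). This is a minimal illustration; note that, under CMS's proposed rule, an escalated appeal also waives the statutory time frame at the receiving level.

```python
from datetime import date, timedelta

# BIPA adjudication time frames (days) and escalation paths described
# in this report; carrier review, the first level, cannot be escalated.
BIPA_DEADLINE_DAYS = {"QIC": 30, "OHA": 90, "MAC": 90}
NEXT_LEVEL = {"QIC": "OHA", "OHA": "MAC", "MAC": "federal district court"}

def escalation_option(level, filed, today):
    """Say whether an unresolved appeal may be escalated yet."""
    deadline = filed + timedelta(days=BIPA_DEADLINE_DAYS[level])
    if today <= deadline:
        return f"decision still due at {level}"
    return f"may escalate to {NEXT_LEVEL[level]}"

print(escalation_option("OHA", date(2002, 11, 1), date(2003, 3, 1)))
# may escalate to MAC
```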
While appellants may view the consideration and resolution of their appeals as a single process, several separate and uncoordinated bodies are responsible for administering the various appeals levels. The appeals bodies have traditionally worked independently; however, close coordination is critical to successful planning for BIPA changes. Further, appeals bodies lack the management data to track cases and analyze case characteristics, preventing them from identifying barriers to efficiency—a first step in streamlining the process. Planning for BIPA implementation has also been hampered by (1) proposed regulations that have not been finalized, (2) the uncertainty of funding amounts for implementation, and (3) unresolved details regarding the possible transfer of OHA’s appeals workload to HHS. CMS, OHA, and the MAC—located within two federal agencies—are each responsible for administering a portion of the appeals process. However, neither the agencies nor the appeals bodies have the authority to manage the entire process. The appeals bodies focus primarily on their individual priorities, which may differ and complicate planning for making improvements to the process as a whole. Attempts to modernize the appeals process have been undermined when individual appeals bodies have identified opportunities for improvement, but have failed to sufficiently take into account the impact of their plans on the other bodies. For example, CMS issued a draft statement of work (SOW) outlining the expectations for QICs—the BIPA-mandated replacement for the workload of Medicare carriers at the second level of review, the carrier hearing. The draft SOW asks potential QIC applicants whether they have the capacity to convert paper case files into an electronic format, with the expectation that this would ease the transfer of needed files to the higher levels of appeals. However, CMS officials told us that they did not consult with OHA to ensure that it would have the capacity to use and store electronic files. OHA officials agree that electronic files offer an important opportunity to reduce lost files, speed transfers, and permit case tracking. However, OHA has focused its own plans to implement a system of electronic folders—scheduled for January 2004—exclusively on its SSA disability cases. Recent planning for BIPA implementation intensified the need for appeals bodies to work together because the demanding time requirements alone call for a more efficient appeals process. While officials from CMS, OHA, and the MAC worked together to develop the proposed rule for implementing the majority of BIPA’s requirements, the agencies have not taken the opportunity to coordinate strategies to meet the time frames mandated by the act. We found that the appeals bodies are not sufficiently coordinated to track an appealed claim, or group of claims, through all four levels of the process. This is attributable, in part, to the use of different numbering systems for case identification at each appeals body and the fact that the individual claims making up a “case” can change at every level. For example, appeals bodies often reconfigure cases to group claims with similar issues. Appellants also change the configuration of their cases by aggregating their claims to meet minimum dollar thresholds necessary to file an appeal at a given level. Case numbering is further complicated when a partially favorable decision is made. 
In these situations, some of the claims within the appeal are paid, while the remaining denied claims are eligible for further appeal by beneficiaries and providers and subject to further reconfiguration with new case numbers. Accordingly, assigning a variety of numbers to any particular claim or group of claims at each level of the process makes it virtually impossible to track an individual claim from one level to the next. Some problems with data quality are also a product of a lack of coordination between appeals bodies. CMS, OHA, and the MAC are making individual efforts to improve their data systems to better manage their caseloads, but their systems remain incompatible. For example, although CMS is gradually shifting its carriers to one common claims processing data system—also used to track appeals at the carrier level—it is not compatible with OHA’s or the MAC’s data systems. OHA has also initiated data system improvements, but did not consult with CMS in setting the parameters for new system requirements or provide CMS’s appeals group with a copy of its planning document. The MAC does not know whether the improvements it is instituting—such as its transition to more powerful data management software used to organize its caseload—will be compatible with OHA’s, CMS’s, or the carriers’ systems. Compatible data systems would facilitate the transfer of case information between appeals levels and analyses of the process as a whole. Not only do appeals bodies have incompatible data systems, but data gathered individually by CMS from carriers and by OHA from local hearing offices are aggregated and not used to pinpoint problems and develop solutions to improve the appeals process. For example, CMS only collects workload data from its carriers in the form of monthly productivity totals. OHA collects aggregate data from each of its 140 hearing offices, despite the fact that the local offices are tracking individual cases. The aggregate numbers allow OHA and CMS to develop basic workload statistics, such as the number of cases they resolve and the average time frames for adjudication. However, the data do not allow CMS and OHA to perform more detailed analyses, such as isolating process steps that create a bottleneck or identifying specific cases that linger at an appeals level for unusually lengthy periods. The lack of specific data on case characteristics also limits the appeals bodies’ understanding of the nature and types of appeals that they must resolve. For example, only the MAC collects data on the reason for the appeal, the type of denial being appealed, and the amount in controversy; however, the MAC is not consistent in ensuring that the information is routinely entered in the database. Furthermore, carriers do not collect data that allow CMS to distinguish whether the appellant is a beneficiary or a provider, and none of the appeals bodies collects information on the rates of appeal among provider specialty groups. Analyses of case characteristic data could be valuable in identifying confusing or complex policies or requirements that lead to denied claims and the submission of appeals. The data would also be useful to the agencies in understanding the nature of denied claims that are appealed at each level and guiding more appropriate initial reviews of claims and educating providers about proper claim submission.
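One data-model remedy for the renumbering problem would be a stable claim-level identifier that survives every reconfiguration, with each appeals body recording only its own case number against it. The sketch below is hypothetical (none of the appeals bodies maintains such a structure today, which is the report's point), and all identifiers are invented.

```python
# Hypothetical claim-level tracking: each denied claim keeps one stable
# identifier for life; each appeals body appends its own (level, case
# number) pair. Claims can then be regrouped into new cases at every
# level without losing the thread.
claim_history = {
    "CLM-0001": [("carrier review", "R-88-1204"),
                 ("carrier hearing", "H-17-0042"),
                 ("OHA", "OHA-2001-55310")],
    "CLM-0002": [("carrier review", "R-88-1204"),    # same initial case...
                 ("carrier hearing", "H-17-0099")],  # ...regrouped on appeal
}

def trace(claim_id):
    """List every case a claim has belonged to, across all levels."""
    return " -> ".join(f"{level}: {case}" for level, case in claim_history[claim_id])

print(trace("CLM-0001"))
# carrier review: R-88-1204 -> carrier hearing: H-17-0042 -> OHA: OHA-2001-55310
```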
BIPA mandated the use of QICs to replace the second appeals level and required them to develop management information through a data system that would identify (1) the types of claims that give rise to appeals, (2) issues that could benefit from provider education, and (3) situations that suggest the need for changes in national or local coverage policy. QICs must report their information to the Secretary of HHS and, among other things, must monitor appeals decisions to ensure consistency between similar appeals. However, the requirements do not affect data collection at the other appeals bodies. As a result, without corresponding changes at the other appeals bodies, it will remain difficult to evaluate the performance of the appeals process as a whole and make informed decisions affecting more than one appeals level. CMS stated that it plans to expand the QICs’ data system to the third level of appeal—the ALJ-adjudicated level—and, eventually, to all levels of appeal. Until the compatible data systems are in place at all appeals bodies—which CMS plans for 2005—the appeals bodies will not be able to perform the most fundamental types of analyses to improve the management of the process. While BIPA mandated several changes to the current appeals process, CMS, OHA, and the MAC are charged with developing regulations for implementing BIPA’s mandates in accordance with the Administrative Procedure Act. As of September 2003, guidance regarding two provisions—adjusted deadlines for appellants filing first-level appeals and reduced dollar thresholds required for filing appeals at OHA—has been issued. CMS officials stated that they expect that the proposed regulations implementing the remaining provisions of BIPA section 521 will be finalized by early 2004. The regulations, once finalized, will provide directions specifying how each body will operate. Without final regulations, officials from carriers, OHA, and the MAC said that they have had difficulty estimating what the actual effect on their workloads will be and, accordingly, have not made specific plans to comply with BIPA’s mandates. Even after the regulations are finalized, several important issues will not have been resolved. For example, when it published its ruling on October 7, 2002, CMS acknowledged that transition issues from the current appeals process to the new process would require additional policy guidance prior to implementation. Specifically, questions will remain regarding the necessity of operating two separate appeals processes concurrently, dependent on the date of the initial claim determination. Appeals of claims denied before the effective date of the BIPA amendments are not governed by them, barring specific guidance to the contrary, and are subject to pre-BIPA guidelines and processes.

No additional funding was provided to the appeals bodies in fiscal year 2003 to implement BIPA’s changes. Moreover, uncertainties exist about the funds available in fiscal year 2004. The first uncertainty concerns funding for HHS. The President’s proposed budget for fiscal year 2004 includes $126 million in funding for CMS to complete BIPA’s changes—including establishing the QICs, developing the QIC data systems, and implementing the shortened time frames at the first and second appeals levels—as well as assuming the workload currently performed by OHA.
However, this funding level was premised on the assumption that BIPA would be amended to reduce the number of QICs, increase the time frames for completing appeals at all levels, and require that providers pay a $50 user fee for filing appeals at QICs. As of September 2003, however, BIPA had not been amended. Moreover, the proposed budget contained no additional funding for the MAC to implement BIPA. The second budgetary uncertainty concerns funding for the third level of the appeals process, currently performed by OHA. While SSA’s fiscal year 2003 budget included a $90 million “direct draw” from the Medicare Trust Fund for Medicare appeals, the proposed 2004 budget eliminates the direct draw and does not include a new source for Medicare appeals funding, reflecting SSA’s plan to transfer OHA’s Medicare appeals workload to HHS. Although BIPA required CMS to establish QICs in time for them to begin adjudicating appeals of claims denied as of October 1, 2002, CMS estimated, in its fiscal year 2004 budget request, that QICs would become operational, at the earliest, in February 2005. Agency officials explained that implementing the QICs would require approximately 10 months of drafting and finalizing the related regulations and conducting the bidding process, and 6 months for hiring staff, renting space, and performing other tasks associated with making QICs operational, including developing the QICs’ data systems. In commenting on a draft of this report, HHS stated that CMS now plans for QICs to begin operation in fiscal year 2004. However, we were not provided with CMS’s implementation plan or sufficient details to evaluate its feasibility.

Finally, one of the critical issues related to BIPA’s implementation involves the possible transfer of the Medicare caseload currently adjudicated by SSA’s OHA to HHS. Several issues remain unresolved. In 1995, when SSA separated from HHS and became an independent agency, SSA entered into an MOU with the Health Care Financing Administration to continue to perform the Medicare appeals work it had been conducting. Recently, SSA has taken the position, which is reflected in its budget request for fiscal year 2004, that it intends for OHA to discontinue adjudicating Medicare appeals and has proposed a revised MOU outlining the transfer of OHA work to HHS. However, as of September 2003, HHS had not signed the revised MOU and the transfer of the workload to HHS had not been finalized. In addition, legislation has been introduced that would expressly provide for the transfer of Medicare appeals to HHS. However, provider and beneficiary groups have protested because they believe shifting responsibility to HHS will compromise the ALJs’ independence. OHA’s departure from the appeals process would create a new challenge for HHS. OHA’s process for adjudicating administrative appeals includes 140 local hearing offices and over 1,000 ALJs. Because SSA disability appeals constitute about 85 percent of OHA’s work, OHA would continue to require the use of its hearing offices and judges regardless of whether it continues to hear Medicare appeals. BIPA language specifies that the third level of appeal be adjudicated by ALJs, but because HHS has far less capacity than OHA to hear ALJ cases, HHS would have to compensate for OHA’s departure by developing plans that would enable it to adjudicate the current workload demands within BIPA’s time frames and to address the backlog of cases accumulated before the transfer to HHS.
As of June 2003, CMS was evaluating OHA's Medicare operations, workload, and facilities and developing and assessing the feasibility of various options. A CMS official stated that assuming OHA's workload would be a notable challenge for the agency. BIPA demands a level of performance—especially regarding timeliness—that the appeals bodies have not demonstrated they can meet. In addition to lengthy processing times, OHA and the MAC have developed sizable backlogs of unprocessed cases. The backlogs raise a question about how BIPA-governed cases, with their mandated time frames, will be prioritized relative to unresolved cases filed before BIPA's mandated implementation date. Administrative and systemic inefficiencies, which span all levels of appeals, strongly indicate the need for improvement. Without significant improvements, the appeals bodies will be unable to meet BIPA's more rigorous performance requirements. Uncertainties regarding BIPA regulations and funding further complicate the challenge the appeals bodies face in implementing BIPA and meeting its requirements. Moreover, the transfer of OHA's Medicare appeals work from SSA to HHS involves major challenges, and until all of the stakeholders resolve workload and timeliness issues, the full impact of such a transfer will not be known. CMS, its carriers, OHA, and the MAC have traditionally not coordinated their management of the appeals process. Instead, each has operated as though the process consisted of discrete and independent segments. Greater coordination could enable them to resolve the barriers that currently preclude successful management of the appeals process as a whole. Addressing inefficiencies in file transfer and case file tracking, developing comprehensive and meaningful data, and planning for BIPA implementation all require a joint effort involving each appeals body and its agency. The lack of a single entity that sets priorities and addresses operational problems at all four levels of the process makes it imperative that all bodies work closely together. If OHA's Medicare appeals workload is to be transferred to HHS, it is critical that all of the current appeals bodies work together to develop a carefully planned transition and build efficiencies to help HHS assume the workload. We believe that the creation of a Medicare appeals process that can consistently address BIPA's requirements will require a commitment to close coordination from all appeals bodies. We recommend that the Secretary of HHS and the Commissioner of SSA create an interagency steering committee with representatives from CMS, the carriers, OHA, and the MAC to serve as an advisory body to the Secretary of HHS and the Commissioner of SSA with the following responsibilities: make administrative processes, such as file tracking and transfer, compatible across all appeals bodies; negotiate responsibilities and strategies for reducing the backlog of pending cases, especially at OHA and the MAC, and establish the priority for adjudicating pre-BIPA cases relative to BIPA-governed cases; and establish requirements for reporting specific and comparable program and performance data to CMS, SSA, and HHS so that management can identify opportunities for improvement and determine the resource requirements necessary to ensure that all appeals bodies will be able to meet BIPA's requirements. We provided a draft of this report to HHS and SSA and received written comments from both agencies.
In its comments, HHS emphasized its commitment to implementing the appeals provisions in BIPA and highlighted the steps it has taken to do so. Similarly, SSA emphasized its efforts to provide quality service to Medicare appellants. We have reprinted HHS’s and SSA’s letters in appendixes IV and V, respectively. HHS agreed with our conclusion that a more coordinated approach to the appeals process is needed. HHS said, however, that we understated its progress in this area and described a variety of efforts it has engaged in to facilitate improved coordination between the appeals bodies. As we noted in the draft report, HHS has made strides in enhancing coordination, but we believe that greater progress can be made by creating an interagency steering committee to develop a consolidated and strategic approach to implementing BIPA. SSA’s comments also emphasized the benefits of enhanced coordination between the appeals bodies. It largely attributed the inefficiencies that exist in the current appeals process to the lack of a single entity with ownership of, and accountability for, Medicare appeals. SSA indicated that it believes that HHS is the sole entity with the authority to unify the policies and procedures for the Medicare appeals process. HHS stated that it would consider the appropriateness of an interagency steering committee but did not specifically agree or disagree with our recommendation to create such a body. However, it stated that the transfer of the work performed by SSA’s OHA to HHS is critical to achieving the level of coordination needed to address the inefficiencies outlined in our report. SSA indicated that it generally agreed with the specific responsibilities of the steering committee. It also stated that it believes that HHS has ultimate responsibility for Medicare appeals and that HHS should carry out the functions of the steering committee through CMS. SSA stated that its budget anticipates the transfer of OHA’s appeals workload to HHS, and SSA has submitted a new MOU to HHS to facilitate a smooth transition. While SSA emphasized its commitment to serving Medicare appellants during the expected transition, it also pointed out that Medicare appeals make up a small portion of its work. Therefore, SSA cautioned that while it will participate in efforts to improve the Medicare appeals process, it must consider the demands of its total workload in allocating its resources. While HHS did not specifically comment on our recommendation to make administrative processes, such as file tracking and transfer, compatible across all levels of appeal, SSA agreed that an interagency steering committee could be beneficial in ensuring such compatibility among appeals bodies. SSA also noted that the steering committee would be helpful in defining the roles of the appeals bodies both in their current operating status and during the anticipated transfer of the OHA workload to HHS. Regarding our recommendation to negotiate responsibilities for reducing the backlog of pending cases, HHS agreed that a strategy for setting clear requirements to prioritize pre-BIPA and BIPA cases and reduce the backlog of cases at all levels is needed. HHS also reported that the MAC has already reduced its backlog and we revised the report to reflect the reduction. HHS also said that prioritizing cases and other transition matters would be addressed in the forthcoming final regulations. 
SSA agreed that strategies for reducing both the backlog of pending cases and the lengthy processing times for Medicare appeals are needed and expressed a willingness to help resolve the backlogs and delays. HHS agreed with our recommendation to establish comparable program and performance data across appeals levels and indicated that improved appeals data capabilities are needed. To that end, HHS noted that it has issued a request for proposals to develop the data system required by BIPA. SSA acknowledged that fragmentation of the appeals process has precluded the development of comparable data. However, SSA pointed out that preparations to transfer OHA's work to HHS have created a need for greater data sharing. SSA also pledged to work to capture comparable data to facilitate the transfer of OHA's work. In addition, in response to HHS's specific comments, we have clarified that the scope of our work excluded managed care, Medicare entitlement, and overpayment cases, as well as Part B claims processed by durable medical equipment contractors and fiscal intermediaries; revised the use of the word "rule" to "ruling"; defined the term "provider," as used in this report, to include any nonbeneficiary appellant, including physicians and other suppliers; distinguished, in appendix II, between claims that are rejected because they are duplicate or missing information and those that are denied for substantive reasons; modified our description of BIPA's escalation provision to recognize that CMS has developed specific requirements for escalation in its notice of proposed rulemaking; revised the legend of figure 1; revised our explanation of the MAC's procedures regarding the parameters for accepting evidence in its current decision-making process and the MAC's criteria for denying an appellant's request for review; and added that CMS policy is a binding element in carrier review. However, we did not revise the draft report in response to HHS's specific comment regarding our use of the word "review." While BIPA refers to the first level of appeal as "redetermination," we have used the term "carrier review" because the adjudication process at the review level is unchanged by BIPA. Nor did we make revisions in response to HHS's specific comment that both OHA and the MAC use their own systems for processing appeals and conduct their own hiring. As we noted in the draft report, OHA and the MAC independently establish their own procedures and guidelines. Finally, we did not revise the draft in response to HHS's specific comment that we imply that the MAC has done no planning related to BIPA requirements. As we noted in the draft report, the MAC has made some improvements, but as MAC officials told us, and as HHS indicated in its comments, a detailed action plan to meet BIPA requirements has not been developed. In its comments, HHS noted that a detailed plan is premature because the MAC will not receive BIPA cases for some time—until after they have passed through the other levels of appeal. However, BIPA requirements apply to claims denied on or after October 1, 2002, and such cases have already been submitted. HHS also provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of HHS, the Commissioner of SSA, interested congressional committees, and other interested parties.
We will then make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (312) 220-7600. An additional GAO contact and other staff who made contributions to this report are listed in appendix VI. Our analyses were limited to the appeals process for denied Part B claims—rather than managed care, Medicare entitlement, and overpayment cases—because Part B cases constitute the majority of appeals. We also excluded Part B claims processed by durable medical equipment contractors and fiscal intermediaries to focus on the work performed by carriers. We reviewed the four levels of the administrative appeals process; our scope did not extend to the federal district court level. To gain a better understanding of the process for Part B appeals at the time the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) was passed and the changes it mandated, we reviewed agency procedures for completing Part B appeals, regulations and agreements guiding Medicare appeals, and other laws. We also analyzed appeals workload data and interviewed officials at the Centers for Medicare & Medicaid Services (CMS) and at all levels of the administrative appeals process—the carriers, the Office of Hearings and Appeals (OHA), and the Medicare Appeals Council (MAC). We reviewed regulations and procedures pertaining to the initial denials of claims and the submission of appeals by providers and beneficiaries. We also examined the processes for data management and the guidelines and regulations for adjudicating cases at all levels. We reviewed the memorandum of understanding between the Health Care Financing Administration and the Social Security Administration, which outlines the responsibilities of both agencies in the adjudication of Medicare appeals. In addition, we reviewed the October 2002 ruling implementing selected BIPA amendments and the proposed rule for the implementation of the balance of the BIPA amendments to the appeals process. We also analyzed appeals data from CMS, four selected carriers, OHA, and the MAC to understand the scope and efficiency of the Medicare appeals process and the characteristics of appeals. All data examined were for cases adjudicated from fiscal years 1996 through 2001, with a primary focus on fiscal year 2001, which represents the conditions that existed at the time BIPA was passed. In reviewing later data and in conversations with the appeals bodies, we confirmed that the conditions reflected in the data are relatively unchanged. Limitations in collected and reported data at each level precluded comprehensive and consistent analyses in some cases. CMS and the MAC alerted us to some limitations in their data, including inconsistency in data entry, changes in data systems that caused the loss of data, and poorly defined variables. At some levels, only aggregated data were available, which did not permit detailed analysis. We studied carrier performance by selecting four carriers located in different regions of the country and obtaining processing data on appeals submitted to those carriers at the first two levels of appeals. We also reviewed the results of CMS's contractor performance evaluations of carriers' appeals activities in fiscal years 1999, 2000, and 2001.
We visited three OHA local hearing offices located in proximity to three of the four selected carriers' appeals operation centers to learn more about their role in the appeals process and to assess the impact of carrier performance on their operations. We also examined the processes and procedures used at the OHA local hearing offices. To understand the efficiency of the appeals process, we examined the average total time to process appeals at each level and the average time spent in each step of the adjudication process at OHA and the MAC. We also examined MAC data to determine the number of cases remanded to OHA because of lost files in fiscal year 2001. Appeals bodies performed analyses of their appeals data at our request. CMS performed analyses of the Contractor Reporting of Operational and Workload Data (CROWD), including the reason for initial claims denials, the time each carrier took to process carrier reviews and carrier hearings, and the number of cases at the first three levels of appeal. CMS analyses of CROWD, OHA analyses of its data, and our analyses of the MAC's data also provided information on the average time spent in adjudicating appeals and the number of pending cases. OHA's central facility analyzed its Part B data based on our request, and we analyzed data provided by the MAC to determine the time elapsed between processing milestones at OHA and the MAC. In the analysis of the time spent in the various phases of case processing at the MAC, cases with missing date information or cases with negative dates were omitted. All results of, and methodologies for, our analyses of MAC data were examined and confirmed by the MAC. In fiscal year 2001, carriers processed about 773 million Medicare Part B claims and rejected or denied, in full or in part, about 161 million—or 21 percent—of the claims processed. Many claims are rejected because they are missing information or are duplicates of claims previously processed and paid or denied. In fiscal year 2001, carriers rejected over 19.5 million claims that were missing information and more than 40 million claims that they considered duplicates. Duplicate claims may be submitted for several reasons. For example, inconsistent regulations may confuse providers, causing them to resubmit denied Part B claims—even though Medicare rules do not allow this—because Medicare allows denied Part A claims to be resubmitted for payment. Also, turnover in administrative and billing personnel at providers' offices may result in confusion about whether a claim was previously submitted and about the circumstances under which a claim can be resubmitted for payment. According to officials from the Centers for Medicare & Medicaid Services (CMS), carrier error also contributes to the rate of duplicate submissions because some carriers' systems have limitations that prevent them from always recognizing appropriate claims as distinct. For example, if a claim is submitted that appropriately includes the performance of the same service on two separate limbs, the two distinct services may be construed as duplicate claims by some carrier systems. Claims are denied if they do not meet the requirements in Medicare statutes, federal regulations, or CMS's national coverage determinations. Carriers may also deny claims based on their own local medical review policies and local coverage determinations, which may enhance or clarify national Medicare policy. CMS compiles data submitted by carriers categorizing the reasons for denying claims.
Table 1 shows the reasons for denials of Part B claims in fiscal year 2001, excluding rejections. Although CMS has established the categories for data submission shown in table 1, it has not provided strict definitions of these categories for carriers to follow. Instead, each carrier has developed its own unique set of definitions for each category. As a result, these data do not provide a precise or reliable explanation of the reasons for denial. For example, the category "other," which comprised more than 17 percent of reported Part B denials in fiscal year 2001, may include denials at one carrier that another carrier would have included in another category. Relatively few cases are appealed when compared to the number of denials, and only a small fraction is appealed to the highest level. CMS, the Office of Hearings and Appeals (OHA), and the Medicare Appeals Council (MAC) do not track the number of denied claims that are appealed, although CMS collects the number of claims that are adjudicated in the appeals process at the carrier review, carrier hearing, and OHA levels. In fiscal year 2001, about 7.1 million claims—less than 7 percent of denied Part B claims—were adjudicated at the carrier review level. In that year, about 554,000 Part B appeals were adjudicated at the carrier hearing level and over 201,000 at OHA. The MAC received about 8,800 Part B appeals cases in fiscal year 2001; however, the MAC does not track the number of claims that make up those cases. Appeals requests at the higher levels have grown rapidly in recent years, as shown in table 2. For example, requests for Medicare appeals at OHA—the third level of appeals—increased a total of 200 percent from fiscal year 1996 to fiscal year 2001, and the MAC's workload grew by nearly 500 percent from fiscal year 1997 to fiscal year 2001. Ankit Mahadevia, Margaret J. Weber, Anne Welch, and Craig Winslow made major contributions to this report.
Appellants and others have been concerned about the length of time it takes for a decision on the appeal of a denied Medicare claim. In December 2000, the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) required, among other things, shorter decision time frames. BIPA's provisions related to Medicare appeals were to be applied to claims denied on or after October 1, 2002, but many of the changes have not yet been implemented. GAO was asked to evaluate whether the current Medicare appeals process is operating consistent with BIPA's requirements and to identify any barriers to meeting the law's requirements. BIPA demands a level of performance, especially regarding timeliness, that the appeals bodies--the contract insurance carriers responsible for the first two levels of appeals, the Social Security Administration's (SSA) Office of Hearings and Appeals (OHA), and the Department of Health and Human Services (HHS) Medicare Appeals Council (MAC)--have not demonstrated they can meet. While the carriers have generally met their pre-BIPA time requirements, in fiscal year 2001, they completed only 43 percent of first-level appeals within BIPA's 30-day time frame. In addition to average processing times more than four times longer than those required by BIPA, OHA and the MAC--the two highest levels of appeal--have accumulated sizable backlogs of unresolved cases. Delays in administrative processing due to inefficiencies and incompatibility of their data systems constitute 70 percent of the time spent processing appeals at the OHA and MAC levels. The appeals bodies are housed in two different agencies--HHS and SSA. The lack of a single entity to set priorities and address operational problems--such as incompatible data and administrative systems--at all four levels of the process has precluded successful management of the appeals system as a whole. Uncertainty about funding and a possible transfer of OHA's Medicare appeals workload from SSA to HHS have also complicated the appeals bodies' ability to adequately plan for the future.
DHS and TSA share responsibility for the research, development, and deployment of passenger checkpoint screening technologies. The Aviation and Transportation Security Act established TSA as the federal agency with primary responsibility for securing the nation's civil aviation system, which includes the screening of all passengers and property transported to, from, and within the United States by commercial passenger aircraft. Additionally, the Homeland Security Act of 2002 established DHS and, within it, the Science and Technology Directorate for, among other things, conducting research, development, demonstration, and testing and evaluation activities relevant to DHS. DHS's Science and Technology Directorate is responsible for testing and evaluating aviation security technologies, including AIT systems, at the TSL on behalf of TSA. DHS and TSA conducted five types of tests to evaluate the performance of AIT-ATR systems. Qualification testing. TSL conducted qualification tests in a laboratory setting to evaluate the technology's capabilities against TSA's procurement specification and detection standard that specified the required detection rate AIT systems must meet in order to qualify for procurement. Qualification tests evaluate the technology's detection of threat items that are not artfully concealed as they are in covert tests, but do not test the entire system, including the SO's interpretation and resolution of alarms. Qualification testing also includes testing of the system's false alarm rate. For the purposes of this report, we refer to qualification testing as laboratory testing. Operational testing. TSA conducted operational tests that assessed the technology's detection performance, called threat-inject tests, at airports to evaluate the AIT-ATR systems' ability to function in an operational environment. Operational testing also assesses how well AIT systems are suited for use in a real-world aviation checkpoint environment after systems have successfully completed qualification testing in a laboratory setting. For example, operational testing includes determining whether the system interfered with other equipment fielded at the checkpoint and whether the system met TSA's requirements. Further, DHS's acquisition policy requires that operational tests be conducted prior to an agency procuring a technology. According to TSA testing documentation, threat-inject tests are not intended to evaluate effectiveness of the entire AIT-ATR system, which includes the technology, the personnel who use the technology, and the processes that govern screening, in an operational setting. Covert testing. TSA's Office of Inspection and the DHS Office of Inspector General conducted covert tests of AIT-ATR systems at the passenger checkpoint to identify vulnerabilities in TSA's screening process. According to TSA officials, those tests were intended to identify weaknesses in the technology, the operators who used it, and TSO compliance with SOPs by artfully concealing threat objects intended to simulate a likely terrorist attack. Performance assessments. TSA conducted covert performance assessments of TSO compliance with SOPs under the Aviation Screening Assessment Program (ASAP), which TSA uses as a standard performance measurement for the Office of Management and Budget. According to TSA officials, ASAP assessments determine SO adherence to TSA's SOPs and are not intended to test AIT-ATR system capabilities. Checkpoint drills.
In accordance with TSA’s IED checkpoint drill operational directive, TSA requires personnel at airports to conduct drills to assess TSO compliance with TSA’s screening SOPs and to train TSOs to better resolve anomalies identified by AIT-ATR systems. TSA conducts those drills at airports using test kits that contain inert bombs, bomb parts, and other threat items. According to TSA officials, IED checkpoint drills assess SO adherence to TSA’s SOPs and are not intended to test AIT-ATR system capabilities. TSA uses a multilayered security strategy aimed to enhance aviation security. Within those layers of security, TSA’s airport passenger checkpoint screening system includes, among other things, (1) screening personnel; (2) SOPs that guide screening processes conducted by TSOs; and (3) technology, such as AIT-ATR systems, used to conduct screening According to TSA, those elements collectively determine of passengers.the effectiveness and efficiency of passenger checkpoint screening. In strengthening one or more elements of its checkpoint screening system, TSA aims to balance its security goals with the need to efficiently process passengers. Passenger screening is a process by which TSOs inspect individuals and their property to deter and prevent an act of violence, such as carrying an explosive, weapon, or other prohibited item onboard an aircraft or into the airport sterile area—in general, an area of an airport for which access is controlled through screening of persons and property. individuals for prohibited items at designated screening locations, referred to as checkpoints, where TSOs use technology and follow SOPs to screen passengers. According to TSA’s SOP for AIT-ATR systems, three TSOs are required to operate lanes equipped with AIT systems: one divestiture officer (of either gender), one male SO, and one female SO. See 49 C.F.R. § 1540.5. As we reported in January 2012, TSA’s requirements for the AIT system have evolved over time. TSA continued to use those revised requirements to determine whether the AIT-ATR system met the agency’s needs. Additionally, TSA used those requirements to evaluate the next generation of AIT systems, referred to as AIT-2. Further, TSA’s requirements for AIT systems are based on tiers that correspond to the relative size of items that the AIT system must identify and requirements that the AIT system must meet, with Tier I being the level currently deployed AIT systems already meet and Tier IV being TSA’s anticipated goal for AIT systems to meet. TSA’s procurement of AIT-2 systems requires vendors to ensure AIT-2 systems meet Tier II requirements and provide faster throughput, among other things. TSA plans to seek proposals from AIT-2 vendors to provide Tier III and Tier IV capabilities by time frames specified in its AIT roadmap. TSA did not initially plan for AIT- IO systems to meet levels beyond Tier III, but included Tier IV in response to our recommendation. TSA does not collect or analyze three types of available information that could be used to enhance the effectiveness of the entire AIT-ATR system. First, TSA does not collect or analyze available airport-level IED checkpoint drill data on SO performance at resolving alarms detected by the AIT-ATR system to identify weaknesses and enhance SO performance at resolving alarms at the checkpoint. 
Second, TSA is not analyzing AIT-ATR systems' false alarm rate in the field using data that could help it monitor the number of false alarms that occur on AIT-ATR systems and the potential impacts that those false alarms may have on operational costs. Third, TSA assesses overall AIT-ATR system performance using laboratory test results that do not reflect the combined performance of the technology, the personnel who operate it, and the process that governs AIT-related security operations. TSA does not collect or analyze IED checkpoint drill data because it does not ensure compliance with its operational directive that requires each airport to conduct IED checkpoint drills each week. Specifically, the operational directive, originally issued in February 2010 and updated in November 2012, requires TSA personnel at airports to conduct a certain number of IED drills per checkpoint lane every week at each airport. The total number of drills per pay period must be split evenly between carry-on baggage and passenger screening. Additionally, for those airports equipped with AIT systems, a certain percentage of on-person drills must be conducted on AIT systems and a certain percentage must be conducted on walk-through metal detectors. TSA is not enforcing compliance with its directive, and as a result, data on SO performance are not being consistently collected or reported by approximately half of the airports with AIT-ATR systems. For example, according to TSA data, we found that TSA personnel at almost half of the airports with AIT-IO or AIT-ATR systems did not report any IED checkpoint drill results on those systems from March 2011 through February 2013. Of the airports at which TSA personnel conducted IED checkpoint drills, the number of drills conducted varied from 1 to 8,645. Further, roughly four-fifths of the on-person IED drills were conducted by screening passengers with metal detectors, with the rest conducted by screening passengers with AIT systems, which did not comply with the directive's specified requirements on the number of drills that must be conducted on each type of technology. According to TSA officials, TSA's Office of Security Operations is responsible for overseeing compliance with the operational directive at airports, but it does not analyze the IED checkpoint drill data at the headquarters level. Further, TSA officials told us that TSA formerly tracked the number of IED checkpoint drills in a monthly management report for federal security directors, but in fiscal year 2012, that report was replaced by an executive scorecard that tracks each airport's IED checkpoint drill pass rate but does not include the number of drills conducted. TSA officials stated that federal security directors could conduct very few drills that are easy for SOs to identify in order to achieve a high pass rate, since the details of the drills are not provided to headquarters or analyzed beyond the pass rate. According to TSA officials, the agency does not ensure compliance with the directive at every airport because it is unclear which office within the Office of Security Operations should oversee enforcing the operational directive. According to officials from TSA's Office of Training and Workforce Engagement, that office had the ability to monitor the program until TSA began using federal security director scorecards in 2012, which are reviewed by the Office of Security Operations.
As a result, it is still unclear which office is ultimately responsible for overseeing whether TSA is in compliance with the operational directive at airports. Data on IED checkpoint drills could provide insight into how well SOs resolve anomalies detected by the AIT systems, information that could be used to help strengthen the existing screening process. By not clarifying which office is responsible for overseeing TSA's IED checkpoint drills operational directive, directing that office to ensure enforcement of the directive in conducting these drills, and analyzing the data, TSA is missing an opportunity to identify any potential weaknesses in the screening process, since performance depends in part on the ability of SOs to accurately resolve anomalies. TSA is not analyzing available data on the number of secondary screening pat-downs that SOs conduct as a result of an AIT-ATR system alarm, which indicates that the system has detected an anomaly. Analyzing this information could provide insight into the number of false alarms that occur in the field, which may affect operational costs. Specifically, when the AIT-ATR system identifies the presence of an anomaly, indicated by an alarm, the SO must resolve the anomaly by conducting a pat-down to determine whether the anomaly is a threat item. If the SO does not resolve the anomaly during the pat-down (i.e., by locating an item in the location identified by the AIT-ATR system alarm), this may be attributed to either a false alarm (the AIT-ATR system identified an anomaly when none actually existed) or SO error (the SO did not identify an anomaly that was present). By not analyzing such operational data, TSA is limited in its understanding of the operational effectiveness of deployed AIT-ATR systems. TSA collected information on false alarm rates through laboratory testing conducted at TSL. These laboratory test results demonstrated that AIT-ATR systems have a higher false alarm rate than AIT-IO systems. Our analysis showed that the AIT-ATR system's false alarm rate can be expected to range significantly based on the estimate's 95 percent confidence interval, which could have implications for SO performance at resolving alarms and for operational costs. Although TSA's detection standard required AIT-ATR systems to meet a specific false alarm rate, TSL laboratory test results on the AIT-ATR system indicate that certain factors, such as body mass index (BMI) and headgear, such as turbans and wigs, may contribute to greater fluctuations in the false alarm rate, either above or below that threshold. For example, the false alarm rate for passengers with a normal BMI was less than the false alarm rates for overweight and obese passengers. Additionally, the AIT-ATR system had a higher false alarm rate when passengers wore turbans and wigs. While TSA did not include the false alarm rate as a key performance requirement that could be used as a basis to accept or reject AIT-ATR systems, higher false alarm rates could result in higher operational costs. According to TSA, the AIT-ATR systems' current false alarm rate could produce an increase in annual staffing costs in the field, but it has not conducted studies on this issue. According to DHS's Science and Technology Directorate, effective checkpoint screening technologies have lower false alarm rates, as well as higher throughput and lower costs of operations, which enhance the effectiveness and efficiency of how TSA screens passengers.
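The preceding discussion notes that the AIT-ATR system's false alarm rate "can be expected to range significantly" around its 95 percent confidence interval. As a minimal illustration of why a laboratory point estimate carries such a range, the sketch below computes a Wilson 95 percent confidence interval for a false alarm rate under a simple binomial model. The trial counts shown are hypothetical, since the actual TSL sample sizes and rates are sensitive and are not reproduced in this report.

```python
import math

def wilson_interval(false_alarms, trials, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p_hat = false_alarms / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical laboratory counts -- not TSA's actual (sensitive) test data.
for n in (200, 2000):
    lo, hi = wilson_interval(false_alarms=round(0.05 * n), trials=n)
    print(f"trials={n}: observed rate 5.0%, 95% CI ({lo:.1%}, {hi:.1%})")
```

Under these invented counts, 200 trials yield an interval of roughly 2.7 to 9.0 percent, while 2,000 trials narrow it to roughly 4.1 to 6.0 percent, which is why a laboratory point estimate should be read together with its confidence interval.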
TSA’s Functional Requirements Document stated that AIT-ATR systems must have a data collection and reporting system that collects, stores, analyzes, and displays a summary report on the outcomes of scans. The AIT-ATR systems are required to provide, at a minimum, the total number of passengers scanned, total number of passengers on which the system detected anomalies, and the body location of where an anomaly was detected. TSA reported in its System Evaluation Report that the AIT-ATR system was equipped with that data collection and reporting system and the summary report. According to TSA, it verified that currently deployed AIT-ATR systems capture those data in operational testing and evaluation. However, TSA does not collect or analyze those data at headquarters. Rather, TSA gives TSA management at airports the discretion to determine how to use those data and whether to enter those data into TSA’s centralized information management system. TSA officials agreed that collecting and analyzing operational data would provide useful information related to the impact of false alarm rates on operational costs, and collecting those data could be done on a selective basis so that it would not be too labor-intensive. According to TSA officials, TSA is in the process of networking all AIT-ATR systems so that information can be collected at the headquarters level, and when this process is complete, TSA would be able to centrally collect operational data that could provide information on secondary screening outcomes, which provide insight into the operational false alarm rate. TSA officials were not able to provide an estimate of when this will be completed. Given the potential staffing implications associated with a higher false alarm rate, it is important to fully understand the system’s false alarm rate in the field. Without a complete understanding of how the systems perform in the field, TSA may be at risk of incurring significantly higher operational costs than anticipated. Although TSA officials stated that collecting such data could be labor-intensive if not collected selectively, the agency agreed that evaluating operational screening data in the field could provide useful information, and that data could be collected in such a way that it does not negatively affect operations. Standards for Internal Control in the Federal Government calls for agencies to identify, capture, and distribute operational data to determine whether an agency is By not establishing meeting its goals and effectively using resources.protocols that facilitate capturing operational data on passengers at the checkpoint once the AIT-ATR systems are networked together, TSA is unable to determine the extent to which AIT-ATR system false alarm rates affect operational costs and has less information for its decision- making process related to checkpoint screening. According to TSA officials, checkpoint security is a function of technology, people, and the processes that govern them, but TSA does not include measures for each of those factors in determining overall AIT-ATR system performance. TSA evaluated the technology’s performance at meeting certain requirements in the laboratory to determine system effectiveness. Laboratory test results provide important insights but do not accurately reflect how well the technology will perform in the field with actual human operators. Figure 1 illustrates the multiple outcomes of the AIT-ATR screening process. 
Although TSA conducted operational tests on the AIT-ATR system prior to procurement, TSA does not assess how anomalies are resolved by considering how the technology, people, and processes function collectively as an entire system when determining AIT-ATR system performance. TSA officials agreed that it is important to analyze performance by including an evaluation of the technology, operators, and processes, and stated that TSA is planning to assess the performance of all layers of security. According to TSA, the agency conducted operational tests on the AIT-ATR system, as well as follow-on operational tests as requested by DHS's Director of Operational Test and Evaluation, but those tests were not ultimately used to assess the effectiveness of the operators' ability to resolve alarms, as stated in the letter of assessment on the technology from DHS's Director of Operational Test and Evaluation. TSL officials also agreed that qualification testing conducted in a laboratory setting is not always predictive of actual performance at detecting threat items. Further, laboratory testing does not evaluate the performance of SOs in resolving anomalies identified by the AIT-ATR system or TSA's current processes or deployment strategies. According to best practices related to federal acquisitions, technologies should be demonstrated to work in their intended environment. According to DHS's Acquisition Directive 102-01 and its associated guidebook, operational testing results should be used to evaluate the degree to which the system meets its requirements and can operate in the real world with real users such as SOs. TSL's Test Management Plan for AIT systems stated that effectiveness must reflect performance under realistic or near-realistic operating conditions. Additionally, a group of experts on testing best practices assembled by the National Academy of Sciences concluded that agencies should include the human element when evaluating system performance. That group of experts also determined that agencies should assess system effectiveness by conducting performance testing in an operational setting in addition to laboratory testing, which could include SOs during testing. TSA conducted operational tests, but it did not use those tests to determine AIT-ATR effectiveness. Instead, TSA used laboratory tests that did not factor in the performance of the entire system, which includes technology, people, and processes. However, AIT-ATR system effectiveness relies on both the technology's capability to identify threat items and its operators' ability to resolve alarms on those threat items. Given that TSA is seeking to procure AIT-2 systems, DHS and TSA will be hampered in their ability to ensure that future procurements meet mission needs and perform as intended at airports without measuring system effectiveness based on the performance of the AIT-2 technology and the SOs who operate the technology, while taking into account current processes and deployment strategies. TSA has enhanced passenger privacy by completing the installation of ATR software upgrades for all deployed AIT systems but could do more to provide enhanced AIT capabilities to meet the agency's mission needs. Moreover, the agency faces technological challenges in meeting its goals and milestones pertaining to enhancing AIT capabilities.
TSA has met milestones as documented in its roadmap pertaining to the installation of ATR software upgrades that were intended to address privacy concerns and improve operational efficiency for all deployed AIT systems, in accordance with the statutory deadline included as part of the Federal Aviation Administration Modernization and Reform Act of 2012. However, it did not meet proposed milestones documented in its AIT roadmap to provide enhanced capabilities to meet the agency's mission needs. For example, the February 2012 AIT roadmap estimated that TSA would complete installation of Tier II ATR software upgrades for currently deployed AIT systems by December 2012. TSA's updated October 2012 AIT roadmap revised this date to March 2013. According to TSA testing documentation, during operational testing conducted from May through June 2012 at an airport test site, the AIT-ATR Tier II system demonstrated limitations due to noncompliance with certain requirements. Accordingly, TSA decided not to pursue fielding of the Tier II system based on particular deficiencies identified during operational testing. The vendor of this system submitted a new version of the AIT-ATR system for laboratory testing to TSL. In September 2013, the new version had passed laboratory testing and was undergoing operational test and evaluation. As shown in figure 2, TSA began operational test and evaluation for Tier II upgrades 17 months after the expected start date articulated in its October 2012 roadmap. According to TSA, it completed operational test and evaluation in January 2014. According to the time frames in TSA's revised roadmap, it would take an additional 7 months from January 2014 to complete Tier II upgrades. However, TSA had estimated that it would provide Tier III capabilities by the end of fiscal year 2014. Although TSA experienced challenges and schedule slippages related to meeting Tier II requirements for the currently deployed AIT systems, in September 2012, TSA made contract awards to purchase and test the next generation of AIT systems (referred to as AIT-2) from three vendors. These systems are required to be equipped with ATR software and must be capable of meeting enhanced requirements (qualified at least at the Tier II level), among other things. The updated October 2012 roadmap contained milestones for testing and acquiring AIT-2 systems, which TSA has not met. Specifically, TSA is about 9 months behind schedule for AIT-2 testing and procurement, as depicted in figure 3. For example, the roadmap indicated that TSA would begin qualification testing and evaluation for AIT-2 during the first quarter of fiscal year 2013, would complete that testing by January 2013, and would complete deployment by March 2014. However, TSA did not initiate qualification testing until July 2013 (about 9 months behind schedule) because all three vendors had difficulty providing qualification data packages verifying that they had met contractual requirements and that the systems were ready to begin testing. Accordingly, as of March 2014, TSA is not on track to meet the March 2014 deployment milestone, and these efforts have not resulted in enhanced AIT capabilities because currently deployed AIT-ATR systems are qualified at the same Tier I level as the systems originally deployed in 2009.
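The slippages described above are simple differences between roadmap milestone dates and actual dates. The sketch below shows that arithmetic; the input dates are approximations inferred from the month counts cited in this report, since the exact planned dates appear only in figures 2 and 3.

```python
from datetime import date

def months_behind(planned, actual):
    """Whole months of slippage between a planned and an actual milestone date."""
    return (actual.year - planned.year) * 12 + (actual.month - planned.month)

# Approximate milestone dates inferred from the report's narrative;
# figures 2 and 3 contain the authoritative dates.
milestones = {
    "Tier II operational test and evaluation start": (date(2012, 4, 1), date(2013, 9, 1)),
    "AIT-2 qualification testing start": (date(2012, 10, 1), date(2013, 7, 1)),
}
for name, (planned, actual) in milestones.items():
    print(f"{name}: about {months_behind(planned, actual)} months behind schedule")
```

Run as written, the sketch reproduces the approximately 17-month and 9-month slippages cited above.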
We have reported in the past few years that although AIT systems and the associated software have been in development for over two decades, TSA has faced challenges in developing and meeting program requirements in some of its aviation security programs, including AIT. Best practices for acquisition programs state that when key technologies are immature at the start of development, programs are at higher risk of being unable to deliver on schedule. As we concluded in January 2012, at the start of AIT development, TSA did not fully adhere to DHS acquisition guidance and procured AIT systems without meeting all key requirements. According to best practices on major acquisitions, realistic program baselines with stable requirements for cost, schedule, and performance are important to delivering capabilities within schedule and cost estimates. In its AIT roadmap, TSA describes the time frames as notional and explains that establishing definitive timelines for reaching defined, additional tiers is difficult because of intricate dependencies that are outside of the program's control and may vary by manufacturer. However, TSA officials stated that they did not use available scientific research or evidence to help assess how long it would take to develop enhanced capabilities. In setting these time frames, TSA officials told us that TSA did not seek input from national laboratories that have conducted technology assessments and explosives research on behalf of DHS's Science and Technology Directorate, nor did it evaluate vendor data to determine the capabilities of the technology. According to experts we interviewed from Sandia National Laboratories, accurately determining realistic time frames in which vendors would be able to provide enhanced capabilities would require an evaluation of proprietary vendor data to understand how well the technology can meet requirements at a specific tier level. Rather, according to TSA officials, since TSA did not have access to proprietary data, it relied on notional time frames proposed by the AIT vendors, which comprised estimates for when the vendors expected to be able to develop and deliver AIT systems that would meet TSA's requirements. TSA's October 2012 AIT roadmap contains one key element of a technology roadmap—estimated time frames for achieving each milestone—but does not describe the steps or activities needed to achieve each milestone. Moreover, in April 2012, the vendor for currently deployed AIT systems provided TSA with a detailed plan for delivering a system that could meet Tier III requirements; the plan contained proposed milestones and time frames for achieving each milestone. Although TSA relied on discussions with this vendor to estimate roadmap time frames, the agency did not incorporate details from the vendor's plan into its roadmap. According to a representative from this vendor, TSA did not consult with the vendor regarding the risks and limitations of its proposed time frames, including how long it might take to develop various hardware or software modifications, nor did it provide feedback to the vendor after the proposal was submitted. The vendor's April 2012 plan states that after the Tier II system has met TSA's requirements, it would take the vendor several years to develop and deliver a Tier III system for TSA to test, followed by an operational test and evaluation system validation phase that would take several months.
In addition, according to experts we interviewed from the national laboratories that contributed to the development of imaging technology, the milestones contained in TSA's October 2012 roadmap are not achievable because the roadmap did not reflect the time needed to make sufficient improvements to the technology to ensure that it would be able to meet additional tier levels. TSA did not incorporate available information from the national laboratories and vendors into its updated roadmap. As a result, the roadmap underestimated the length of time it would take to develop and deploy AIT-ATR Tier III systems. As discussed later in this report, moving forward, it will be important for TSA to incorporate scientific evidence and information from DHS's Science and Technology Directorate and the national laboratories, as well as nonproprietary information and data provided by vendors, into the next revision of its AIT roadmap to ensure that the time frames for achieving future goals and milestones are realistic and achievable. Consistent with the Homeland Security Act of 2002, as amended, the DHS Science and Technology Directorate has responsibility for coordinating and integrating the research, development, demonstration, testing, and evaluation activities of the department, as well as for working with private sector stakeholders to develop innovative approaches to produce and deploy the best available technologies for homeland security missions. Moreover, we have previously identified key practices that can help sustain agency collaboration and concluded that collaborating agencies can look for opportunities to address resource needs by leveraging each other's resources, thus obtaining additional benefits that would not be available if they were working separately. According to TSA officials, the agency recognizes the need to develop achievable milestones based on scientific evidence and is in the process of developing a roadmap for the entire passenger screening program. They explained that they plan to collaborate with DHS's Science and Technology Directorate to determine milestones for the new roadmap that will be based on a scientific analysis of technology capabilities as well as ongoing research and development efforts. TSA officials stated that they plan to update the AIT roadmap using this new approach and expect the AIT roadmap to be completed by September 30, 2014. A group of experts moderated by GAO in June 2013 stated that DHS must have personnel with technical expertise in ATR software and AIT system development who are engaged throughout the developmental process to ensure that vendors are providing improved capabilities over time. According to these experts, it is important to leverage the technical expertise of academia and the national laboratories to improve capabilities over time and provide insight into reasonable time frames for meeting future tiers. In September 2011, we reported that given continuing budget pressures combined with the focus on performance envisioned in the Government Performance and Results Act (GPRA) Modernization Act of 2010, federal agencies must undertake fundamental reexaminations of their operations and programs to identify ways to operate more efficiently.
While there are various approaches that vendors could take to make needed improvements to the technology, including hardware modifications, software developments, or incorporating new imaging techniques to provide enhanced capabilities, these approaches could take years to develop and would require a significant investment of resources. Moreover, according to scientists we interviewed from the national laboratories, there are several ways to improve ATR software algorithms to enhance system capabilities; however, there is little market incentive for existing vendors to invest in making these improvements or for new vendors to enter the relatively small airport checkpoint market, since one vendor has already met TSA's current requirements. Further, 2 of the 12 experts identified by the National Academy of Sciences with whom we spoke stated that establishing clear requirements would incentivize vendors to improve performance over time. Thus, according to these experts, absent such requirements, it is unlikely that vendors will invest in making the improvements needed to meet TSA's mission needs. According to a representative from the vendor of currently deployed AIT systems, moving from Tier II to Tier III presents new technological challenges because meeting additional tiers will require the development of more targeted algorithms. Accordingly, to develop these new algorithms, vendors would have to build new data sets, conduct research, and invest additional resources before accurately determining realistic time frames for meeting Tier III and Tier IV requirements. Therefore, given the current state of the technology as well as the amount of research that has to be conducted on developing algorithms that can meet Tier III and Tier IV requirements, neither TSA nor the AIT vendors can reliably predict how long it will take to meet Tier IV requirements. Because TSA revised its requirements over time, scientists from the national laboratories noted that vendors have little incentive to meet additional tier levels since they are already meeting TSA's current requirements. In addition, TSA has not obtained the necessary information to accurately understand the future state of the technology. Thus, the agency has little assurance that vendors will provide AIT-ATR systems that meet Tier IV requirements within TSA's estimated time frames. As a result, the future capabilities of the technology and the time frames in which those capabilities will be delivered remain unknown. Given these challenges, TSA will be unable to ensure that its roadmap reflects the true capabilities of the next generation of AIT-2 systems without the use of scientific evidence and information from DHS's Science and Technology Directorate and the national laboratories, as well as nonproprietary information and data provided by vendors, to develop a realistic schedule with achievable milestones that outlines the technological advancements, estimated time, and resources needed to achieve TSA's Tier IV end state. TSA has deployed nearly 740 AIT systems and will spend an estimated $3.5 billion in life cycle costs on deployed AIT-ATR systems and future AIT-2 systems. However, TSA faces challenges in managing its AIT program because it is not using all of the available data that it collects to inform its decisions. For example, TSA does not enforce compliance with its operational directive that requires each airport to conduct IED checkpoint drills each week, nor does it collect or use IED checkpoint drill data on SO performance.
Additionally, TSA is not analyzing available data on the number of secondary screening pat-downs that SOs conduct when the system indicates that it has detected an anomaly, which could provide insight into the number of false alarms that occur in the field and the extent to which these alarms affect operational costs. TSA could improve the overall performance of the AIT system and better inform its decision-making process related to checkpoint screening by clarifying which office is responsible for overseeing TSA's operational directive, directing that office to enforce compliance with the directive, and analyzing the IED checkpoint data to identify any potential weaknesses in the airport screening process, as well as by establishing protocols that facilitate capturing operational data on passengers at the checkpoint to determine the extent to which AIT-ATR system false alarm rates affect operational costs. Although AIT systems and the associated software have been in development for over two decades, TSA has not used available information from the scientific community and vendors to understand the technological advancements that need to be made and to determine the time frames in which AIT systems will meet Tier IV requirements. Therefore, the milestones that TSA uses to guide its procurement of this technology do not incorporate scientific evidence from the national laboratories or vendors that could be used to produce an accurate, realistic roadmap. TSA would have more assurance that its $3.5 billion investment in AIT provides effective security benefits by (1) measuring system effectiveness based on the performance of the AIT-2 technology and the SOs who operate the technology, while taking into account current processes and deployment strategies, and (2) using scientific evidence and information from DHS's Science and Technology Directorate and the national laboratories, as well as information and data provided by vendors, to develop a realistic schedule with achievable milestones that outlines the technological advancements, estimated time, and resources needed to achieve TSA's Tier IV end state. To help ensure that TSA improves SO performance on AIT-ATR systems and uses resources effectively, the Administrator of the Transportation Security Administration should take the following two actions: clarify which office is responsible for overseeing TSA's IED screening checkpoint drills operational directive, direct that office to ensure enforcement of the directive in conducting these drills, and analyze the data to identify any potential weaknesses in the screening process; and establish protocols that facilitate the capturing of operational data on secondary screening of passengers at the checkpoint to determine the extent to which AIT-ATR system false alarm rates affect operational costs once AIT-ATR systems are networked together.
To help ensure that TSA invests in screening technology that meets mission needs, the Administrator of the Transportation Security Administration should ensure that the following two actions are taken before procuring AIT-2 systems: measure system effectiveness based on the performance of the AIT-2 technology and screening officers who operate the technology, while taking into account current processes and deployment strategies, and use scientific evidence and information from DHS's Science and Technology Directorate, and the national laboratories, as well as information and data provided by vendors to develop a realistic schedule with achievable milestones that outlines the technological advancements, estimated time, and resources needed to achieve TSA's Tier IV end state. We provided a draft of this report to DHS for comment. On March 21, 2014, DHS provided written comments, which are reprinted in appendix III, and provided technical comments, which we incorporated as appropriate. DHS generally concurred with our four recommendations and described actions taken, underway, or planned to implement each recommendation. Specifically, in response to the recommendation that TSA clarify which office is responsible for overseeing TSA's Improvised Explosive Device Screening Checkpoint Drills operational directive, instruct the responsible office to enforce the directive, and analyze the drill data to identify any potential weaknesses in the screening process, DHS stated that TSA's Office of Security Operations will initiate a review of programs that contribute to assessing screening performance with consideration of the findings identified in our report. TSA anticipates that it will complete this review by the end of fiscal year 2014. TSA also stated that, by September 30, 2014, the operational directive will be amended to assign responsibility to one office. We believe that these are beneficial steps that would address our recommendation, provided that TSA directs the office to ensure enforcement of the directive in conducting the drills, and uses the data to identify any potential weaknesses in the screening process, as we recommended. In response to our recommendation that TSA establish protocols to help determine the extent to which AIT-ATR system false alarm rates affect operational costs once AIT-ATR systems are networked together, DHS stated that TSA will monitor, update, and report the results of its efforts to capture operational data on the secondary screening of passengers resulting from AIT-ATR false alarms and evaluate the associated impacts to operational costs based on existing staffing levels. Once implemented, the new reporting mechanism will address our recommendation, provided that it captures sufficient information to determine the extent to which AIT-ATR system false alarm rates affect operational costs.
In response to the recommendation that TSA measure system effectiveness based on the performance of the AIT-2 technology and screening officers who operate the technology, while taking into account current processes and deployment strategies before procuring AIT-2 systems, DHS stated that TSA considers several factors when measuring system effectiveness, including documented deployment strategies, airport needs and conditions such as height and checkpoint space, TSA security operations processes and procedures, feedback from transportation security officers who operate the AIT-ATR systems, as well as concept of operations and formal operational and functional requirements documents. Further, DHS stated that TSA's testing process enables TSA to determine if technologies meet required standards and are feasible for use in the airport environment, and that the system evaluation report for AIT-2—which will document system effectiveness using information from the laboratory and operational test reports—will state whether or not the next-generation AIT system has an acceptable operationally effective and suitable rating for use within an airport environment. While these are beneficial practices, we believe that it would be preferable for TSA to measure the AIT-2 system's overall probability of detection by including an evaluation of screening officer performance at resolving alarms detected by the technology in its assessment, as we recommended, since AIT system effectiveness relies on both the technology's capability to detect items and screening officers' ability to correctly resolve alarms. In addition, DHS stated that TSA is currently implementing the Transportation Security Capability Analysis Process, which will be used to better understand TSA's requirements and better articulate those requirements and needs for acquisition and requirements documentation. This is an important first step toward addressing our recommendation, provided that TSA uses this process to determine the overall effectiveness of its system based on the performance of the AIT-2 technology as well as the screening officers who operate the technology and not solely on the capabilities of current AIT technology as has been done in the past. In response to the recommendation that TSA use scientific evidence and information from DHS's Science and Technology Directorate, and the national laboratories, as well as information and data provided by vendors to develop a realistic schedule with achievable milestones that outline the technological advancements, estimated time, and resources needed to achieve TSA's Tier IV end state, DHS stated that TSA has initiated an effort to complete a more comprehensive technology roadmap that forecasts technology progression through detection tiers, estimates cost to mature the technology, and includes a timeline with supporting narrative. TSA expects this roadmap to be completed by September 30, 2014. We believe that these are beneficial actions that could help TSA address the weaknesses identified in this report, and we will continue to work with TSA to monitor its progress on the proposed steps. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time, we will send copies of this report to the Secretary of Homeland Security, the TSA Administrator, the House Homeland Security Committee, the House Subcommittee on Oversight and Management Efficiency, the House Subcommittee on Transportation Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In January 2012, we concluded that TSA had acquired advanced imaging technology (AIT) systems that were not being used on a regular basis and thus were not providing a security benefit. For example, we found that 32 of 486 AIT systems had been used less than 5 percent of the days since their deployment, and that 112 of 486 AIT systems had been used on less than 30 percent of the days since their deployment. Further, we observed that at 5 of the 12 airports we visited, AIT systems were deployed but were not regularly used. For example, at 1 airport we observed that TSA had deployed 3 AIT systems in an area that typically handles approximately 230 passengers. TSA officials informed us at the time that 2 of the AIT systems were seldom used because of the lack of passengers and mentioned that they believed the AIT systems were deployed based on the availability of space. In addition, we observed instances in which AIT systems were not being used because of maintenance problems that affected how often the deployed AIT system screened passengers. We concluded, on the basis of our observations on AIT utilization, that there were concerns about how effectively deployed AIT systems were being used. Accordingly, we recommended that TSA evaluate the utilization of currently deployed AIT systems and potentially redeploy AIT systems based on utilization data, so that those systems not being extensively used could provide enhanced security benefits at airports. The Department of Homeland Security (DHS) agreed, and TSA has taken steps to address our recommendation but has not fully addressed its intent. Specifically, TSA took the following actions. Develop and track AIT utilization metrics. TSA officials we spoke with in October 2012 stated that they revised TSA's metric for measuring utilization based on our January 2012 report to more accurately reflect the amount of time AIT systems were being used. According to TSA's field guide issued in March 2012, TSA measures AIT utilization as the percentage of passengers that are screened by AIT systems. To track AIT utilization based on this metric, TSA developed specific targets to meet that are based on passenger throughput and hours that AIT systems are in operation at an airport. However, the target TSA establishes for an airport is reduced to account for AIT systems that are not operational because of maintenance problems or that are not being used because of lane closures, staffing restrictions, or low passenger volume. Accordingly, the methodology employed by TSA to measure AIT utilization does not accurately measure the extent to which AIT systems are being used, since the metric tracks utilization only during the periods when the systems are actually in use.
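To illustrate the distortion this adjustment can introduce, the following sketch works through the arithmetic with entirely hypothetical figures; the variable names, rates, and downtime share are ours for illustration, not TSA's actual formula or data.

```python
# Hypothetical illustration of a downtime-adjusted utilization metric
# (all figures invented; this is not TSA's actual formula or data).

passengers_at_checkpoint = 10_000   # all passengers in the period
screened_by_ait = 3_000             # passengers actually screened by AIT

# Raw utilization measured against every checkpoint passenger.
raw_utilization = screened_by_ait / passengers_at_checkpoint       # 30%

# Adjusted target: periods when systems were down for maintenance or
# lanes were closed are excluded, which shrinks the denominator.
downtime_share = 0.5                # half of operating hours excluded
eligible = passengers_at_checkpoint * (1 - downtime_share)
adjusted_utilization = screened_by_ait / eligible                  # 60%

print(f"raw: {raw_utilization:.0%}, adjusted: {adjusted_utilization:.0%}")
# The adjusted figure can satisfy a target even though only 30 percent
# of all passengers were screened, because the metric counts only the
# periods when the systems were in use.
```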
Furthermore, to calculate airport targets and track AIT utilization, TSA relies on data submitted by airports into its centralized information management system. However, in September 2013, the DHS Office of Inspector General (DHS OIG) reported that TSA did not have adequate internal controls to ensure accurate data on AIT utilization. Specifically, the OIG found that TSA's utilization data were unreliable because (1) AIT throughput data recorded in its centralized information management system were different from data in the source document, (2) AIT throughput data on the source document were not recorded in its centralized information management system, (3) the starting AIT count was different from the previous day's ending AIT count, and (4) AIT throughput source documentation was missing. Further, because airports manually record and enter AIT throughput into the centralized information management system, the data are prone to recording errors and lack an audit trail to validate their accuracy. Accordingly, without reliable throughput data, TSA decision makers cannot accurately measure AIT utilization at airports. Backscatter X-ray technology uses a low-level X-ray to produce an X-ray image, while millimeter-wave technology beams millimeter-wave radio-frequency energy over the body's surface to produce a three-dimensional image. Since the backscatter vendor was unable to develop Automated Target Recognition (ATR) software by the June 2013 statutory deadline, as extended by TSA, to upgrade all deployed AIT systems with the software, TSA terminated its contract with this vendor and removed all of these systems from airports in order to meet the requirement. Moreover, TSA has not used the utilization data it collects to determine the number of AIT systems that should be deployed to which airports. Accordingly, TSA is not using the data it collects on utilization to inform its deployment decisions. While the actions TSA has taken represent important steps toward addressing our recommendation, ensuring that the utilization data it collects are accurate, and using these data to inform future deployment decisions, would help ensure the effective utilization and redistribution of AIT systems and efficient use of taxpayer resources. This report answers the following questions: 1. To what extent does TSA collect and analyze available information that could be used to enhance the performance of AIT systems equipped with ATR (AIT-ATR)? 2. To what extent has TSA made progress toward enhancing AIT capabilities to detect concealed explosives and other threat items, and what challenges, if any, remain? To determine the extent to which TSA collects and analyzes available information to improve the performance of screening officers (SO) responsible for resolving anomalies identified by ATR software, we analyzed improvised explosive device (IED) checkpoint drills conducted by TSA personnel at airports that submitted data to TSA from March 1, 2011, through February 28, 2013, under TSA's IED checkpoint drill operational directive. TSA's IED checkpoint drill operational directive requires personnel at airports to conduct drills to assess Transportation Security Officer (TSO) compliance with TSA's screening standard operating procedures (SOP) and to train TSOs to better resolve anomalies identified by AIT-ATR systems.
We analyzed those data to determine whether airports were in compliance with TSA's operational directive by analyzing the number and percentage of tests that were conducted on AIT systems and on other passenger screening methods at the checkpoint to evaluate whether, overall, airports with AIT systems had conducted the required proportion of drills between AIT drills and other passenger-screening drills. Additionally, we evaluated airport compliance with TSA's operational directive and Standards for Internal Control in the Federal Government to determine the extent to which TSA is monitoring compliance with its directive. We also reviewed TSA's AIT deployment schedules to determine which type of AIT-ATR system airports had, the dates those systems were first deployed, and the dates systems were upgraded with ATR capability to assess how airport performance varied at resolving anomalies identified by the AIT-ATR system. Further, we analyzed laboratory test results of the AIT-ATR system and the AIT systems that used IOs (AIT-IO) from calendar years 2009 through 2012 conducted by the Transportation Security Laboratory (TSL). We analyzed these data using statistical methods that estimated how the false alarm rates varied according to various characteristics of the mock passenger. We assessed whether the laboratory tests complied with statistical principles by comparing the testing design to generally accepted statistical principles used for data collection. We calculated the false alarm rates using two specific statistical calculations, called bias-corrected cluster bootstrap resampling and random effects methods, to estimate the sampling error of the AIT-ATR systems' estimated false alarm rates. We used each of these methods to estimate the 95 percent confidence intervals of the false alarm rates and achieved similar results using either method (a simplified sketch of the resampling approach appears after this paragraph). We assessed the reliability of these data by reviewing testing reports and related documentation, and we determined these data were sufficiently reliable for the purposes of this report. Furthermore, we compared the extent to which TSA evaluated the performance of the entire system to key acquisition practices established by GAO, DHS's Acquisition Directive 102-01, and TSL's Test Management Plan. We identified key acquisition management practices by reviewing 17 prior GAO reports examining DHS, the Department of Defense, the National Aeronautics and Space Administration, and private sector organizations (see GAO, Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs, GAO-12-833 (Washington, D.C.: Sept. 18, 2012)). We also visited a nonprobability sample of four U.S. airports to observe AIT-ATR systems and interview relevant TSA personnel. We interviewed a total of 46 TSA personnel who operate AIT-ATR systems, selected by airport officials, to obtain their views on system performance, and six Transportation Security Specialists for Explosives to discuss airport IED checkpoint drills. We selected these airports based on airport category and AIT-ATR system deployment. The information we obtained from these visits cannot be generalized to other airports but provided us with the perspectives of various participants in the deployment of AIT units at airports across the country. We also interviewed TSA officials involved in AIT-ATR deployment, training, and covert testing.
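As a rough illustration of the cluster resampling idea, the sketch below computes a simple percentile bootstrap interval for a false alarm rate on invented data. It is not GAO's bias-corrected implementation, and the cluster counts, trial counts, and underlying rate are arbitrary assumptions.

```python
import random

random.seed(1)

# Hypothetical test data: each cluster is one mock passenger with
# repeated screening trials (1 = false alarm, 0 = no alarm).
passengers = [[random.random() < 0.15 for _ in range(20)] for _ in range(50)]

def false_alarm_rate(clusters):
    trials = [t for cluster in clusters for t in cluster]
    return sum(trials) / len(trials)

# Resample whole passengers (clusters), not individual trials, so the
# correlation among a passenger's repeated trials is preserved.
boot_rates = sorted(
    false_alarm_rate([random.choice(passengers) for _ in passengers])
    for _ in range(2000)
)

low, high = boot_rates[int(0.025 * 2000)], boot_rates[int(0.975 * 2000)]
print(f"estimate: {false_alarm_rate(passengers):.3f}, "
      f"95% CI: ({low:.3f}, {high:.3f})")
```

Resampling clusters rather than individual trials is the essential design choice here: trials from the same mock passenger are correlated, and ignoring that correlation would understate the sampling error.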
We visited TSL in Atlantic City, New Jersey, to interview laboratory scientists responsible for testing and evaluating AIT-ATR systems and reviewed TSL documentation related to laboratory test plans, records, and final reports. We interviewed knowledgeable agency officials from TSA, TSL, and DHS's Science and Technology Directorate to better understand how AIT-ATR and AIT-IO system performance was assessed. To determine progress TSA has made and any challenges that remain toward enhancing AIT capabilities, we analyzed TSA's original AIT roadmap dated February 2012, as well as the October 2012 revision. To determine the extent to which TSA has met its projected time frames for AIT-ATR system upgrades and development of the next generation of AIT systems, referred to as AIT-2, we reviewed actions taken by TSA testing officials and compared the actual dates for each milestone with the estimated dates documented in TSA's AIT roadmap. We also reviewed a leading AIT vendor's technology plan for meeting additional tiers to determine the extent to which TSA's AIT roadmap contained achievable time frames for meeting future tier levels. We further reviewed several technology roadmaps for large-scale acquisition programs developed by other agencies and organizations, such as the Department of Defense, as well as technology roadmapping guidance developed by Sandia National Laboratories to enhance our understanding of the fundamental elements of technology roadmaps. We then compared this guidance with TSA's AIT roadmap to determine the extent to which TSA's roadmap contained these elements. We also reviewed prior GAO reports on (1) major acquisition programs to identify best practices for delivering capabilities within schedule and cost estimates and (2) key practices that can help sustain agency collaboration to leverage each other's resources and obtain additional benefits that would not be available if they were working separately. To determine challenges TSA faces toward enhancing AIT capabilities, we interviewed scientists from the Department of Energy's Sandia National Laboratories and Pacific Northwest National Laboratory to obtain their views on current and future capabilities of the technology and the scientific advancements that would need to occur to enable the development of future tier levels. We also interviewed a leading AIT vendor to obtain its views on the extent to which TSA obtained input from the vendor related to its ability to meet future tiers within expected time frames as well as the risks and limitations associated with pursuing alternative approaches for developing successive tiers. We further interviewed TSA acquisition officials to obtain the agency's views on the vendors' ability to meet future tiers within estimated time frames. Last, we interviewed 12 experts identified by the National Academy of Sciences to obtain their views on best practices for testing detection technologies, such as AIT-ATR systems. Our interviews with these experts are illustrative and provide insights about testing best practices. We conducted this performance audit from September 2012 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Stephen M. Lord at (202) 512-4379 or at [email protected]. In addition to the contact named above, David Bruno, Assistant Director; David Alexander; Carl Barden; Carissa Bryant; Susan Czachor; Emily Gunn; Tom Lombardi; Lara Miklozek; Tim Persons; Doug Sloane; and Jeff Tessin made key contributions to this report.
TSA accelerated the deployment of AIT systems, or full-body scanners, in response to the December 25, 2009, attempted terrorist attack on Northwest Airlines Flight 253. Pursuant to the Federal Aviation Administration Modernization and Reform Act of 2012, TSA was mandated to ensure that AIT systems were equipped with ATR software, which displays generic outlines of passengers rather than actual images, by June 1, 2013. All deployed AIT systems were equipped with ATR software by the deadline. GAO was asked to evaluate TSA's AIT-ATR systems' effectiveness. This report addresses the extent to which (1) TSA collects and analyzes available information that could be used to enhance the effectiveness of the AIT-ATR system and (2) TSA has made progress toward enhancing AIT capabilities to detect concealed explosives and other threat items, and any challenges that remain. GAO analyzed testing results conducted by the Transportation Security Laboratory and TSA personnel at airports and interviewed DHS and TSA officials. This is a public version of a classified report that GAO issued in December 2013. Information DHS and TSA deemed classified or sensitive has been omitted, including information and recommendations related to improving AIT capabilities. The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) does not collect or analyze available information that could be used to enhance the effectiveness of the advanced imaging technology (AIT) with automated target recognition (ATR) system. Specifically, TSA does not collect or analyze available data on drills using improvised explosive devices (IED) at the checkpoint that could provide insight into how well screening officers (SO) resolve anomalies identified by AIT systems, including objects that could pose a threat to an aircraft, because it does not enforce compliance with its operational directive. TSA's operational directive requires personnel at airports to conduct drills to assess SO compliance with TSA's screening standard operating procedures and to train SOs to better resolve anomalies identified by AIT-ATR systems. GAO found that TSA personnel at about half of the airports with AIT systems did not report any IED checkpoint drill results on those systems from March 2011 through February 2013. According to TSA, it does not ensure compliance with the directive at every airport because it is unclear which office should oversee enforcing the directive. Without data on IED checkpoint drills, TSA lacks insight into how well SOs resolve anomalies detected by AIT systems, information that could be used to help strengthen existing screening processes. Potential weaknesses in the screening process may go undetected because TSA has not clarified which office is responsible for overseeing TSA's operational directive, directed that office to ensure enforcement of the directive in conducting these drills, or analyzed the drill data. Further, when determining AIT-ATR system effectiveness, TSA uses laboratory test results that do not reflect the combined performance of the technology, the personnel who operate it, and the process that governs AIT-related security operations. TSA officials agreed that it is important to analyze performance by including an evaluation of the technology, operators, and processes and stated that TSA is planning to assess the performance of all layers of security.
By not measuring system effectiveness based on the performance of the technology and SOs who operate the technology or taking into account current processes and deployment strategies, DHS and TSA are not ensuring that future procurements meet mission needs. TSA completed the installation of ATR software upgrades intended to address privacy concerns for all deployed AIT systems; however, it has not met proposed milestones for enhancing capabilities as documented in its AIT roadmap—a document that contains milestones for achieving enhanced capabilities to meet the agency's mission needs. For example, TSA began operational test and evaluation for Tier II upgrades 17 months after the expected start date. Moreover, TSA did not use available scientific research or information from experts from the national laboratories or vendors on the technological challenges that it faces in developing requirements and milestones, because, according to TSA, it relied on time frames proposed by vendors. Thus, TSA cannot ensure that its roadmap reflects the true capabilities of the next generation of AIT systems unless it uses scientific evidence and information from DHS's Science and Technology Directorate, the national laboratories, and vendors to develop a realistic schedule with achievable milestones that outlines the technological advancements, estimated time, and resources needed to achieve enhanced capabilities as outlined in TSA's roadmap. GAO recommends that TSA, among other things, clarify which office should oversee its operational directive, better measure system effectiveness, and develop a realistic schedule before procuring future generations of AIT systems. TSA concurred with GAO's recommendations.
For children who have been abused or neglected by their caregivers, the child welfare system is an expensive and often poor substitute for a permanent home. The children and families in the system often have serious and difficult problems that can require intensive and time-consuming services. Unfortunately, most experts agree that the current system for caring for these children is inadequate. Program officials and policymakers alike are eager to find new and better solutions to meet the growing demands of this vulnerable group. Though not without controversy, managed care has emerged as one strategy to improve the overall care system for families in the child welfare system. The child welfare system is a complicated network of policies and programs designed to protect and promote the safety and well-being of children. Encompassing a broad range of activities, child welfare services include those designed to protect abused or neglected children, support and preserve families, care for the homeless and neglected, support family development, and provide out-of-home care when children must be removed from their families. The Adoption Assistance and Child Welfare Act of 1980 (P.L. 96-272) established requirements that states undertake reasonable efforts to prevent the need to remove abused and neglected children from their families. If separation is required, however, states must ensure adequate care for the children while providing the necessary services to help reunite the family or locate another permanent home for the child if reunification is inappropriate. The child welfare system has been under great pressure to meet increased demands. Over the last decade, rising caseloads have dramatically increased federal, state, and local spending for child welfare services. The needs of children and their families in the system are greater and more complex than ever before. Yet, the current system for funding and serving this more difficult population is fragmented and has strained public agencies’ ability to adequately meet service needs. The child welfare system consists of a myriad of agencies and programs that intervene when children are neglected or abused. State or county child welfare agencies carry the responsibility for ensuring the safety of these children through multiple programs funded by local, state, and federal governments. Children generally come to the attention of the child welfare system when someone—a physician, child care provider, or teacher, for example—reports to the state or local child welfare agency an allegation of abuse or neglect. Child protective services (CPS) workers respond to and investigate these reports, identify services such as parenting classes for the family, and determine whether to remove a child from the family’s home. If removal is warranted, the child is placed in any one of several foster care settings that offer different levels of care depending on the needs of the child. The lowest to highest levels of care—corresponding to the least to most costly foster care settings—include foster family home, therapeutic family foster home, group home, and residential treatment center. A child’s stay in foster care is considered temporary, until the family can be reunited, the child is adopted, or some other permanent living arrangement is made. In addition, states provide family preservation and support services to prevent out-of-home placement and help reunite families. 
These services include family counseling, respite care for parents and caregivers, and services to improve parenting skills and support child development. Primary responsibility for child welfare services rests with the states, and each has its own legal and administrative structures and programs to address the needs of children. In most states, the state child welfare agency makes major administrative decisions; in 12 states, however, counties administer child welfare programs with considerable autonomy to establish policies and priorities within broad state guidelines. While public workers provide some or all child welfare services in some locations, most state and county child welfare agencies have long relied on private service providers—predominantly nonprofit, community-based agencies—to work directly with children and families. For children entering the child welfare system, public caseworkers are typically responsible for (1) developing a service plan for the child that can identify out-of-home care, educational, and health service needs; (2) directly providing or arranging to purchase from private or other public providers the specified services; and (3) periodically monitoring the child's progress. During the mid-1980s through the mid-1990s, the child welfare system witnessed dramatic increases in the number of children reported abused or neglected and placed in foster care. An estimated 3 million children were reported as possible victims of abuse or neglect in 1996. Upon investigation, CPS workers confirmed maltreatment of almost a third of these reported children. During this same period, the foster care population grew almost 80 percent, from 280,000 children in 1986, to 502,000 in 1996. Child welfare experts attribute the rise in the foster care population to such trends as the increasing use of illegal drugs, especially among young mothers in inner-city areas; rising numbers of homeless families; and the growing number of children whose families live in poverty. With increasing foster care caseloads, expenditures for the basic needs of foster children and program administration have risen dramatically. From 1986 to 1996, federal costs for child welfare services increased nearly fivefold to $4.2 billion. The Congressional Budget Office estimates that these costs will rise to $5.9 billion by 2002. States and localities have faced similar expenditure increases while their resources have been constrained by competing demands from other activities. States have found it increasingly difficult to maintain sufficient funding levels to ensure that service needs are met. To address these financial pressures, many states have cut child and family services or kept budgets constant. Further, resource constraints force public child welfare workers to prioritize their caseloads, which often means responding to emergency situations, leaving little time to attend to children already in out-of-home care. No single federal program fully supports the range of services that typically make up state and local child welfare programs. The major federal programs are found in the Social Security Act: Title IV-E Foster Care is an uncapped entitlement that reimburses states for a portion of foster care costs, such as food and shelter, daily supervision, administration, and training for agency staff, for only those children eligible under the Aid to Families With Dependent Children (AFDC) program.
Title IV-E Adoption Assistance, also an uncapped entitlement, reimburses states for a portion of adoption costs, including payments to parents who adopt children with special needs as well as administrative and training costs, for only those children eligible under the Supplemental Security Income (SSI) or AFDC programs. Title IV-E Independent Living reimburses states for some of the cost of providing independent living services for older foster children. Title IV-B Child Welfare Services and Promoting Safe and Stable Families programs provide federal matching grants to states for up to 75 percent of the costs of services such as family preservation and support services, some foster care, and other child welfare services related to preventing out-of-home placements; reuniting families; finding adoptive families; protecting children's safety; and preventing maltreatment. Title XX Social Services Block Grant gives states discretion to fund a wide array of social services for children, families, adults, and the elderly. Federal, state, and local governments share responsibility for funding the child welfare system. In fiscal year 1996, over 65 percent of the $4.7 billion in federal funding for child welfare services under titles IV and XX was for foster care services. The federal share of states' child welfare program costs under title IV ranged from 50 to 77 percent. In addition, where federal sources are fixed by annual appropriations, as in title IV-B, states fund both the required match and any additional costs above the capped amount. Furthermore, through cost-sharing arrangements with counties, states pay all foster care costs for those children ineligible under the federal program. Nationwide, about half of all foster care placements were funded under title IV-E in fiscal year 1996. The child welfare service system is also fragmented. The service needs of children and families known to the child welfare system are more complex than in the past. Facing multiple problems of economic hardship, substance abuse, homelessness, and mental or physical illness, these children and their families more often have serious emotional, behavioral, and medical needs. However, rarely does a single state or local agency have control over the full array of services to appropriately address the needs of the child welfare population. Many of the needed services, such as mental health care and drug treatment, are outside the control of the child welfare system. Public child welfare agencies often must tap into a complex set of human service systems, which are usually supported by separate categorical funding sources and have different eligibility criteria. Gaining access to needed services, especially those outside the child welfare system, can be extremely difficult when other systems also have insufficient capacity or do not share child welfare's priorities. Finally, service providers are rarely held accountable for achieving system objectives, such as improving the quality of care and service outcomes and ensuring children do not languish in foster care. Public agencies traditionally use private service providers who are paid on a fee-for-service basis whereby a provider submits a bill and is reimbursed for the number and type of services delivered. Under this system, the provider has little incentive to reduce the level of care as children's functioning improves, discharge children from care when appropriate, or monitor and assess the number of service units provided, because these actions may result in lost revenues.
The origins of managed care lie in the health sector. Prepaid health care plans were first developed to improve access to and continuity of health care while controlling costs. Early health maintenance organizations (HMO) served primarily an employed population. Contracting with prepaid, fixed-fee managed care plans to deliver health care services to Medicaid beneficiaries first became an option for states in the 1960s. As federal and state Medicaid expenditures soared, states increasingly turned to managed care programs to help bring costs under control and expand access to health care for low-income families. By 1997, states had extended prepaid managed care to more than 48 percent of the Medicaid population. Prepaid managed care plans have two fundamental elements—a prospective capitated payment system and coordinated services. In general terms, states pay contracted plans, such as an HMO, a monthly or capitated fee per enrollee to provide a range of medical services that are coordinated through primary care physicians and typically delivered by an established network of affiliated hospitals, physicians, and other providers. With its fixed prospective payment, this model attempts to create an incentive for plans to provide preventive and primary care and to ensure that only necessary medical services are provided. The second managed care element brings together an array of different services to ensure that an enrollee has access to needed care by linking individual beneficiaries with a single provider responsible for coordinating their health care needs. A capitated payment is a prospective rate paid for a range of services for a specified population. The methods for developing the capitated payment generally involve bundling rates by aggregating costs for a related set of services and paying a single average rate on a fixed-fee basis. Separate rates may be established for specific populations, such as individuals with chronic illnesses or disabilities. The fact that the price is fixed exposes the managed care plan to some financial risk. Plans have an incentive to control expenses to avoid losses but always face a risk that the needs of certain patients may result in unexpectedly high costs. Two types of fixed payment arrangements, in particular, illustrate how risk is shifted to providers. The first type, a case rate payment, transfers to the managed care plan the risk that patients’ service level, duration, and cost will exceed projections. Under this arrangement, the health plan’s payment rate is fixed to cover all expected costs incurred for a specified patient. Although each client generates a new payment, providers have an incentive to reduce the duration of treatment and avoid serving patients whose treatment will be costly or lengthy. The second type, a capitated rate, similarly shifts financial risk when service use is higher than anticipated but also includes a factor of uncertainty because the number of clients that will actually require services is unknown. A capitated rate is a single, previously negotiated, monthly or periodic fee paid for all members of a pool of potential service users, whether or not an enrollee uses any services. The plan is expected to respond to whatever level of service is needed by the enrollees, as long as the service falls within contractual terms. 
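A small worked example can make the risk shift concrete. The rates, caseloads, and margins below are entirely hypothetical and chosen only to show how each payment arrangement changes the provider's incentives:

```python
# Hypothetical comparison of payment arrangements (all figures invented).
enrollees = 100        # pool of potential service users
users = 30             # enrollees who actually need services
units_per_user = 12    # service units delivered to each user
cost_per_unit = 50     # provider's cost to deliver one unit

provider_cost = users * units_per_user * cost_per_unit        # $18,000

# Fee-for-service: every unit delivered is reimbursed, so the payer
# bears the risk and the provider gains from delivering more units.
ffs_revenue = users * units_per_user * 55                     # $55 per unit

# Case rate: one fixed payment per client served, however many units
# that client uses; long or costly cases become the provider's risk.
case_rate_revenue = users * 600                               # $600 per case

# Capitated rate: a fixed periodic fee for every enrollee, whether or
# not services are used; the provider also bears the risk that more
# enrollees than expected will need care.
capitated_revenue = enrollees * 15 * 12                       # $15/month

for name, revenue in [("fee-for-service", ffs_revenue),
                      ("case rate", case_rate_revenue),
                      ("capitated", capitated_revenue)]:
    print(f"{name}: revenue ${revenue:,}, margin ${revenue - provider_cost:,}")
```

Under the two fixed arrangements, revenue no longer rises with service volume, so the provider profits by managing utilization rather than by delivering more units; this is the incentive reversal the capitated model is designed to produce.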
In a managed health care plan, the primary care provider is responsible for delivering or arranging for the delivery of all health services required by the covered person under the conditions of the provider contract. A primary care physician is typically responsible for approving and monitoring the provision of all services covered by a health plan to the patient and family. As the case manager, this primary care physician acts as the gatekeeper or single point of entry for patient access to health care services. To simplify access to a continuum of services and ensure coordination of care, the plan may incorporate a broad range of general and specialty services within a network or organization of affiliated providers. In addition, the plan may provide services itself or authorize out-of-network services. To discourage and reduce unnecessary procedures or inappropriate service use, a patient may be required to obtain prior approval, or preauthorization for payment, before admission to inpatient facilities, emergency rooms, or other high-cost or high-risk services. Furthermore, requiring approval of care as appropriate and necessary before payment is authorized reduces excessively prolonged or unnecessarily expensive treatment. Costs can also be reduced when patients are diverted from unnecessary or overly expensive levels of care into suitable, less costly alternatives. Public agencies are beginning to adopt these managed care elements in their child welfare systems. According to child welfare experts, policymakers and child welfare administrators are attracted by the twin promises of managed care—cost containment and improved service access and quality—and hope that managed care can remedy shortcomings in the current child welfare system. Currently, for example, a public child welfare caseworker may have difficulty accessing different services, such as group or individual counseling and parent training, from a number of separate providers. This fragmentation can lead to delays in receiving needed services and prolonged stays in foster care, and offers little assurance that children and families are getting services that best match their needs. In addition, provider payments are typically made on a fee-for-service basis with little incentive for the provider to reduce the child's type and amount of services or recommend discharge from care when appropriate. Under managed care, a single entity is responsible for arranging and coordinating the child's care among a network of providers and is reimbursed on a capitated basis rather than for the total amount of services provided. Similar to the pressures the Medicaid program faced, rapidly increasing child welfare expenditures and eligible populations have strained state and local budgets, while providers and policymakers have experienced difficulties in meeting their fiscal or program goals amid calls for system reform. As we have previously reported, the current federal system for financing child welfare services makes it easier for states to place children in foster care rather than provide services to avert the need for foster care because federal funding under the open-ended entitlement program of title IV-E is available only when a child is in out-of-home care. Federal funding under titles IV-B and XX is capped for in-home services, such as those to prevent the need to reenter foster care, and has not kept pace with growing demands.
The implementation of Medicaid managed care—which serves many of the same vulnerable populations that child welfare agencies serve—has also created interest in managed care in child welfare. Child welfare clients may already be required to access health care or behavioral health services through a managed care system. As of fiscal year 1997, more than 48 percent of Medicaid-eligible clients, including some in the child welfare system, were enrolled in managed health care or behavioral health programs. Child welfare experts point to several other factors that have coalesced to heighten interest in managed care. First, child welfare providers are responding to and adapting their practices to new managed care environments in other human service systems. Private child welfare service providers often serve children, youth, and families from various service systems, including child welfare, mental health, and juvenile justice. In an attempt to diversify their funding streams, some of these providers have pursued the commercial business of behavioral health managed care organizations and marketed themselves as a less expensive alternative to inpatient care. These efforts enable the providers to capture new private clients, learn managed care skills, and advocate privatizing the child welfare service-delivery system. Second, states are seeking the opportunity to allocate money more efficiently by using more appropriate and effective services. As many states have addressed financial pressures by cutting children and family services or keeping budgets constant in the face of increasing demand, managed care is seen as well suited to downsizing and cost containment. Finally, behavioral health managed care organizations are seeking new markets in the child welfare system. In chapters 2 and 3, we describe state and local efforts to implement managed care in the child welfare system. In contrast with health care, the child welfare system is uniquely affected by several factors, including the characteristics of the clientele, the role of the judiciary, legal and policy goals and responsibilities, and service delivery complexities. One significant difference between child welfare and health care is that most child welfare services are delivered on an involuntary basis. Child welfare services are most often imposed on unwilling clients at the direction of the courts, police, or a CPS worker, after concerns are raised about their parenting abilities. In the health care system, the patient wants the protection of a health insurance plan and seeks out services when the need arises. In child welfare, families are often resistant or hostile to system intervention. In these situations, according to child welfare experts, it may not be appropriate to assume that a limited number of visits or treatments will resolve long-standing family issues that have led to child abuse or neglect. A second difference between the child welfare and health care systems is that courts play a key role in child welfare decision-making, and this could limit providers' control over costs. In health care, the payer is also the arbiter of what services are delivered and paid for, guided by contractual and regulatory guidelines. In child welfare, however, many cases—including all out-of-home care—are under the jurisdiction of the state's family or juvenile court. State law determines the full extent of the court's authority.
In some states, neither the public child welfare worker nor a service provider ultimately controls a child's treatment plan because the court has the authority to order specific services and placement at individual facilities, or the court may order different services from those recommended by child welfare professionals. Moreover, courts may be backlogged or disagree with recommendations about children's movement within foster care or discharge from care; in either case, children may remain in care, or at a given level of care, longer than clinically necessary. A third factor unique to the child welfare system is the scope of care provided by public agencies. Public agencies do not look merely at a child's clinical needs but also at the child's other needs, such as safety, protection, and social supports. In addition, the overall well-being of the child's family is a concern of public agencies. In many cases, the child's primary need may be the services provided to the parents, such as substance abuse treatment or parenting classes. Recalcitrant, hostile, or uncooperative parents may prolong the intervention required for a seemingly simple problem. In addition, if a child cannot be reunited with the family, then the child welfare agency assumes the difficult task of finding an alternative permanent home for that child that will offer appropriate supervision and guidance until the child reaches adulthood. Last, the links between diagnoses, interventions, and outcomes are less clear for clients in the child welfare system. In health care, one can generally predict with reasonable confidence the incidence of particular ailments for a given population, costs of care, and probable outcomes. This predictability allows a managed care provider to anticipate service demand and, therefore, costs. In the child welfare system, however, predicting needs and outcomes is much less certain and depends more on social factors, which are less predictable than physical ones. Moreover, social services are much less standardized than health services and are often delivered very differently from community to community and from family to family, according to factors unrelated to the family's situation. Federal involvement in managed care in child welfare has thus far been limited. While the federal role in states' managed health care efforts has expanded recently, no comparable role has emerged for managed care in child welfare. The Administration for Children and Families (ACF) within the Department of Health and Human Services (HHS) administers the federal child welfare programs; this involves monitoring states' compliance with federal statutes and regulations, providing technical assistance to states, funding various resource centers across the country, and supporting research and evaluation efforts. To date, ACF has included managed care topics in some of its conferences and responded to specific inquiries about managed care financing in child welfare but has not provided any formal guidance or technical assistance. In addition, HHS was given the authority in 1994 to establish no more than 10 child welfare demonstrations that waive certain restrictions in title IV-E and allow broader use of federal foster care funds. Although a waiver could facilitate managed care, the purpose for granting waivers is to test a variety of innovations including but not limited to managed care.
105-89) expanded HHS’ authority to approve up to 10 states’ waiver demonstrations in each of the 5 fiscal years 1998 through 2002. The Chairman of the Subcommittee on Human Resources of the House Committee on Ways and Means asked us to review states’ efforts to implement managed care arrangements into their child welfare systems. This report describes (1) the extent to which public agencies are using managed care to provide child welfare services, (2) the financial and service delivery arrangements under managed care that are being applied to the child welfare system, and (3) challenges child welfare agencies face as they develop and implement managed care, and the results of such efforts to date. To determine the number of managed care initiatives in the child welfare system and how they are being implemented, we surveyed all 50 states, the District of Columbia, and selected localities. To obtain more detailed information about managed care arrangements as well as implementation challenges and results, we studied ongoing managed care efforts at four locations—Kansas, Massachusetts, Boulder County in Colorado, and Sarasota County in Florida—where different managed care models of varying scope are being implemented. We also interviewed public agency officials in states and localities that are implementing managed care initiatives and reviewed available documentation about individual initiatives. In addition, we reviewed relevant literature, and interviewed experts about managed care in both health care and child welfare as well as representatives from national and state child advocacy organizations. To learn about the federal government’s involvement in state efforts to implement managed care, we spoke with officials at the Substance Abuse and Mental Health Services Administration (SAMHSA) and ACF. The purpose of the survey was twofold. First, we wanted to determine the total number of states and localities operating or considering managed care projects or initiatives nationwide and, for those projects in operation, obtain a description of the types of managed care arrangements being used. Second, because some locations have more than one ongoing managed care initiative, we wanted to obtain detailed program information on the initiative that is serving the most children. In February 1998, we mailed a copy of the survey to the child welfare director in each of the 50 states and the District of Columbia. We also mailed the survey to child welfare officials in 43 localities we had identified as possibly implementing or considering managed care initiatives. We learned of those officials by telephoning the state child welfare directors in the 13 county-administered states and asking them to identify the applicable localities in their states. We received responses from 48 state agencies, the District of Columbia, and all the local agencies. On the basis of the returned surveys and telephone contacts with several local agency officials, we excluded from our analyses 17 counties and 1 district that either (1) were implementing a multiple-county initiative and chose to designate one county as the survey respondent, or (2) were neither implementing nor considering managed care arrangements in their child welfare systems at the time of our survey. Hence, for our adjusted population size of 76 state and local agencies, the 74 valid responses resulted in an overall response rate of 97 percent. 
From these responses, we obtained general information on 36 initiatives and more detailed information on the 27 initiatives serving the most children in each location. We did not verify the information obtained through the survey. However, we conducted telephone interviews with state and local respondents to clarify responses, as needed, and obtained additional information about program and population coverage and available descriptive documentation. In addition, we obtained more detailed information about ongoing managed care efforts at four locations. We selected four locations—Kansas, Massachusetts, Boulder County in Colorado, and Sarasota County in Florida—to obtain more detailed information about the events and activities surrounding the locations’ decision to implement managed care in their child welfare systems, the process of planning and designing the initiative, the rationale behind changing service delivery and financial arrangements, contracting and subcontracting processes and management, monitoring and accountability activities, results to date, implementation challenges, and lessons learned. On the basis of the relevant literature and in consultation with child welfare experts, we selected these four locations because they were implementing different managed care models, had different circumstances leading to their initiative’s design, and provided examples of both state- and local-level efforts. At each location, we interviewed officials from the state and local child welfare agencies as well as representatives from the primary managed care contractors and subcontractors, and reviewed pertinent documentation. Furthermore, we coordinated our site selection and data collection efforts with researchers at the University of Chicago’s Chapin Hall Center for Children, who were conducting case studies to describe how managed care is being implemented in Kansas and three other locations—Tennessee, Hamilton County in Ohio, and Lake and Sumter Counties in Florida. Where appropriate, information from Chapin Hall’s four case studies is incorporated in this report. We conducted our work between August 1997 and August 1998, in accordance with generally accepted government auditing standards. Nationwide, public child welfare agencies are implementing, planning, or considering managed care initiatives in 35 states. Most ongoing initiatives are serving foster care children with the most complex and costly service needs. In total, however, only a small portion of the nation’s child welfare population is covered by managed care arrangements. In general, public child welfare agencies are entering into managed care arrangements with nonprofit, community-based providers who have historically served as providers in the agencies’ foster care or out-of-home placement systems. These new arrangements, however, have significantly changed the roles and responsibilities of the public and private entities. Finally, for-profit managed care companies have not had a major role in implementing managed care in child welfare, with only a few locations using these organizations to manage the delivery of child welfare services. Appendix I lists the 27 managed care initiatives about which we obtained detailed data, including their implementation date, geographic scope, size, organizational arrangement, and description of the populations and child welfare programs covered. Interest in child welfare managed care is growing, as public agencies launch new efforts or consider developing initiatives in their states. 
However, to date these efforts tend to be small in scale, serving targeted populations of children and their families in a limited number of locations. Public agencies’ exploration of managed care in child welfare is a new phenomenon. Nationally, most managed care initiatives have been operating for 2 years or less. As of March 1998, 36 initiatives were under way in 13 states and had been operating for an average of 20 months. By the end of 1998, managed care initiatives will be operating in four additional states. These numbers will continue to rise in the near future as managed care efforts are being planned or considered in another 18 states. (See table 2.1.) The majority of the managed care initiatives have been implemented by a county or district-level child welfare agency. Of the 36 ongoing initiatives, 23 were established by local agencies. Two counties—Mesa County, Colorado, and Hamilton County, Ohio—have implemented multiple efforts. In addition, eight state agencies have implemented managed care in child welfare, including statewide efforts in four states—Georgia, Kansas, Massachusetts, and Tennessee. Believing that managed care arrangements can contribute to better services and control costs, public child welfare agencies have targeted the most expensive and programmatically difficult populations to serve in their initiatives. Of the 27 largest initiatives about which we obtained detailed information, 25 include the hard-to-serve and most costly children. This population often consists of severely emotionally disturbed children—mostly adolescents—needing mental health services, who are either in or at risk of group or residential foster care placement. About half of the 25 initiatives serve this group exclusively. For example, Indiana’s The Dawn Project targets children aged 5 to 17 who reside in Marion County and (1) are at risk of separation or are already separated from their families and living in a residential treatment center; (2) have been involved with two or more human service systems, such as child welfare, juvenile justice, special education, and mental health; and (3) have had an impairment for more than 6 months. In contrast, some jurisdictions serve all eligible populations—these include one statewide and five countywide efforts. For example, Kansas’ entire foster care, adoption, and family preservation program populations are being served through regional contractors across the state; and in Jefferson County, Colorado, the managed care initiative serves all children and families in the county’s child welfare system. While nearly all 27 initiatives cover children in foster care, some initiatives also include or target children in other child welfare programs, such as adoption and family preservation and support services (see app. I). With few statewide efforts and most managed care activity occurring at the local level, only a small segment of the child welfare population is currently being served under managed care. In the 13 states where initiatives are in place, almost 44,000 children are being served under child welfare managed care arrangements. This represents about 8 percent of the child welfare population in those states, including both in-home and out-of-home care clients, and 4 percent of the nearly 1 million children in the child welfare system nationwide. By the end of 1998, when managed care initiatives are expected to be under way in four more states, the nationwide proportion is likely to increase to about 6 percent. 
The number of children served under managed care in each of the geographic areas covered by an initiative—whether a state- or county-established effort—varies greatly. Numbers range from fewer than 10 children in Oneida County, New York, to as many as 23,200 children in Illinois, with an average of about 4,300 children in state-level efforts and 500 in counties and districts. Many of these states and localities plan to expand the size of their initiatives by increasing the number of clients served, targeting new types of clients, or both. Where state-level initiatives are being implemented, the proportion of the state’s child welfare population being served under managed care ranges from 1 to 80 percent, with a median of about 6 percent. For example, Michigan’s family preservation initiative is serving about 165 children at selected sites around the state, or 1 percent of the state’s child welfare population. At the other extreme, the three statewide initiatives in Kansas are unique in their breadth and geographic scope, covering all 6,000 children in the state’s foster care, adoption, and family preservation programs, representing 80 percent of the state’s child welfare population. Excluded from Kansas’ initiatives are children and families involved in a CPS investigation, a small number of noncustodial families receiving in-home services, and the juvenile offender population. More typically, Georgia’s statewide effort serves 660 children who are in therapeutic residential settings, or 3 percent of the state’s child welfare total. In the local initiatives, the proportion of a county’s or district’s child welfare population currently being served under managed care covers the full range, from less than 1 percent to the entire child welfare population, with a median of 20 percent. At the lowest extreme, Oneida County in New York has just begun its countywide initiative, having so far served 7 children, or less than 1 percent of the county’s child welfare population; the county expects to serve 120 children by the end of the year. At the highest extreme, Jefferson County in Colorado has brought its entire child welfare population of about 1,700 children under its managed care initiative. A more typical example can be found in Albany County, New York; the 1,750 children receiving preventive services represent 20 percent of the county’s child welfare population. Public child welfare agencies have approached both the overall design of their managed care initiatives and the distribution of roles and responsibilities among initiative participants differently. Nonetheless, most of these agencies now contract with private entities to coordinate the provision of an array of child welfare services and to assume administrative tasks, such as processing claims and monitoring program activities. We found that the organizational arrangements among public and private entities implementing managed care generally fall into four models—public, lead agency, administrative services organization, and managed care organization. The public model represents the least change for public child welfare agencies. While the public agency continues its role of coordinating care for children and families as well as providing services, it is both changing the way it reimburses service providers and introducing performance standards. For example, public agencies can incorporate fixed payments into existing contracts with community-based service providers. 
Currently, 10 of the 36 initiatives are using the public model of managed care. Boulder County in Colorado began implementing its Integrated Managed Partnership for Adolescent Community Treatment (IMPACT) initiative in July 1997. Three county agencies—social services, mental health, and youth corrections—jointly formed a new managed care entity to perform gatekeeping, assessment, case planning, concurrent service utilization review, and quality assurance functions for children placed out-of-home. The new public entity merged funding from the three county agencies and currently performs joint programming and placement decision-making for adolescents in need of out-of-home care in group or residential treatment settings. Although the county agencies now take an integrated approach as care coordinator, they continue to rely on the same network of public and private providers for direct services. Boulder County's new public managed care entity receives funds from the state in the form of a block grant. This is, in effect, the public agency's capitated payment for the services it provides to adolescents. In turn, the managed care entity intends to contract with community-based adolescent placement providers on a "subcapitated" basis—that is, the public managed care entity will pass on capitated payments to subcontracted service providers. Finally, the state has introduced a series of performance standards. Figure 2.1 illustrates the organizational arrangement of Boulder County's managed care initiative. Over half of the 36 initiatives—19 in all—are using the lead agency model to incorporate managed care elements into their child welfare systems. In this model, the public agency contracts with a private entity that, as the primary contractor or lead agency, assumes new responsibility for coordinating child welfare services for a defined population of children and families. The lead agency's case management functions can include assessing clients' needs, developing treatment plans, and monitoring progress toward achieving permanency or treatment goals. In addition, the lead agency is responsible for providing all the necessary services, as prescribed by the treatment plan. In this model, the lead agency provides some or all direct services itself but may also subcontract with a network of local service providers. Sarasota County in Florida is using a lead agency model. In January 1997, the state child welfare agency transferred to the Sarasota County Coalition for Families and Children responsibility for coordinating the care of all children and families in the child welfare system in this one county. Formed specifically for this managed care initiative, the Coalition's members are primarily community-based, nonprofit entities and include all major service providers in the county. One Coalition member—the YMCA—has been designated the lead agency responsible for managing the project and contracting with the state public child welfare agency. Whereas the state previously had separate contracts with each service provider, it now has a single contract with the lead agency, which itself developed new subcontracts with the Coalition service providers. The lead agency is responsible for (1) performing administrative tasks, such as disbursement and accounting of state-allocated funds and preparing required reports; (2) monitoring the quality of services provided by Coalition subcontractors; and (3) providing some direct services.
Case management functions have been subcontracted to two other Coalition service providers. Of the ongoing managed care initiatives nationwide, Sarasota County is one of only two locations in the country that have contracted with private entities to manage essentially the entire child welfare system. Figure 2.2 illustrates the organizational arrangement of Sarasota County's managed care initiative. Under an administrative services organization (ASO) model, the public child welfare agency contracts out the administrative or management services, including such activities as billing and reimbursement, development or operation of a data system, training, and technical assistance, separately from the service-delivery tasks. The administrative contractor, or ASO, is not responsible for the delivery of child welfare services to children and families directly. In this model, currently used in three initiatives, the management of clients' care and the provision of direct services remain the responsibility of either the public agency (as in the public model) or a lead agency (as in the lead agency model). For example, Massachusetts' Commonworks program is a statewide initiative whereby the state public child welfare agency has contracted with a for-profit behavioral health managed care organization to function as the ASO. On behalf of the state agency, the ASO's responsibilities include monitoring and reporting on the use of Commonworks services; implementing and monitoring an overall program quality evaluation and improvement system; developing and managing an information system for the entire program; developing the overall financial reporting system; developing, implementing, and managing the billing and payment system; tracking complaints and grievances from clients, families, providers, and the community; and reviewing, monitoring, and reporting on the credentialing of all direct service providers. The ASO does not provide any direct services to children and families. The state agency contracts with a lead agency in each of six regions across the state to develop service networks and coordinate the care of adolescents in group home and residential treatment settings. The six lead agencies are responsible for coordinating the care of Commonworks' youth in their respective regions. Although the lead agencies have no compensatory arrangement with the ASO, each lead agency has a formal agreement with the ASO to discharge its respective responsibilities related to credentialing, utilization review, quality improvement, training, reporting, information systems, payment processing, and care authorizations. Figure 2.3 illustrates the organizational arrangement of Massachusetts' managed care initiative. The managed care organization (MCO) model is most similar to managed care arrangements in health care and is being used in 4 of the 36 child welfare initiatives. Under this arrangement, the public child welfare agency contracts with a private organization to perform administrative services and assume responsibility for developing and subcontracting with a network of service providers. The difference between the MCO and lead agency models is that the MCO does not itself provide services directly to children and families. Rather, the MCO arranges for the delivery of all necessary services through its provider network. The largest MCO managed care effort under way is in Hamilton County, Ohio.
In January 1998, the county child welfare agency contracted with a national for-profit MCO to (1) manage care for children and families in need of either outpatient mental health services or foster care in therapeutic, group home, and residential treatment settings and (2) build an integrated information system for three county agencies—child welfare, mental health, and alcohol and drug addiction. To carry out its administrative responsibilities, the MCO has subcontracted with four other companies to manage outpatient mental health services, maximize Medicaid funding, provide the hardware for the information systems, and manage the software and network. To fulfill its service-delivery responsibilities, the MCO has subcontracted with 22 local service providers. The MCO serves as the organizer, gatekeeper, and manager of services, but is itself precluded from providing any direct child welfare services, unless no other provider can offer the same service and the county agency approves. At the end of 5 years, the county intends to assume the operation of the managed care initiative. Figure 2.4 illustrates the organizational arrangement of Hamilton County’s managed care initiative. Most of the 36 managed care initiatives currently under way are using some version of either the lead agency or public model, as shown in table 2.2. These organizational arrangements tend to rely solely on the community-based nonprofit providers that have traditionally served children and families in the child welfare system. In some lead agency model initiatives, public agencies stipulated in the request for proposals that only those prospective bidders with previous experience in providing child welfare services within the state or locality were eligible to submit bids. Used less frequently, the ASO and MCO models introduce a new type of organization to the child welfare system—a management entity that generally is not itself in the business of delivering child welfare services. In the three initiatives implementing the ASO managed care model, one or more lead agencies are responsible for coordinating and providing direct services. For-profit companies, including organizations with experience providing managed physical or behavioral health care, have not had a major role in developing and implementing managed care initiatives in child welfare. Of the 27 initiatives about which we obtained more detailed information, 10 currently use for-profit organizations as a service provider, managed care entity, or both, as shown in table 2.3. For-profit companies are providing traditional direct services in five of the initiatives using the public model and in one of the initiatives using the ASO model. The companies function as a lead agency in the two initiatives using the lead agency model and the other initiative using the ASO model. Many of these for-profit entities had historically provided services to children and families in their community before the managed care initiative. For example, in Massachusetts’ Commonworks program, a for-profit company that was already a primary provider of mental health services was the successful bidder for one of six regional lead agency contracts. Having little or no experience in the child welfare system, some for-profit managed care companies are functioning as ASOs or MCOs. 
In Milwaukee County’s Safety Services Program in Wisconsin, a behavioral health managed care company linked with two nonprofit, community-based agencies to form a new entity to which the state awarded a contract to become one of four lead agencies. For-profit managed care companies are functioning as the MCO or ASO in three of the five initiatives using these models. Child welfare agencies have implemented capitated payment systems and new service-delivery strategies in their managed care initiatives. In these new financial arrangements, public agencies are developing methods to limit risk and to establish the capitated payment rate. In addition, public agency contracts with providers now require the providers to organize and coordinate a full array of services. With these changes, public agencies are using different approaches to hold providers accountable for outcomes through performance standards and by linking financial rewards and penalties to outcomes. Public agencies are implementing capitated payment systems in managed care initiatives, although they still maintain fee-for-service reimbursement methods in some of their contracts with service providers. States and counties have used a number of different methods to develop capitated contract rates and have created strategies that shift some or all financial risks to the private sector. Finally, some public agencies have pooled funds from different agencies to increase their flexibility to provide multiple services to hard-to-serve children. In 19 of the 27 initiatives about which we obtained detailed information, payments to providers serving children and families under managed care are fixed. In nine initiatives, payment takes the form of a case rate, or a fixed dollar amount for each referred client, and covers contracted services for all clients in the caseload regardless of the extent to which these services are used. For example, in Kansas’ foster care managed care initiative, the state pays each lead agency a case rate for each referred child averaging about $13,850 a case, which is expected to cover the complete operation of the out-of-home care system, including food and shelter, child care, mental health treatment, independent living, reunification services, and case management for the children in foster care, as well as all recruiting and training of foster parents. Eight initiatives use a capitated rate—a fixed payment for all contracted services for a defined client population, such as those residing within a designated geographic area, with no limits on duration of care. Unlike the case rate, each new client does not generate a new stream of dollars; rather, the capitated rate is fixed regardless of how many children are served. In Sarasota County, Florida, for example, the lead agency receives a capitated rate of about $4 million, over a 1-1/2-year period, for providing all the child welfare services that any county resident in the state’s child welfare system might need. A single entity that assumes the total cost of providing a defined scope of services to a defined population of potential users over a specified time period inevitably assumes financial risk as well. Of the 19 ongoing initiatives using capitated payments, only 2 initiatives—the Sarasota County initiative and the Lake and Sumter County initiative in Florida—have transferred full financial risk to the respective private lead agency. 
In nearly all the remaining fixed-payment arrangements, public and private agencies alike have created risk-limiting mechanisms to address the unknowns associated with the size of the population needing services and the scope and duration of those services. Absent good historical data on service use and costs as well as experience with a capitated payment system, public child welfare agencies have explored different ways to limit the financial risk carried by the initiative's service providers. We found that public agencies are using two approaches—one fixes only a part of the provider payment and reimburses some services through the fee-for-service method, and the other uses specific contract provisions to limit the size of the financial risk assumed by the providers. In five initiatives, public agencies have established a fixed rate for only a part of the contracted services. Services not included under this partial fixed rate are reimbursed on a fee-for-service basis. For example, in the Kids Oneida project in Oneida County, New York, the lead agency receives a prepaid, monthly case rate of $2,500—amounting to $30,000 a year—which is expected to cover the full range of services needed to keep children and adolescents with serious emotional, behavioral, or mental health issues in the community or at home. However, the federally eligible behavioral and mental health services that the lead agency provides or purchases may be billed for Medicaid reimbursement on a fee-for-service basis. According to our survey of ongoing managed care efforts, seven initiatives include contract provisions that limit providers' losses when costs exceed the fixed rate. Mechanisms to limit a provider's losses can take several forms, including those where the public agency and provider share excess costs and others where the public agency bears the excess costs alone. One risk-sharing mechanism limits the provider's losses to a percentage of total costs, with either public dollars alone paying any further costs or the provider sharing some of the cost overruns. For example, during the first year of Kansas' foster care managed care initiative, the lead agency contracts contained a 10-percent margin so that the contractor was responsible for all additional costs up to 10 percent above the case rate and the state was to pay for all excess costs beyond that. Under this arrangement, the lead agency would pay all costs up to 110 percent of the case rate and the state would pay anything over that amount. Other techniques address the serious financial burden associated with the catastrophic costs of certain groups of children or circumstances beyond the provider's control, such as an unexpected increase in the number of abuse and neglect reports. One technique is to set aside a pool of funds, known as a risk pool, for use if the cost of care exceeds some targeted amount. For example, in Champaign County, Ohio, the county and its foster care contractor created a risk pool set at a value equal to 10 percent of the contract value, or about $24,000. The contractor funds 40 percent of the pool, and the county funds 60 percent. The risk pool covers catastrophic costs over $32,400 per child, so that costs in excess of this amount are charged to the risk pool; however, once the risk pool is depleted, the county pays the cumulative costs over the $32,400 per child amount.
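The loss-limiting arithmetic just described can be sketched the same way. The percentages and the $32,400 threshold below come from the report; the case rate, cost figures, and functions are hypothetical. The corridor is modeled on both sides because, as discussed later in this chapter, Kansas' first-year contracts paired the 10-percent loss margin with a matching 10-percent margin on savings.

    # Hypothetical illustration of two loss-limiting devices described above.
    # The percentages and the $32,400 threshold come from the report; the case
    # rate, costs, and function names are invented.

    def corridor_settlement(case_rate: float, actual_cost: float, margin: float = 0.10):
        """Kansas-style risk corridor: the contractor's gain (+) or loss (-)
        is capped at `margin` of the case rate; the state pays costs above the
        110-percent ceiling and recoups savings below the 90-percent floor."""
        raw = case_rate - actual_cost                  # uncapped gain or loss
        capped = max(-margin * case_rate, min(margin * case_rate, raw))
        return capped, raw - capped                    # (contractor share, state adjustment)

    # Costs run 20 percent over a $13,850 case rate: the contractor absorbs a
    # loss of 1,385 and the state pays the remaining 1,385.
    print(corridor_settlement(13_850, 16_620))   # (-1385.0, -1385.0)

    def risk_pool_draw(pool_balance: float, child_costs: list, threshold: float = 32_400):
        """Champaign County-style risk pool: per-child costs above the threshold
        are charged to the pool; once the pool is empty, the county pays the rest."""
        county_share = 0.0
        for cost in child_costs:
            excess = max(0.0, cost - threshold)
            draw = min(excess, pool_balance)
            pool_balance -= draw
            county_share += excess - draw
        return pool_balance, county_share

    # A $24,000 pool facing two catastrophic cases: the pool is exhausted and
    # the county picks up the remaining $6,200.
    print(risk_pool_draw(24_000, [50_000, 45_000]))  # (0.0, 6200.0)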
To prevent a fixed payment system from penalizing extraordinarily hard-to-serve children, some managed care initiatives exempt these populations from their rates. This can be accomplished by either not referring certain clients to the managed care entity or allowing the costs associated with serving only these clients to be reimbursed on a fee-for-service or per diem basis. Without such exemptions, the prohibitively high service costs for these clients could place such a drain on the managed care entity's resources that it would not be able to provide the caliber of services these hard-to-serve children need. Furthermore, serving these high-need children could significantly reduce the resources available to serve more typical clients. For example, in both Kansas' foster care managed care initiative and Boulder County's managed care initiative in Colorado, developmentally disabled children are excluded from the population served by the respective managed care entities. These children have unusually high service needs, including hospitalization and in-home medical care. Moreover, in Kansas, each lead agency can now exclude a designated number of referred cases—ranging from 24 to 35—for which it has 60 days to decide whether a referral will be served outside the case rate on a fee-for-service basis. This change was made to ease financial problems lead agencies experienced while serving children with high service needs within the contract case rate, which was initially developed without data on actual service costs. Financial risk is also associated with uncertainty about the size of the population to be served. One mechanism to limit this type of risk is for the public agency to guarantee the provider a minimum or maximum number of cases. According to our survey of ongoing managed care initiatives, seven initiatives have such measures in place. For example, in the Kids Oneida initiative in Oneida County, New York, the lead agency must accept all county referrals as long as its capacity has not been reached. Setting capitated payment rates for child welfare services was a new exercise for states and local agencies. Consequently, they used a variety of methods. In some initiatives, the public agency used the competitive bidding process as the forum to negotiate a contract rate for services. In Kansas' foster care managed care effort, for example, each of the five regional lead agency contracts has its own case rate. The state provided prospective bidders with a dollar amount that was based on previous expenditures for purchased services, staff, other operating expenditures, and services funded through other parts of the agency, such as mental health and child care. Equipped with this information, private entities vying for the lead agency contracts proposed their own annual case rates covering a 4-year period. These rates then became the contract case rates for the successful bidders. In Massachusetts' Commonworks project, the state negotiated a 3-year contract rate with the managed care company that was awarded the ASO contract; the rate was based on the annual projected operating costs for the specified administrative services. In other initiatives, the public agency used historical data to set the contract rate and did not, as a rule, negotiate the rate with the contractors. For example, in Florida's managed care initiative in Lake and Sumter Counties, the lead agency's case rate was developed by examining the actual cost for the care of children sheltered in Lake County over a 2-year period, including multiple cost categories, such as out-of-home care, administrative services, and therapeutic services for children and families entering care.
Costs were also categorized according to whether children were entering foster care or receiving only protective services, and bundled into a single 2-year case rate. In Massachusetts, the state's contract with the lead agency in each of six regions across the state has two components. One is a negotiated annual fee for the lead agency to coordinate and oversee its service provider network and is intended to cover staff costs and training. The second component is a case rate for direct services that is the same for each lead agency and, for the first year, was partly based on an actuarial model using 1995 expenditure and utilization data for youth in group care and also included an educational subsidy. In other initiatives, the state allocated a portion of its child welfare services budget directly to the managed care entity. Such was the case in Colorado, where the state allocates child welfare funds to counties through a block grant. In Boulder, Jefferson, and Mesa Counties, the block grant, in effect, created the capitated contract rate for each county's public managed care model. For the Sarasota County managed care initiative in Florida, the state carved out the county's share of a multicounty district's annual operating budget, generally using the same methodologies it uses to allocate budget line items to districts. The allocation method is based on caseload size; actual dollar amounts from about 14 different budget components, including out-of-home care, sexual abuse treatment, and independent living services; and operating expenses and salaries associated with the cost of 37.5 public positions. Once the county's budget allocation was determined, the state withheld about 10 percent of the total to fund an evaluation and seven new public positions in the county to perform quality assurance and title IV-E eligibility determination functions. The remaining budget dollars became the lead agency's contract rate. Regardless of the method for setting rates, the public agency often expects the contractor to supplement the contract rate with funds from other sources, especially federally reimbursable programs such as Medicaid. For example, in Massachusetts' Commonworks program, the lead agencies are expected to tap Medicaid and state education sources, as well as private funding sources, to fund related services. The state also encouraged the lead agencies to enter into public-private partnership agreements to help subsidize the cost of coordinating their provider networks because the state will reimburse no more than 75 percent of these costs, or a maximum of $100,000. For Wisconsin's Safety Services Program in Milwaukee County, the state expects each of the four lead agencies to supplement state funding by referring clients who are eligible under Medicaid to their HMO for applicable services and securing Medicaid funds for targeted case management to partially support staff costs. The child welfare system relies on many different programs from multiple agencies with many funding streams. Because of restrictions on eligibility and prohibitions on certain uses of funds, public and private agencies often face problems accessing needed services. To reduce service access problems sometimes associated with these categorical programs and increase flexibility in the use of funds, agencies in some locations have agreed to pool or blend funds from various sources for their managed care initiative.
In four states where counties administer the child welfare system, the state distributes child welfare funds to county child welfare agencies through a capitation method that fixes the level of funding, for example, a block grant. "Blockgranting" state funds in this way loosens the restrictions on the use of the funds and thus increases counties' flexibility. In Colorado, for example, the state's capped allocation to counties now typically includes categorical child welfare budget line items for out-of-home placement, subsidized adoptions, and child care and county administrative costs related to child welfare services. Boulder County's managed care initiative—serving adolescents at imminent risk of placement in group or residential care referred from the child welfare, mental health, and juvenile justice systems—further pooled its capped child welfare allocation with funding from the mental health agency and youth corrections agency to finance its IMPACT initiative. Funding from both child welfare and mental health funding streams is often pooled, especially for the hard-to-serve and high-cost children. Under its public managed care model, the Wraparound Milwaukee program in Wisconsin blended Medicaid, child welfare, and federal grant funds into a single buying pool to purchase individualized, family-based services to help children placed in residential treatment centers return to their family, a foster home, or other living arrangement in the community. The county child welfare agency agreed to pay a monthly case rate of $3,300 per child out of its institutional placement budget. In addition, the state health care financing agency agreed to pay a monthly case rate of $1,459 to pay for all mental health and substance abuse services for Medicaid-eligible children, who make up about 80 percent of the initiative's clientele. Along with a federal grant from SAMHSA's Center for Mental Health Services, these child welfare and Medicaid dollars form the funding pool from which the public managed care entity pays the cost of residential treatment, group and foster care, and all other services, except physical health care, which is outside the capitated contract rate and still obtained by families on a fee-for-service basis.
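The blended pool in the Wraparound Milwaukee example can be sketched in the same illustrative spirit. The two monthly case rates and the 80-percent Medicaid share come from the report; the enrollment count and grant figure are invented, since the report does not give them.

    # Hypothetical sketch of a Wraparound Milwaukee-style blended funding pool.
    # The two monthly case rates and the 80-percent Medicaid share come from
    # the report; the enrollment count and grant contribution are invented.

    CHILD_WELFARE_RATE = 3_300   # monthly case rate from the placement budget
    MEDICAID_RATE = 1_459        # monthly case rate for Medicaid-eligible children

    def monthly_pool(enrolled: int, medicaid_share: float = 0.80,
                     grant_contribution: float = 50_000) -> float:
        """Blend the separate categorical streams into one flexible purchasing
        pool; physical health care stays outside it on a fee-for-service basis."""
        child_welfare_dollars = enrolled * CHILD_WELFARE_RATE
        medicaid_dollars = round(enrolled * medicaid_share) * MEDICAID_RATE
        return child_welfare_dollars + medicaid_dollars + grant_contribution

    # 200 enrolled children yield a single pool of about $943,000 a month that
    # can buy residential treatment, group and foster care, or other services.
    print(monthly_pool(200))  # 943440

The point of the pooling lies in the single return value: once merged, the dollars lose their categorical restrictions and can follow the child rather than the program.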
Managed care contracts usually require the managed care entity to provide, create, or purchase a wide range of services to meet clients' needs. If not providing services directly itself, the primary contractor may develop and subcontract with a network of service providers to make available all the services referred clients might need. Under managed care, public agencies have increasingly privatized case management tasks, such as treatment planning and case monitoring, and expect the managed care entity to serve as the single point of entry to the service system. States and localities have retained certain functions that officials believe are critical to meeting their legal responsibility for the safety of children in the child welfare system. Although most of the 27 initiatives have transferred case management responsibilities to private entities, public agencies maintain a large presence during strategic points in a child's service history. These include the points at which a child enters and exits the child welfare system and when key decisions are made about changes in a child's service plan. Managed care initiatives are trying to better coordinate care and ensure access to a wide range of services to address concerns about service fragmentation and gaps that have historically plagued the child welfare system. Where public agencies have contracted out the care coordination function to a lead agency or MCO, that primary contractor assumes responsibility for ensuring the availability and provision of all contracted services as well as any additional services that may be necessary to meet individual client needs. If a child or family has a unique service need that traditional services cannot meet, the primary contractor must develop new strategies to meet it. Whether using existing services or creating new ones, the primary contractor—regardless of whether it is a public agency in the public model or a private organization in the lead agency model—can either deliver services itself or contract with a network of service providers. The creation of an organized and coordinated network of service providers is the foundation of managed care initiatives. These configurations of service providers have been formed in one of two ways—through a self-initiated process prior to the formation of the managed care project or as a required component of the initiative itself. The self-initiated process often begins with a group of community providers establishing itself as a service coalition or consortium in anticipation of state or county reform. By either becoming an MCO itself or designating one of its members as the lead agency to perform management services, a provider group assumes responsibility for coordinating the care of a defined population of children and families. In The Dawn Project in Marion County, Indiana, four community mental health centers in the county formed a new nonprofit MCO. The MCO contracts with case managers and service providers who collectively provide or develop the necessary services for children and youth with serious emotional disturbances who are already in or at risk of out-of-home placement. In the lead agency model being used in Sarasota County, Florida, all the major local vendors that had traditionally provided contracted child welfare services, such as parenting classes, therapy, in-home visitation, family support services, therapeutic foster care, and residential services, formed a coalition; one member—the YMCA—serves as the lead agency, and the coalition operates as the provider network. In still other initiatives, the primary contractor had to build a network of service providers as a condition of its managed care contract with the public agency. In both Massachusetts' Commonworks program and Tennessee's Continuum of Care contracts, for example, the primary contractors are responsible for forming a network of either existing or new providers and, through this network, providing all the appropriate services to meet the needs of clients accessing the network. Under child welfare managed care arrangements, contracted private providers are attempting to develop coordinated networks of service providers and are assuming more case management responsibilities. While some private entities were already performing some case management functions before these initiatives were implemented, in 21 of the 27 ongoing managed care initiatives, the public agency has shifted more case management responsibilities to private contractors.
In the lead agency and MCO models, in particular, the private sector is now performing such case management tasks as developing the treatment plan that identifies the treatment goals and needed services, as well as arranging for the provision of these services either by the case manager's own organization or through the provider network. Case managers also track clients' progress toward achieving their treatment goals, assess the appropriateness of each service, and update the treatment plan, as needed. In an effort to better match services with client needs, many of the ongoing initiatives use a team approach to case management to avoid the duplication, time delays, and fragmentation that often result when different service systems are not involved in the treatment planning and decision-making process. In some initiatives, the treatment team consists of those individuals who are regularly in direct contact with the child, including the case manager, therapist, parents or guardians, school officials, and other service providers, depending on the child's problems. Together, the treatment team develops an individualized service plan and reviews, revises, and implements any necessary changes. For example, in Sarasota County's managed care initiative in Florida, where the lead agency subcontracts with two coalition service providers to establish case management teams, each subcontractor provides a case manager and a therapist, who bring in the foster parents, and, when appropriate, a guardian ad litem, to work exclusively with families and complete assessments, case plans, concurrent planning, and case reviews. Having both a case manager and therapist on the team means that mental health services are routinely made available to each family and child in care. In other initiatives, case management teams are interdisciplinary with representatives from multiple agencies to better address the varied needs of hard-to-serve children who often have long histories of involvement with multiple agencies. For example, the managed care initiative in Boulder County, Colorado, is an interagency collaboration under the public managed care model. The local child welfare, mental health, and juvenile justice agencies formed a new managed care entity to perform joint gatekeeping, assessment, placement case planning, concurrent utilization review, and quality assurance functions for the targeted population of adolescents in need of group or residential placement. Youth in need of such a placement are referred to an interagency team comprising public agency administrators from child welfare, community corrections, health, community services, mental health, youth corrections, and probation, and facilitated by staff from the managed care entity. This team makes the final decision on whether and where the referred adolescent should be placed and reviews the child's progress every 90 days. The day-to-day case management responsibilities are handled by the managed care entity's intensive case managers, who monitor and assess the adolescent's movement toward placement goals in conjunction with the county child welfare caseworker. Although public agencies are privatizing the management and coordination of care for children who are victims or at risk of abuse and neglect, they continue to retain certain tasks they believe are critical to meeting their legal responsibility for the safety and well-being of children in the child welfare system.
In all 27 initiatives, the public agency continues to conduct all CPS functions related to investigating reports of child abuse and neglect and recommending to the courts whether a child needs to enter the child welfare system for protective or any other services. A child enters the managed care system on the basis of a referral from the public child welfare agency to the managed care entity. To ensure that managed care providers do not deny access, some contracts explicitly state that the primary contractor can neither reject a referral nor eject an accepted case. The public agency also maintains its presence by participating in the primary contractor’s treatment planning process and requiring approval when the contractor decides to make changes in the level of care, such as moving a child from residential care to family foster care. In several initiatives, public agency staff are members of the case management team that reviews, revises, and implements the child’s treatment plan. The public worker’s role on the treatment team in some initiatives is to review and approve the case manager’s service plan for the child and family, any significant deviations from the plan, and decisions calling for discharge from care or transfer to a different level of foster care placement or to an in-home service provider. For example, in Sarasota County’s managed care initiative in Florida, a public worker attends all case review meetings where case plans may change significantly, including the treatment goal and decisions to discharge a case or pursue termination of parental rights to free a child for adoption. This worker does not participate in case plan choices or recommendations, but is there to listen, observe, and intervene when he or she perceives a child’s safety may be jeopardized because of the case review decisions. While primary contractors are generally free to subcontract any of the contracted services, the public agency can exercise some control over these subcontracts. For example, in Massachusetts’ Commonworks program, the state requires that if a lead agency also administers a residential care program, then at least 75 percent of placement services must be subcontracted out to avoid conflicts of interest regarding placement decisions among direct service providers in the network and to help maintain diverse programming and placement options. In several initiatives, the public agency controls which agencies are in the provider network. In the TrueCare Partnership initiative in Hamilton County, Ohio, for example, the MCO has contracted and negotiated reimbursement rates with local service providers to carry out its service-delivery responsibilities; however, the county selected the first group of providers through a competitive process and must approve any elimination or addition to the provider network. Public agencies are developing strategies to protect against the inherent incentive in managed care to withhold or provide reduced services. These strategies attempt to hold managed care providers accountable for achieving the outcomes public agencies are pursuing by setting performance standards and linking financial rewards and penalties to outcomes. In addition, public agencies plan to evaluate the effectiveness of their initiatives to determine whether managed care is accomplishing the desired objectives and resulting in efficiencies. Public agencies have instituted performance standards as a means of holding their contracted service providers accountable for outcomes. 
In 23 of the 27 managed care initiatives about which we obtained detailed information, the public agency requires service providers to meet specific performance standards. In a majority of these 23 locations, the public agencies were using performance standards in contracts before implementing managed care initiatives. However, in most cases, these agencies are now incorporating performance standards into more contracts. In addition, service providers in eight ongoing initiatives are being held accountable for their performance for the first time. Public agencies are using multiple performance standards to balance the twin goals of controlling costs while ensuring quality of care and the overall safety of children. Taken together, a set of goals and objectives with related performance standards can help ensure desired outcomes are achieved. While some standards are based on system outcomes, such as reducing overall costs of placements or achieving permanency more quickly, other standards are more client-specific and appropriate for every child, such as ensuring that immunization schedules are met. Moreover, some standards address cost efficiencies and the savings associated with speedier exits from foster care or transfers to less costly but appropriate levels of care. For example, if a goal is to achieve cost efficiencies, the performance standard may call for decreased residential care costs shown as a percentage less than an established baseline amount. Or, if an objective is to ensure that children are reunited with their families in a timely manner, a performance standard may require that a percentage of children in out-of-home care be returned to their families within a specified period of time. Other standards focus on the quality of care, which is often measured by whether children remain in safe, stable settings and their well-being is sustained and nurtured. For example, in addressing children's safety, the performance standard could prescribe a percentage of all children who are returned to their families with no findings of maltreatment for a specified period of time. Furthermore, an indicator of placement stability in out-of-home care could be a standard capping the number of placement changes that a specified percentage of children may experience. In addition, promoting families' well-being could be reflected in a standard requiring that a certain percentage of families show improvement in parenting skills and capacity. Public agencies in most ongoing managed care initiatives are incorporating multiple performance standards for their service providers, as illustrated for Kansas' foster care managed care initiative in table 3.1. Some initiatives offer bonuses as a financial incentive for the managed care entity to meet performance standards. For achieving cost savings or successfully returning a child to the family, for example, the contractor can earn additional funds in several ways. In Massachusetts' Commonworks initiative, the lead agency receives a bonus payment of $1,000 for each youth who has been discharged for 6 months and not readmitted to the program during that period.
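How such an outcome-linked payment translates into dollars can be shown with a small sketch patterned on the Massachusetts aftercare bonus just described; the discharge records and the function are hypothetical.

    # Hypothetical sketch of an outcome-linked bonus, patterned on the
    # Massachusetts Commonworks example above ($1,000 for each youth discharged
    # for 6 months and not readmitted). The records and function are invented.

    BONUS_PER_SUSTAINED_DISCHARGE = 1_000
    REQUIRED_MONTHS_OUT = 6

    def discharge_bonus(discharge_records: list) -> int:
        """Pay the bonus only for discharges that have lasted the required
        period without the youth being readmitted to the program."""
        earned = 0
        for record in discharge_records:
            sustained = record["months_since_discharge"] >= REQUIRED_MONTHS_OUT
            if sustained and not record["readmitted"]:
                earned += BONUS_PER_SUSTAINED_DISCHARGE
        return earned

    records = [
        {"months_since_discharge": 8, "readmitted": False},  # qualifies
        {"months_since_discharge": 4, "readmitted": False},  # too recent to count
        {"months_since_discharge": 7, "readmitted": True},   # readmitted: no bonus
    ]
    print(discharge_bonus(records))  # 1000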
In the TrueCare Partnership initiative in Hamilton County, Ohio, the MCO can earn two kinds of performance bonuses. First, the MCO can earn as much as about $100,000 per year in bonuses for meeting 20 individual performance indicators related to (1) service outcomes for referred families, such as improved functioning, timely receipt of behavioral health services, and success in ensuring children's safety and reducing risk of harm; and (2) management services, including maintenance of a competent provider network, revenues maximized, and client satisfaction with network providers' services. Second, the MCO can earn an additional bonus up to a maximum of $33,000 a year by meeting all its performance standards and reducing costs by more than 15 percent. Similarly, some contractors can be penalized for poor performance. Continuing with the Hamilton County example, the MCO can incur financial penalties totaling about $63,000 if it fails to meet the various performance indicators related to service outcomes for families and management services. In several initiatives, another disincentive to poor treatment planning and discharging children from care prematurely is to hold contractors financially responsible for those children who must reenter care within a specified period of time. For example, in Kansas' foster care managed care initiative, the lead agency must pay for all costs if a child who was returned to the family reenters foster care within 12 months of the discharge. Hence, while the lead agency initially receives a fixed case rate for each referred child, the state provides no additional funds beyond this amount if the child must once again be removed from the family during the year following discharge. Another method public child welfare agencies use to help ensure that managed care entities do not inappropriately limit the amount or types of necessary service to children and families is to restrict profit levels or require that cost savings be reinvested in services. Ongoing managed care initiatives limit contractors' ability to profit at the public's expense in several different ways. First, in Massachusetts, state regulations limit surplus revenues to 5 percent for the lead agencies and their network providers in the Commonworks managed care initiative. Second, contract language can limit a provider's gains with a risk-sharing mechanism similar to the one that limits losses. In the Kansas foster care initiative example, where the lead agency was financially responsible for costs up to 110 percent of the contract rate in the first year, the contract also allowed a 10-percent margin for retaining any cost savings. Under this arrangement, the lead agency could keep savings of up to 10 percent of the contract rate, that is, any funds remaining after it had spent at least 90 percent of the rate; any additional savings reverted to the state. Such a limitation on both losses and gains is sometimes referred to as a "risk-reward corridor." Other managed care initiatives include contract provisions requiring providers to reinvest "profits" in the program. For example, in Sarasota County's managed care initiative in Florida, the lead agency must reinvest any realized savings in primary prevention programs or enhanced child welfare services. In Boulder County's public managed care initiative in Colorado, the state stipulates that the county can use up to 5 percent of its capped child welfare allocation to reduce its share of the state-required local match, but any additional savings must be reinvested in additional child welfare services.
Under this arrangement, the county plans to reinvest any savings in innovative community-based services to shorten or eliminate the need for residential placement; reward and enhance selected providers’ capabilities to serve adolescents effectively; and develop and maintain a countywide management information system that will integrate clinical, fiscal, and outcome data on children in placement. Finally, public agencies are concerned about the critical time immediately following a child’s discharge from foster care. To better safeguard against reentry into care, some locations are offering providers incentives—in the form of additional funds or as a supplement to the contract rate—to deliver aftercare services to recently discharged children and their caregivers. For example, in the Commonworks program in Massachusetts, the state has incorporated funding specifically for aftercare services into the lead agencies’ case rate during the first year so that, once a child is discharged from foster care, the lead agency’s case rate changes from $4,000 to $400 per month for up to 6 months. External or independent reviews will help determine whether system reforms under managed care arrangements are effective, accomplishing desired objectives, and resulting in efficiencies. Most of the 27 initiatives about which we obtained detailed information intend to collect information to assess the time it takes to achieve permanency goals as well as the cost of providing services under managed care. In addition to these efforts, many initiatives include an evaluation component that will examine project outcomes, performance quality, and cost efficiency in various ways. Some initiatives—for example, both Massachusetts’ Commonworks program and Alameda County’s Project Destiny initiative in California—will conduct independent longitudinal evaluations. A systemwide evaluation is planned in Kansas, where the state has contracted for a 4-year external review of its entire child welfare system, including its three statewide managed care initiatives and services provided by both public and private employees. In some initiatives, public child welfare agencies plan internal evaluations of their managed care efforts. In Tompkins County, New York, the county child welfare agency will conduct an annual program review of its Youth Advocate Program and the lead agency’s services. The objective is to determine the extent to which client milestones and targets are achieved, and to use the program review results to help modify program goals and performance standards. Other initiatives will be evaluated by the managed care entities themselves, as in Wisconsin’s Safety Services Program in Milwaukee County, where the contracted lead agency is responsible for designing and implementing a 2-year plan, subject to the state’s approval, to evaluate program effectiveness and the quality of the services delivered. A final approach is to measure the extent to which foster and biological parents as well as older children are satisfied with the services they receive under managed care arrangements. As part of either an independent evaluation, performance standards, or ongoing quality assurance monitoring, collecting information directly from clients will help identify outcomes related to managed care’s effect on children and families. 
For example, in Boulder County’s managed care initiative in Colorado, client satisfaction data will be collected from adolescents and families served—through focus groups of those receiving services under the initiative—as a component of the overall quality assurance plan. This information will add qualitative texture to the quantitative outcomes and assessment data. Public officials considering managed care for their child welfare system face three difficult challenges. First, when implementing managed care, public agencies have found that they need to accomplish a number of tasks, of which developing a capitated, prospective payment system is most crucial. Second, client service and outcome data are critical to setting adequate payment rates and monitoring both client and provider outcomes. Developing the management information systems needed to store and retrieve these data represents a difficult task for public agencies. Third, managed care requires both public and private agencies to assume new roles and responsibilities. Staff from both sectors must alter long-standing practices and develop new skills. Despite these challenges, public officials are encouraged by early—though limited—positive results. Before managed care arrangements can be implemented, both public and private agencies have found they need to accomplish a number of start-up tasks. First and foremost, public agencies have sought solutions to the fiscal challenges of developing a prospective, capitated payment system when the major federal source of support for child welfare services— particularly foster care—is the service reimbursement method of title IV-E. Second, in developing their managed care initiatives, public agencies have brought together key participants in the system—many of whom have little experience in such joint efforts. Third, because applying managed care principles to child welfare is new, public agencies have developed different strategies to build expertise in this area. Finally, private providers participating in new initiatives need to anticipate significant start-up costs that may not be covered in their contracts. In a managed care environment, the use of prospective, fixed-payment arrangements between public agencies and private service providers can be difficult and presents states with fiscal challenges because the federal government retrospectively reimburses states for many child welfare services. The managed care environment—in which public agencies pay service providers in advance of services but obtain reimbursement from federal title IV-E funds only after the services have been delivered—strains public agencies’ ability to maintain an adequate cash flow. In some states as well, state law mirrors federal law and prohibits advanced payments from the state’s funding category for out-of-home placements. States have found ways, however, to address this problem. Some states, for example, initially make nonfederal funds available for advance payments to managed care entities. In Massachusetts, the state advances general revenue dollars to the Commonworks’ lead agencies and later replaces the advances with reimbursements from the federal foster care and Medicaid programs. Public agency officials also expressed concern that federal prohibitions against the use of title IV-E funding for services other than out-of-home care may increase the state’s liability for funding a greater share of capitated contracts. 
When a child returns home, federal reimbursement for foster care costs ceases; however, the child and family may continue receiving unreimbursable in-home services. As managed care entities provide aftercare services and become more successful at returning and keeping children at home, the state's portion of the contract rate will increase as the federal share decreases. In this scenario, the state will also realize savings in its out-of-home costs; however, these savings may be more than offset by the state's obligation to continue paying contractors for in-home services under the fixed rate. This was the case in the Lake and Sumter County managed care initiative in Florida, where the state was paying the lead agency a case rate of about $15,000 over a 2-year period for each child entering foster care. The state based this rate on a daily cost of about $21 per child and expected to finance this rate, in part, by submitting claims for federal title IV-E foster care reimbursement. However, the lead agency had the flexibility to use its case rate dollars to fund treatment and in-home services, and sometimes returned children home or completed successful adoptions in less than 2 years; the state was then left paying more of the per diem amounts with its own funds and could no longer claim federal title IV-E reimbursement because the children were no longer in out-of-home care. Finding this financial risk unacceptable, the state abandoned the case rate after a year in favor of a capitated rate for the entire caseload and not for each referral. Of the 13 states where initiatives are currently being implemented, HHS has waived the categorical funding restriction in title IV-E for one state. Ohio secured a federal waiver to receive a quarterly block grant of title IV-E funds that can be spent on in-home services, such as home-based therapy and other community-based support services. Fifteen counties volunteered to participate in this demonstration, including Hamilton County, where the county has contracted with an MCO to manage care for children and families in need of therapeutic, group, and residential care. A few of the 10 states that have received an HHS waiver of certain title IV-E restrictions plan to use it in part to implement managed care initiatives. Furthermore, other states with ongoing managed care initiatives, including Kansas and Florida, are now seeking similar relief from title IV-E restrictions and have submitted waiver proposals to HHS. Once a financing system is established, agency officials have found that adjustments are often necessary as they gain experience with managed care. For the second year of Kansas' foster care managed care initiative, for example, the state modified the lead agencies' risk-sharing mechanism to offset an increase in the lead agencies' case rate. Unlike in the first year of the contract, lead agencies' financial liability is no longer limited by a 10-percent margin; instead, they must pay all costs above the case rate. In addition, each lead agency can now exclude a designated number of referred cases—averaging about 3 percent of each lead agency's total caseload—from the case rate and bill the state for those cases on the more traditional fee-for-service basis instead. According to state officials, these changes were necessary because of potential cash flow problems associated with the risk-sharing mechanism and the realization that the cost of serving some children and families was higher than the initial case rate anticipated.
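A worked sketch helps show the title IV-E arithmetic behind the Lake and Sumter County problem described above. The $21 daily cost and the 2-year case rate horizon come from the report; the 50-percent federal match and the lengths of stay are illustrative assumptions only.

    # Worked sketch of the title IV-E exposure in the Lake and Sumter County
    # example. The $21 daily cost and 2-year horizon come from the report; the
    # 50-percent federal match and the stays below are assumptions.

    DAILY_COST = 21
    CASE_RATE_DAYS = 730            # the case rate was built on a 2-year horizon
    ASSUMED_FEDERAL_MATCH = 0.50    # hypothetical IV-E reimbursement share

    def state_outlay(days_in_foster_care: int) -> float:
        """The state owes the lead agency the full case rate either way, but
        can claim IV-E reimbursement only for days the child is in out-of-home
        care."""
        case_rate = DAILY_COST * CASE_RATE_DAYS        # roughly the $15,000 cited
        reimbursable = DAILY_COST * min(days_in_foster_care, CASE_RATE_DAYS)
        return case_rate - reimbursable * ASSUMED_FEDERAL_MATCH

    print(state_outlay(730))  # child in care the full 2 years: state pays 7,665
    print(state_outlay(180))  # early return home: state pays 13,440 of the same rate

The faster the lead agency achieved permanency, the larger the state's unmatched share became, which is precisely the exposure that led Florida to abandon the per-referral case rate.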
The payment method was changed altogether in the Lake and Sumter County managed care initiative in Florida, although the rate is still fixed. Because the state found the case rate too costly when the number of children requiring a foster care placement grew at a steeper rate than anticipated, the state changed the payment method to a capitated rate—that is, a fixed fee that is no longer linked to each child the state refers to the lead agency but instead covers the estimated total number of children residing in the two counties who may require foster care services. The child welfare system includes many different individuals and groups—such as public and private agencies, courts, community organizations, child advocates, and foster parents—all with different roles and perspectives. Public agencies have found that involving these key participants to build consensus concerning program design issues is an important step in developing managed care initiatives. While public agencies took steps to involve key stakeholders when the managed care initiatives were being developed, officials acknowledged that they could have done better in this regard. In hindsight, they agreed that more inclusive, early involvement would have helped address misconceptions and reduce tensions surrounding managed care and would have facilitated program implementation by ensuring that stakeholders were informed about and supportive of the planned system reform. For example, Boulder County’s IMPACT initiative in Colorado is built on a premise of interagency collaboration among the various public agencies that serve the targeted population of adolescents in need of group care or residential treatment. Although the directors from each of these agencies are stakeholders in the public managed care entity, setting broad policy and procedures, caseworkers were not involved in the initiative’s development, resulting in some duplication of efforts between agency caseworkers and the new intensive case managers. In Sarasota County, Florida, on the other hand, the community-based providers collectively designed the service-delivery model and actively supported the state legislation that authorized the initiative, without the involvement of the state child welfare agency. Hence, according to officials, the state agency was not initially prepared to implement the state legislation and experienced great difficulty in resolving with the lead agency such contract issues as the contract rate, data reporting requirements, and public workers’ role in overseeing children under the lead agency’s care. However, at the state agency’s suggestion, the lead agency convened a stakeholders’ group, including community leaders and business representatives, to provide oversight and advice. This action has resulted in increased community involvement and support for the initiative through donated space and equipment. The courts also play a critical role in determining outcomes for children in the child welfare system, yet they are often a forgotten player in reform efforts. As independent judicial bodies, the courts may view themselves as outside the child welfare service-delivery system and not necessarily bound by the same policies or priorities. However, children in out-of-home care often cannot be transferred to a different level of care or discharged from foster care—both key strategies for controlling costs under child welfare managed care—without the court’s approval.
While public agencies have involved the courts as they developed their initiatives, the extent of judicial involvement has not always been sufficient to guarantee support for system reform efforts. In Kansas, public officials acknowledged the lack of adequate judicial involvement in their foster care managed care initiative. According to officials at one lead agency, the local judge has disagreed with some of their recommendations to discharge children from care and, as a result, children are staying in out-of-home care longer and incurring more costs than the lead agency had projected for its case rate. Because managed care in child welfare is new, public agencies have had to develop strategies to find both information and sources of assistance. Information on managed care can be obtained from national associations or provider organizations, private consultants, or internal resources in other public agencies with experience in managed care. For example, both Massachusetts and Sarasota County, Florida, hired private consultants to help develop various aspects of their managed care initiatives, particularly how to set capitated contract rates. In both Massachusetts and Boulder County, Colorado, public agency staff looked at their state’s experience with managed care in behavioral health care—which serves a similar clientele—to learn more about that system’s capitation and service-delivery arrangements. Anticipating the arrival of managed care, private service providers also sought information about the subject to better position themselves—by developing or becoming part of a provider network, for example—as players in states’ system reform efforts, often with assistance from national organizations, such as the Child Welfare League of America. Public agencies have looked to and adapted their own successful practices and relationships as starting points for their initiatives. With its history of interagency collaboration, especially between the child welfare and mental health agencies, Boulder County in Colorado decided to build on this relationship to launch its IMPACT initiative. The county established a new public managed care entity comprising representatives from the agencies that might be involved in the lives of the target population of adolescents in need of group or residential care, such as child welfare, mental health, youth corrections, health, and probation. In Sarasota County, Florida, community-based service providers were active in shaping the managed care initiative, having had a tradition of working together. They formed their own coalition, comprising all major service vendors in the county. Thus, a provider network with a designated lead agency was already in place when state legislation authorized a limited number of community-based pilots. Finally, in Massachusetts, a strong and established service provider network existed in the predecessor to the Commonworks program. The state, therefore, decided to build upon this existing framework and also to contract with an ASO to be responsible for standardizing all operating procedures. Participating in managed care initiatives is a new experience for most community-based service providers. Service providers we interviewed told us they are eager to participate in this new phenomenon because they believe managed care is part of child welfare’s future and do not want to be excluded from initial efforts.
As financing for services becomes fixed under new payment arrangements, one critical issue facing community-based providers is the potential financial strain—both from start-up and ongoing operational costs—involved in their new efforts. As they assume new roles as managed care entities, providers often need to hire additional management and frontline personnel as well as purchase buildings, equipment, and other capital to manage both their provider networks and the new caseloads of children and families. However, public agencies do not always make start-up funds available in the new contracts. For example, neither the lead agency contracts in Kansas’ foster care managed care initiative nor the one in Sarasota County, Florida, included start-up moneys. As a result, dollars that providers spent on start-up acquisitions could not be used to fund a risk pool, and some providers had to seek additional in-kind donations from community organizations. Providing start-up funds can help alleviate potential financial pressure on new managed care entities and enable them to focus more attention on serving and coordinating the care of children and families. This was the case in Alameda County’s small Project Destiny initiative in California, where the county awarded a separate contract to the lead agency, providing funds to support start-up costs, such as those associated with developing the consortium of care providers, training, and any other unanticipated costs. While public agency officials admit that they expect their providers to find other sources of funding, such as Medicaid and charitable contributions, to support their managed care initiatives, some contractors may have difficulty attracting financial support from their usual contributors because of misconceptions about financial arrangements under managed care. In Kansas, for example, the public agency expected foster care contractors to supplement the case rate with their usual in-kind contributions from philanthropic organizations. However, according to one provider, these organizations were at first reluctant to continue their monetary contributions because they had incorrectly assumed that the new managed care contract provided sufficient funds to cover the lead agency’s service costs. Community-based service providers, who formerly contracted with the public agency and now find themselves part of a provider network under subcontract with a lead agency or MCO, are not immune from financial strains under managed care either. In particular, small providers we interviewed expressed concern about their financial viability in a managed care environment when client referral patterns fluctuate and payments are capitated. For example, in Massachusetts’ Commonworks program, lead agencies have developed their own provider network and, in some regions, have expanded the number of providers to make available the full array of services needed by the adolescents in their care. As a result, lead agencies have not utilized the services of some providers as often as the state agency had in the past; faced with empty beds under the fee-for-service payment method still in place for subcontractors, these underutilized providers lost revenues.
In addition, without the capacity to serve a substantial volume of clients as larger providers might, several small community-based providers expressed reservations about the lead agencies’ plans to subcapitate payments and transfer financial risk to providers, especially given the high and costly service needs of Commonworks’ target population of adolescents in need of group or residential care. For managed care initiatives to effectively develop and adjust capitated payment rates, track service use, and monitor program and child outcomes, public agencies realize that client-level data on services and outcomes are needed. However, public child welfare officials believe that developing management information systems is the most difficult task they face. The successful ongoing operation of managed care arrangements is linked to the extent to which public agencies have timely and accurate information on services and outcomes. These data form the basis for two important activities central to managed care. First, as state child welfare agencies move from a process-monitoring environment to a performance-based approach, information on client outcomes is needed to develop and revise performance standards. Second, payment rates must reflect the accurate overall costs for providing services to children and families in the child welfare system. Aggregate client-level information on service use and costs is necessary to establish capitated rates. Although most of the initiatives use performance standards in their managed care contracts, performance-based management in general is a new focus for many child welfare agencies. Setting appropriate standards and determining how to measure performance against those standards can be a daunting task for some public agencies. Massachusetts’ Commonworks program, for example, identified performance goals for treatment planning, recidivism, family functioning, education, and independent living but did not evaluate providers on these outcomes for the first year because the outcome measures had yet to be developed. In conjunction with the state agency, the new ASO is expected to develop the measures and collect baseline information for comparison the second year. As public agencies gain more experience with managed care and develop the capacity to collect better information, public officials recognize that adjustments to existing performance standards and payment rates will be necessary. For example, in Kansas’ foster care managed care initiative, the state set initial performance levels not on the basis of past program performance but on what public agency officials believed could reasonably be expected of the new lead agencies. Realizing that these expectations might be unrealistic, the state chose not to penalize the lead agencies for failing to meet performance standards. Indeed, when the lead agencies fell short of first-year goals, such as reuniting families in a timely manner, the state lowered the standard the following year by narrowing the gap between its original expectation and contractors’ actual performance. As we described earlier, establishing payment rates for providers also requires public child welfare agencies to continually revisit and adjust price levels and, in some cases, risk-sharing provisions as more current information is collected. The absence of quality client and service-cost information can, potentially, delay the implementation of capitated rates and create financial strains for both public and private agencies. 
In Massachusetts’ Commonworks program, for example, the state delayed for a year applying the case rate and transferring financial risk to the lead agencies. Instead, the state opted to collect and analyze baseline cost and service-use data as well as minimize financial pressures on lead agencies so they could focus on service-delivery issues during the first year. Financial strain was a problem for the lead agencies in Kansas’ foster care managed care initiative because the case rate did not reflect the true cost of serving children with high service needs. On the basis of accumulated first-year cost data, the state modified the risk-sharing formula by increasing lead agencies’ case rates in the second year, eliminating the 10-percent risk-reward corridor, and permitting lead agencies to charge the state actual service costs for a limited number of cases. Program officials responsible for the initiatives we surveyed view the development of a management information system as the most difficult task they face in their move to managed care. We found that public agencies are implementing their managed care initiatives without appropriate information systems in place. In many instances, providers and public agencies are working with multiple and incompatible information systems. For example, in the managed care initiative in Sarasota County, Florida, the lead agency is directly connected to the state’s two child welfare client and service information systems for submitting required management reports, and has its own internal system that is networked with subcontracted service providers to input and track client-level data. Because these three systems are not integrated, lead agency staff must enter duplicate information into each system and physically locate the three computer terminals side-by-side to ensure consistent data. In contrast, Kansas implemented its foster care initiative without an information system in place and relies on handwritten reports submitted by the lead agencies to generate automated reports for the state to manage the initiative. Despite their limits, according to state officials, these management reports contain more information about program performance than was previously available. Public agencies are approaching the development of their information systems in a variety of ways that reflect the complexity of new systems and in-house expertise. One approach is to purchase a custom-designed system. For the TrueCare Partnership initiative in Hamilton County, Ohio, for example, one major component of the MCO’s contract is to develop a comprehensive management information system for the initiative’s public partners—the child welfare, mental health, and alcohol and drug agencies—that will integrate client-level data to meet both the public agencies’ and service providers’ information needs. Another approach is for in-house staff to develop the information system. This has been the case in the Lake and Sumter County managed care initiative in Florida, where the lead agency’s information system personnel have developed a database to track all service and placement data regarding program clients to measure outcome achievement. Yet another approach to developing an information system is to adapt an existing system. 
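To make the Kansas risk-sharing change described above concrete, the sketch below contrasts a first-year arrangement in which the lead agency’s loss is capped by a 10-percent corridor with the second-year arrangement in which the agency bears all costs above the case rate. The report does not spell out the exact corridor mechanics, so the simple cap and all dollar figures are illustrative assumptions.

# Hedged sketch of the Kansas risk-sharing change. The exact corridor
# mechanics are not specified in this report; this assumes a simple
# corridor that caps the lead agency's first-year loss at 10 percent of
# its case-rate revenue, with the cap eliminated in the second year.

def agency_loss_year1(case_rate_revenue: float, actual_costs: float) -> float:
    """Loss borne by the lead agency with a 10-percent corridor."""
    overrun = max(0.0, actual_costs - case_rate_revenue)
    return min(overrun, 0.10 * case_rate_revenue)

def agency_loss_year2(case_rate_revenue: float, actual_costs: float) -> float:
    """Loss borne by the lead agency once the corridor is eliminated."""
    return max(0.0, actual_costs - case_rate_revenue)

revenue, costs = 10_000_000.0, 11_500_000.0   # hypothetical figures
print(agency_loss_year1(revenue, costs))      # 1,000,000.0 (loss capped)
print(agency_loss_year2(revenue, costs))      # 1,500,000.0 (full overrun)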
Boulder County’s IMPACT initiative is using a temporary system to track client and aggregate outcomes data until the county can purchase or custom build a more comprehensive, integrated management information system with anticipated project savings within the next year. For the short term, county staff have modified a system originally designed to track adolescents in a previous pilot project that also targeted adolescents in out-of-home care. Under managed care, public agencies are adjusting to new responsibilities while shifting some traditional functions to the private sector. These public agencies now focus their attention more on oversight and monitoring, and have reconfigured staff resources for contract monitoring and quality assurance purposes. Private agencies, assuming many of the responsibilities traditionally held by public agencies, are now faced with learning various state and federal requirements; preparing and monitoring other service provider contracts; and attracting, training, and retaining a larger workforce. Many of the public agencies implementing managed care initiatives have shifted most of the day-to-day casework responsibilities to private contractors, while developing the capacity and expertise to perform system oversight and monitoring activities. To accomplish this change, some locations reduced the number of public caseworkers and created new positions that reflect their new role. Gaining employees’ acceptance of these changes was a difficult task, according to about half the agency officials we surveyed. Comfortable with the traditional service-delivery system and concerned about the safety and well-being of their caseloads of children and families, some workers have resisted the loss of their control over service decisions for clients and must learn new skills and abilities to perform new quality assurance and contract management duties. In some instances, workers were not fully prepared to assume their new duties because public agencies had not taken steps to ensure staff support or to provide training until after implementation was under way. Former state child welfare caseworkers have assumed new positions, performing quality assurance activities and managing the new lead agency contract for the managed care initiative in Sarasota County, Florida. The state replaced 37.5 positions with 7 new positions—filled by public foster care, adoption, and protective services caseworkers whose jobs had been eliminated—to oversee the lead agency contract. Of these seven positions, three are quality assurance workers, responsible for monitoring the contractor’s case management activities to ensure that issues related to the child’s safety are adequately addressed; they accomplish this function by reviewing case files and provider-prepared court paperwork as well as attending case review meetings. A fourth position is for a contract manager, who monitors the lead agency’s compliance with the terms and conditions of the contract. Finally, three public employees perform tasks related to determining children’s eligibility for the federal foster care program and claiming federal reimbursement.
The transition in public workers’ responsibilities has not been easy for either the public employees or provider staff, according to officials, as the public workers received no training for their new positions and focused most of their initial efforts on providing technical assistance and training to the providers’ staff about federal and state documentation and procedural requirements rather than on quality assurance. Kansas’ foster care managed care initiative has changed public employees’ approach to casework. State caseworkers are still responsible for a caseload of children and their families but have shifted their emphasis from day-to-day case management to intake, assessment, and child protection, and now function as service managers who monitor the services provided by the contractors. In addition, the state has altered the structure of its contract management staff. Now, some of the contract managers are Area Contract Specialists physically located in each of the state’s 12 area offices. As the state caseworker’s direct liaison with the lead agency, an Area Contract Specialist receives reports for management and oversight purposes and responds to questions about contract operations. Private service providers under contract in the managed care initiatives we reviewed have assumed many of the responsibilities formerly held by public child welfare agencies and, in some instances, have had to adjust to the rapid growth of staff that accompanied the expansion of providers’ duties. Understanding and monitoring existing federal and state requirements and managing provider networks are among their new duties. As case management functions have shifted to private providers, the providers have taken on new administrative tasks that enable states to continue claiming federal reimbursement for eligible activities now performed by contractors. Some contracts require the service provider to supply the information the public child welfare agency needs to file claims for federal title IV-E reimbursement of the costs associated with feeding and housing an eligible child in out-of-home care, as well as certain administrative costs related to that child’s placement, such as case management and licensing of foster homes. The tasks involved in determining administrative costs can be very time consuming yet are necessary where the public agency is financing the capitated payments with federal title IV-E dollars. Such is the case in Sarasota County’s managed care initiative in Florida. The state had previously established a method for determining administrative costs under its traditional, publicly operated service system, based on the assumption that caseworkers’ activities were reimbursable under title IV-E. However, the lead agency’s service-delivery approach includes a mixture of Medicaid-funded functions—such as clinical therapy—that title IV-E does not cover. To help determine eligible administrative costs for this initiative, the state requires the providers’ staff to perform a time study for 2 weeks each quarter, when individual workers record their various job activities in 15-minute increments. According to lead agency officials, this requirement is a new activity that has unexpectedly reduced staff time available for serving children and families. Another new administrative function for lead agencies is managing their subcontracts with network providers.
Whether changing the nature of existing relationships or developing new ones, lead agencies—whose general experience has been in directly providing services to children and sometimes their families—must develop new capacities and expertise to ensure network providers are qualified; expand, contract, or reconfigure the network, when necessary; and monitor network providers’ performance and compliance with their contract requirements. Lead agencies’ relationships with network providers may become strained when, in the interest of cost efficiency or service quality, referral patterns to individual providers fluctuate or nonperformers are dropped from the network. As private contractors have assumed the lead agency role, the nature of their relationships with other community-based service providers has changed. Where previously providers contracted directly with the public agency, some now find themselves managing a network made up of former competitors. For example, for Kansas’ foster care managed care initiative, many of the community-based service providers were among the 16 bidders for the lead agency contracts, but only 3 of them won contracts. Many of the unsuccessful bidders became network subcontractors, and according to several providers’ staff, the stress of the competitive process left some network providers resentful of the lead agency’s new oversight role. Conversely, where there may have been fierce competition in the past for the public agency’s business, increased collaboration among providers may reduce the uncertainties of a competitive market. This has been the case in the managed care initiative in Sarasota County, Florida, where the major service providers no longer view themselves as competitors, according to providers’ staff, but are now collaborative partners in the self-formed Coalition that is the provider network. In yet another scenario, lead agencies have had to establish new relationships—sometimes straining existing relationships—when new providers were brought into the network. In order not to jeopardize the stability of existing placements when Massachusetts’ Commonworks program was implemented, the state, among other strategies, required the lead agencies to expand their provider network to include those group or residential care providers already serving youth who were transferred to the lead agency’s care. Expanding the network in this way meant purchasing fewer services from other network providers and possible financial jeopardy for some of them, according to providers’ staff, which strained their relationship with the lead agency. Once community-based service providers, regardless of their size, became lead agencies, they found that they needed to expand—sometimes very rapidly—to accommodate their new responsibilities and new caseloads of children and families. For many of these lead agencies, hiring and training a larger workforce amid various other start-up activities became a difficult task. In Kansas’ foster care managed care initiative, for example, the lead agencies’ caseloads more than doubled in a matter of months and they took on multiple areas of major new responsibility: they had to accept all referrals, develop and manage a provider network, manage cases of children that now also include their families, track and report outcomes, and consider the financial risk of the case rate. These responsibilities were combined with the basic business of expansion—hiring, training, and acquiring space and equipment.
In addition, lead agencies had difficulty recruiting and retaining new workers: competition for social workers was tremendous because the state kept most of its social work staff and, therefore, did not provide the pool of former state workers from which lead agencies had expected to hire. While only a few locations are beginning to collect data and report results, preliminary results indicate some cost savings and improvements in the quality of care from the implementation of managed care. Public agency officials responsible for the ongoing initiatives are encouraged by improvements in the amount of services provided, overall service availability, and increased public support for children in the child welfare system. Some initiatives report cost savings resulting from the managed care entity’s success in reducing or averting the need to place children in the most costly out-of-home settings, such as residential care. The public agencies involved attribute this success in part to the better coordination of services that match client needs. For example, in Wisconsin’s Wraparound Milwaukee program in Milwaukee County, the publicly operated HMO has reduced the number of children in residential treatment and, as a result, costs are almost 40 percent less per child than under the previous system. Moreover, the program’s Mobile Crisis Team’s gatekeeping functions and development of treatment plans have resulted in a 55-percent reduction in inpatient hospital days as well as nearly 200 fewer children in need of residential care between 1994 and 1997. Furthermore, reinvestment of moneys saved from reducing the use of residential treatment has enabled the project to serve 44 percent more children with the same moneys. Child welfare officials are encouraged by other service-delivery improvements as well. Most agency officials we surveyed believe children and families are receiving more services that better match their needs under managed care. For example, in Sarasota County’s managed care initiative in Florida, the lead agency subcontracts with service providers whose caseworkers are seeing clients more frequently and providing more intensive services in the family’s home than public workers did prior to the managed care initiative. Managed care has also improved services by making them available to those who would otherwise not receive the services they need. Kansas’ foster care initiative, for example, includes one lead agency with a provider network dedicated to serving children and families in the extremely rural, westernmost part of the state. Historically, these children were hard to serve because of their remote location, but they now have available to them a provider network offering an array of services. Other initiatives also report improvements that have resulted in children and families achieving permanency goals more quickly under managed care, but they have yet to document reduced costs. For example, in Illinois’ Performance Contracting initiative in Cook County, private foster care agencies are more aggressively moving children toward permanency by providing services, such as aftercare and counseling, that enable children to rejoin their families, be adopted, or live with subsidized guardians. After 3 months of operation, the initiative has yet to realize any cost savings because of the additional state dollars invested in services for foster care contractors to find children permanent homes.
However, the state projects that almost two-thirds more children will be in permanent living arrangements at year’s end than in the previous year because providers are now more effectively managing their cases. In still other initiatives, early results are mixed. While private contractors have met performance standards in some areas, they have fallen short in others. Kansas’ foster care managed care initiative, for example, reported that, after the first 10 months of operation, its lead agencies successfully surpassed performance standards related to the quality of children’s care, such as ensuring that children are safe from maltreatment and in stable placements. However, they were less successful in meeting standards that could result in cost efficiencies, such as reuniting families in a timely manner. After its first year of operation, according to state officials, managed care had not yet resulted in improvements in the rate at which children leave foster care for more permanent living arrangements, or yielded cost savings. Finally, public officials believe managed care is increasing community awareness and support for the vulnerable population, particularly at-risk adolescents, that resides within their boundaries. For example, the managed care initiative in Boulder County, Colorado, targets adolescents in or at risk of residential treatment whose likely permanency goal is to live in the community and not with their families. According to initiative officials, building community-based networks of care has increased community concern and involvement with these adolescents, which county officials believe will facilitate reintegration into the community. Child welfare agencies face growing caseloads of children and escalating costs. At the same time, they must ensure that these children remain safe and search for the most appropriate permanent living arrangements. In addition, program officials and policymakers alike have been frustrated with many of the characteristics of the current service-delivery system that is intended to provide care for these children and their families. They observe a system that often keeps children in care longer than necessary, in part, because of fragmented services and few financial incentives to provide better services to move children out of care faster. Many public officials responsible for the care of these children are looking to managed care as a way to change how their state or locality approaches the financing and delivery of child welfare services. While there is no single managed care approach, in general, states and localities are (1) experimenting with capitated payments to transfer financial risk to providers and (2) managing children’s care through a single point of entry to a full array of services. Simultaneously, they are introducing quality assurance strategies to maintain a balance between the desire to control costs and to ensure service quality and children’s safety. In what has been a publicly managed system, however, new contractual arrangements are shifting financial and service coordination responsibilities to the private sector in some states. Where public agencies opt to retain these responsibilities, they are changing their approach to coordinating children’s care and purchasing needed services. Public agencies experimenting with managed care view it as a strategy that promotes flexibility in a fragmented service-delivery system while attempting to ensure accountability for controlling costs and improving service outcomes.
Because they anticipate legislative and policy changes that may reduce child welfare budgets, public and private agency officials alike have felt a sense of urgency to proactively pursue or prepare themselves for system reform. Even so, implementing managed care is a dynamic process that will require time to evolve and to be evaluated. To date, the application of managed care arrangements in the child welfare system is still in its infancy and remains largely untested. States and localities expect to continue refining their initiatives as service and cost data become available and evaluations assess the efficacy of managed care to improve service outcomes for children and families. As more children are served under managed care arrangements, however, three outstanding issues need resolution. First, public agencies need to address the cash flow problems associated with an approach that requires public agencies to provide prospective, capitated payments to service providers but receive reimbursement for the federal share of costs only after the delivery of services. Where there is greater funding flexibility—either through blended or pooled funding arrangements or federal waivers, for example—public agencies stand a better chance of reducing or eliminating the service access problems often associated with different eligibility requirements in categorical funding streams. Second, the need for good service and cost data is paramount if public agencies expect to set reasonable and appropriate contract rates and performance standards. Public agencies must continue to develop and adapt their management information systems in order to make additional changes or provide their managed care partners feedback that could further improve policies and procedures for serving children and families in an effective, yet cost-efficient, manner. Finally, public agencies must continue to develop and refine strategies to hold their private partners accountable for achieving desired outcomes and for developing the capacity to continuously measure and report their progress toward meeting performance goals. Such efforts are necessary to enable public agencies to report to policymakers at all levels on the effectiveness of the new system in meeting the needs of children and families. We obtained comments on a draft of this report from HHS and state and county public child welfare officials responsible for the managed care initiatives in the four case study sites. HHS provided two general comments and additional technical information, which we incorporated in the report as appropriate. First, HHS acknowledged that the federal role in child welfare managed care has been limited. However, it said that ACF has initiated and participated in a number of activities, such as internal training sessions and national conferences, that provided information about managed care concepts. Second, HHS noted that states’ Statewide Automated Child Welfare Information Systems (SACWIS) should provide data states need to implement managed care. We agree that SACWIS could provide some necessary information; however, most states are still developing their systems, and SACWIS’ overall usefulness in managed care is unknown. Responsible child welfare officials from the four case study sites generally agreed with the report’s findings and provided additional technical information about their child welfare managed care initiatives, which we incorporated in the report as appropriate.
Pursuant to a congressional request, GAO reviewed states' efforts to incorporate managed care into their child welfare systems, focusing on determining the: (1) extent to which public agencies are using managed care to provide child welfare services; (2) financial and service delivery arrangements being used under a managed care approach; and (3) challenges child welfare agencies face as they develop and implement managed care, and the results of such efforts to date. GAO noted that: (1) nationwide, public child welfare agencies have implemented managed care projects or initiatives in 13 states, with new initiatives being planned or considered in more than 20 other states; (2) most of the ongoing initiatives involve foster children with the most complex and costly service needs; (3) currently, only about 4 percent of the nation's child welfare population is being served under managed care arrangements; (4) public agencies have contracted with experienced nonprofit, community-based providers in their service systems to implement managed care initiatives; (5) for-profit managed care companies have not had a major role in implementing managed care in child welfare; only a few jurisdictions are using for-profit companies to administer and provide child welfare services; (6) the majority of the ongoing child welfare managed care initiatives have established a capitated payment system; (7) lacking experience and uncertain about the feasibility of new fixed payments, some initiatives also use mechanisms to limit the financial risk that has been shifted to providers; (8) managed care initiatives require service providers to organize and coordinate a full array of services to ensure that appropriate and necessary services are available to children and their families; (9) most of the public agencies responsible for the initiatives have transferred case management functions to private entities; (10) the public sector, however, continues to play an active role at strategic points throughout the service-delivery process; (11) to ensure that providers' cost-controlling strategies do not jeopardize service quality or access to care, public agencies use various quality assurance techniques to hold service providers accountable for outcomes; (12) as more public child welfare agencies move toward managed care, public officials and their private contractors face several challenges; (13) as they develop and implement a capitated payment method, agencies need to find ways to maintain adequate cash flow; (14) agencies face the difficult task of developing sound management information systems; (15) both public and private agencies face new responsibilities as some traditionally public functions shift to the private sector and new roles emerge; (16) these changes may require these agencies to develop new procedures for case management and program administration and to provide additional training for both public and private employees; and (17) despite these challenges, public officials are encouraged by some positive, though limited, early results from managed care initiatives.
Community policing is generally defined as a shift in police efforts from a solely reactive response to crime to also proactively working with residents to prevent crime. Citizens, police departments, and other agencies are to work together to identify problems and apply appropriate problem-solving strategies. The practice of community policing began to emerge in the late 1970s. The Department of Justice (DOJ) has supported community policing efforts through various implementation and research grants for about the last 15 years. The Public Safety Partnership and Community Policing Act of 1994 (Community Policing Act)—Title I of the Violent Crime Control and Law Enforcement Act of 1994—authorizes DOJ to make grants for the hiring or rehiring of law enforcement officers to participate in community policing. In addition, the Community Policing Act authorizes DOJ to award grants for the purchase of equipment, technology, and support systems if the expenditures would result in an increase in the number of officers deployed in community-oriented policing. It also authorizes grants for other programs such as providing specialized training to enhance skills needed to work in partnership with members of the community. The purposes of the grants are to increase police presence, expand and improve cooperative efforts between law enforcement agencies and members of the community to address crime and disorder problems, and otherwise enhance public safety. The Community Policing Act authorizes $8.8 billion in grants over a 6-year period to states, local governments, Indian tribal governments, other public and private entities, and multijurisdictional or regional consortia. Fiscal year 1995 appropriated funds for the Community Policing Act totaled $1.3 billion. The President’s fiscal year 1996 budget requests about $1.9 billion for public safety and community policing grants. DOJ has used three programs to date—COPS: Phase I, COPS FAST, and COPS AHEAD—as part of its efforts to increase by 100,000 the number of sworn law enforcement officers over current levels by providing community policing grants. COPS: Phase I was open only to jurisdictions not funded due to a scarcity of funds under DOJ’s 1993-1994 Police Hiring Supplement Program (PHSP). COPS FAST is open to state, local, and other public enforcement agencies, Indian tribal governments, other public and private entities, and multijurisdictional or regional consortia that employ sworn law enforcement officers and that serve populations under 50,000. COPS AHEAD is open to those agencies serving populations of 50,000 or more. DOJ community policing guidelines provide that jurisdictions that had received COPS: Phase I funding were also eligible to receive additional funding under COPS AHEAD if the combined hiring under both programs did not exceed 3 percent of the actual October 1, 1994, total police force level. In addition, an agency that received funding under COPS: Phase I is eligible to receive additional funding under COPS FAST. DOJ has also announced additional programs. The guidelines also stipulate that federal grant funds awarded under the COPS FAST and COPS AHEAD programs cannot exceed 75 percent of the total salary and benefits of each officer up to a maximum of $75,000 per officer for a 3-year period. Grantees are required to provide at least 25 percent of officer costs and commit to retaining new officers after the grant expires.
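The funding rules for COPS FAST and COPS AHEAD lend themselves to a short worked example. The sketch below applies the 75-percent federal share and the $75,000 per-officer cap over the 3-year grant period; the salary figures are hypothetical.

# Worked example of the COPS FAST and COPS AHEAD funding rules: the
# federal grant covers up to 75 percent of each officer's salary and
# benefits over 3 years, capped at $75,000 per officer; the grantee
# pays at least the remaining 25 percent. Salary figures are hypothetical.

FEDERAL_SHARE = 0.75
PER_OFFICER_CAP = 75_000

def split_officer_cost(annual_salary_and_benefits):
    """Return the (federal, local) split of one officer's 3-year cost."""
    three_year_cost = 3 * annual_salary_and_benefits
    federal = min(FEDERAL_SHARE * three_year_cost, PER_OFFICER_CAP)
    local = three_year_cost - federal
    return federal, local

# At $30,000 a year, the 3-year cost is $90,000: federal $67,500,
# local $22,500 (exactly the 25-percent minimum match).
print(split_officer_cost(30_000))
# At $40,000 a year, the $75,000 cap binds: federal $75,000,
# local $45,000 (37.5 percent of the $120,000 total).
print(split_officer_cost(40_000))

Note that for any officer whose 3-year cost exceeds $100,000, the cap binds and the jurisdiction's share rises above the 25-percent minimum, one reason some jurisdictions later cited the inadequacy of the grant to cover the full cost of new officers.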
The application and selection processes varied somewhat between the COPS: Phase I program and the COPS FAST and COPS AHEAD programs because they were administered differently by separate offices within DOJ. BJA administered the application and selection of COPS: Phase I awards. The Attorney General created a separate office, the COPS Office, to administer the Community Policing Act grants. This office designed the application and selection processes for the COPS FAST and COPS AHEAD grants. The COPS Office is to monitor the grants awarded under all three programs. The Community Policing Act requires that each application (1) include, among other things, a long-term community policing strategy and a detailed implementation plan; (2) demonstrate a specific public safety need; and (3) explain the applicant’s inability to meet its public safety needs without federal assistance. The act makes special provisions for applications of local government or law enforcement agencies in jurisdictions with populations of less than 50,000 and for nonpolice hiring grants of less than $1 million by allowing the Attorney General to waive one or more of the grant application requirements and to facilitate the submission, processing, and approval of these applications. The difference in the application process between the COPS: Phase I grants and the COPS FAST and COPS AHEAD grants is the stage at which jurisdictions could begin recruiting and hiring additional officers. In the traditional grant process used for COPS: Phase I, jurisdictions submitted a detailed application to BJA for review and waited for final grant approval and award before beginning officer recruitment, hiring, and training. For the COPS FAST and COPS AHEAD grant programs, the COPS Office implemented a two-step application process that allowed jurisdictions to recruit, hire, and train officers while final grant applications were being reviewed. In response to a suggestion from the U.S. Conference of Mayors to expedite the grant application process for the COPS FAST and COPS AHEAD programs, the COPS Office designed a two-step application process to try to get new officers on the street months earlier than they would be under traditional grant award processes. First, for COPS AHEAD, the COPS Office used a one-page initial application to determine the number of officers jurisdictions could recruit and train. Approved jurisdictions were notified of proposed funding levels, cautioned that the funding was tentative, and warned that if the subsequent application was not approved, the COPS Office would not be held liable for officers hired. In COPS FAST, grant decisions were based upon one-page applications. Second, the selected jurisdictions in both programs were to submit additional information to the COPS Office prior to issuance of formal awards. COPS AHEAD agencies were asked to submit detailed applications, while COPS FAST agencies supplied brief budget and community policing information. The type of information and amount of detail required in this second application differed between COPS FAST and COPS AHEAD programs. COPS FAST applicants were allowed to provide less detailed information because the Attorney General waived certain requirements for communities serving under 50,000 residents. BJA awarded the COPS: Phase I grants based primarily on public safety need, while the COPS Office used commitment to community policing as the primary eligibility criterion for the COPS FAST and COPS AHEAD grants.
COPS: Phase I grantees were competitively selected on the basis of the following five criteria used for PHSP applicants: (1) public safety need (40 percent), (2) community policing strategy (30 percent), (3) implementation plan (10 percent), (4) continuation and retention plan (10 percent), and (5) additional resource commitments (10 percent). The eligible jurisdictions for COPS: Phase I were those 2,507 jurisdictions that applied for the 1993-1994 PHSP but did not receive funding. BJA considered applications from both traditional law enforcement jurisdictions—such as municipal, county, and state police—and special law enforcement jurisdictions—such as airports, parks, and transit authorities. A BJA official said that most of the COPS: Phase I applicants demonstrating a high or moderate need based on the above five factors received funding. In addition, 16 jurisdictions received waivers of the local match requirement after demonstrating extraordinary economic hardships. The Assistant Director for Grants Administration said the intent of the COPS Office was to award COPS FAST and COPS AHEAD grants to as many applicant jurisdictions as funds allowed. However, after receiving more applications than it had expected—about 8,000 of the approximately 15,000 law enforcement jurisdictions applied—the COPS Office decided to consider in COPS FAST and COPS AHEAD only applications from traditional law enforcement jurisdictions. Jurisdictions with satisfactory COPS FAST and COPS AHEAD applications were approved for funds based on the number of officers on board on October 1, 1994. About 92 percent of jurisdictions that applied for a COPS grant received initial award approval. COPS Office staff said that if an application was incomplete, a COPS Office grant adviser contacted a local official for further information. In some cases, jurisdictions were referred for technical assistance to help them plan and implement a community policing strategy. On July 1, 1995, the COPS Office and the Community Policing Consortium entered into a cooperative agreement for the provision of certain training and technical assistance services. Table 1 shows the authorized hiring scale for approved jurisdictions. Table 2 summarizes information about the grant application and selection process for the three COPS programs. The Attorney General established the COPS Office to administer all Community Policing Act grants, including monitoring and evaluation to assess the financial and programmatic impact of the grants. Grantees are required to submit progress and accounting reports and are to be contacted periodically by telephone. Some of the financial monitoring is to be done by DOJ’s Office of Justice Programs (OJP). An intra-agency agreement between the COPS Office and OJP allows OJP to provide certain accounting and financial monitoring to track grantee compliance with audit requirements, as well as to prepare financial status reports. According to the Assistant Director for Grants Administration, the frequency and extent of evaluation to assess a jurisdiction’s grant implementation process will depend on the amount of the grant award, with the level of scrutiny increasing for larger awards. A COPS FAST jurisdiction that received a grant award for only one law enforcement officer—a group of about 6,200 jurisdictions—is, for example, to receive a minimum of telephone contacts and have its periodic progress reports reviewed.
A COPS AHEAD jurisdiction, however, which may have received funding for a large number of officers, should expect site visits, frequent telephone contacts, and close review of its community policing efforts through its periodic progress reports. COPS Office staff said that each jurisdiction is to complete periodic progress reports that will outline information on each officer hired and the specific activities and achievements of its community policing program. The COPS Office is to conduct evaluations to review how the jurisdiction interacts with the community, what kind of training is provided to officers and residents, and what specific strategies are used to prevent crime. The COPS Office is to select a sample of jurisdictions for continuous impact evaluations. The Policy Support and Evaluation Unit within the COPS Office is to conduct these evaluations. DOJ’s National Institute of Justice is also expected to conduct impact evaluations. Impact evaluations are to be conducted on fewer sites than the process evaluations and are to assess how the quality of life in the community has been affected by community policing efforts. The COPS Office is to examine, for example, crime and arrest data, victimization surveys, and citizen surveys to evaluate the impact of the grants. The periodic progress reports are also to be used to evaluate the impact of the grants. We estimated that about 42 percent of all law enforcement jurisdictions applied for a COPS FAST or COPS AHEAD grant. Jurisdictions eligible for a COPS AHEAD grant were much more likely to apply than were jurisdictions eligible for a COPS FAST grant. About 81 percent of the jurisdictions eligible for the COPS AHEAD program applied; about 49 percent of those eligible for a COPS FAST grant applied. However, regardless of the program, generally, the higher the crime rate, the more likely a jurisdiction was to apply for a grant. Table 3 shows the application rates for law enforcement jurisdictions by program eligibility and number of crimes reported per 1,000 population served. Crimes reported in the 1993 UCR included violent crimes of murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault and the property crimes of burglary, larceny-theft, and motor vehicle theft. As previously mentioned, an estimated 92 percent of all jurisdictions that applied for a COPS FAST or COPS AHEAD grant received one. The eligibility criteria for these grant programs included the jurisdiction’s commitment to community policing, the type of law enforcement jurisdiction, population, and number of sworn officers on the force. Overall, jurisdictions with populations of less than 50,000 were more likely to receive a COPS grant than were larger jurisdictions. Approximately 93 percent of COPS FAST applicants were accepted, while about 74 percent of the jurisdictions applying for a COPS AHEAD grant were accepted (see table 4). We found no relation between crime rates and whether an applicant jurisdiction was awarded the grant. Table 4 shows the disposition of applications for jurisdictions by program and crime rate. At the Committee’s request, we conducted telephone interviews with a random sample of 289 nonapplicant jurisdictions to find out why they did not apply for a grant. From our telephone survey, we estimated that 62 percent (plus or minus 11 percent) of the nonapplicant jurisdictions did not apply for a COPS FAST or COPS AHEAD grant due to cost-related factors. 
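The crime-rate groupings shown in table 3 rest on a simple calculation: reported UCR crimes divided by population served, scaled to a rate per 1,000. The sketch below applies it to two hypothetical jurisdictions and notes the program each would be eligible for.

# Sketch of the crime-rate grouping used in the analysis above:
# reported crimes per 1,000 population served, computed from UCR-style
# counts. The sample jurisdictions and their figures are hypothetical.

def crimes_per_1000(reported_crimes, population):
    return reported_crimes / population * 1000

jurisdictions = [
    ("Town A", 240, 8_000),      # (name, 1993 reported crimes, population)
    ("City B", 4_100, 60_000),
]
for name, crimes, pop in jurisdictions:
    rate = crimes_per_1000(crimes, pop)
    program = "COPS FAST" if pop < 50_000 else "COPS AHEAD"
    print(f"{name}: {rate:.1f} crimes per 1,000 ({program}-eligible)")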
For the following question: “How important was each of the following reasons in your agency’s decision to not apply for a COPS program grant?”, we asked respondents to assign one of five levels of importance to each of five reasons. Next, we asked respondents to indicate the most important reason for not applying. We also allowed respondents to identify any other reasons that affected their jurisdiction’s decision. The estimated 62 percent included about 40 percent (plus or minus 12 percent) of nonapplicants who said uncertainty about the jurisdiction’s ability to meet the requirement for continued officer funding after the 3-year grant period was the most important reason for not applying. An additional 18 percent cited the 25-percent local match requirement as the most important reason in their decision; 4 percent cited other financial reasons. An additional 8 percent said the jurisdiction did not apply either because of a lack of information on the grants or because of problems meeting the application deadlines, 3 percent mentioned local political or management decisions, and 4 percent cited various other reasons. In addition, some respondents cited the inadequacy of the $25,000 per year per officer grant to cover the full cost of new officers. According to DOJ’s Bureau of Justice Statistics, average starting salaries for entry level officers range from $18,710 to $26,560, with average operating expenditures per officer ranging from $31,500 to $63,400. Table 5 summarizes the results of the importance ratings. We reviewed a random sample of 207 COPS FAST approved applications and found that approximately 84 percent (plus or minus 5 percent) of these jurisdictions—serving populations of less than 50,000—cited property crimes most frequently among their top five ranked public safety issues (from the categories listed on the application form), with almost half of the jurisdictions ranking it as their first or second most important concern. In addition, we estimated that at least half the jurisdictions ranked the following public safety issues among their top five: domestic violence, alcohol-related crimes, drug crimes, vandalism, and violent crimes against persons. Table 6 shows the rank order of the public safety issues. On September 13, 1995, we received written comments from the Director of DOJ’s Office of Community Oriented Policing Services on a draft of this report. He said that the popularity of the COPS grant programs continues to expand and provided technical clarifications, which we incorporated where appropriate. He also provided some updated information on the progress of the programs since our audit work was completed. The Director’s written comments are reproduced in full in appendix II. We are sending copies of this report to other interested congressional committees and Members and to the Attorney General. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix III. Please call me on (202) 512-8777 if you have any questions concerning this report. As agreed with the Committee and Subcommittee, our objectives were to review various aspects of the COPS programs and describe the grant application, selection, and monitoring processes for COPS: Phase I, COPS FAST, and COPS AHEAD. 
In addition, for COPS FAST and COPS AHEAD we were to compare the crime rates for applicant and nonapplicant jurisdictions, determine why some jurisdictions chose not to apply for the grants, and determine the public safety issues of applicant jurisdictions. To describe the application, selection, and monitoring processes for the COPS programs, we interviewed officials from BJA and the COPS Office, including the Assistant Director for Grants Administration and the co-chiefs of the Grant Monitoring sections. In addition, we reviewed documents used in the grants process, including application forms, selection review forms, and draft monitoring guidelines. To determine and compare crime rates for COPS FAST and COPS AHEAD applicant and nonapplicant jurisdictions, we used application data provided by the COPS Office and UCR data for 1993, which list all law enforcement jurisdictions that report crimes to the Federal Bureau of Investigation. UCR data contained information on population and numbers of reported crimes for jurisdictions. We merged the COPS Office’s listing of applicant jurisdictions with UCR data to identify nonapplicant jurisdictions. Next, we used UCR data to assign jurisdictions to population categories (less than 50,000 and 50,000 and over) and calculated the number of crimes reported per 1,000 population served. We grouped jurisdictions into COPS FAST and COPS AHEAD grant applicants and nonapplicants, and applicants into those approved and those not approved. To determine why jurisdictions chose not to apply for COPS grants, we surveyed a stratified, random sample of nonapplicants by telephone. We limited our survey population to city, local, county, and tribal police. These types of law enforcement agencies account for 91 percent of all jurisdictions. Using UCR population data, we stratified the population into three size groups and selected random samples from each: 0 - <10,000 population (71 from 6,094 jurisdictions); 10,000 - <50,000 (143 from 1,375 jurisdictions); and 50,000 and over (all 170 jurisdictions). We completed 334, or 87 percent, of our planned contacts with the sample of 384 jurisdictions. Fifty contacts were not completed for various reasons, including difficulty in reaching the appropriate respondent and unwillingness of some jurisdictions to respond. Of the 334 contacts made, we completed interviews with respondents in 289 jurisdictions. We found that 45 did not belong in our study population because they had applied for a COPS grant. Most of these jurisdictions were either covered under another jurisdiction’s application (and the application was identified by the other jurisdiction) or not listed on the COPS program’s applicant file as an individual applicant. All survey results have been weighted to represent the total population. To determine the public safety issues of applicant jurisdictions, we reviewed a random sample of 207 of the 3,258 COPS FAST applications that had been received and graded or approved for funding as of April 25, 1995. According to the Assistant Director for Grants Administration, this represents about half of the 6,656 jurisdictions that were given preliminary funding approval. Applications were stored at various locations in the COPS Office. To obtain our sample, we used a random starting point and then took every 16th application from the files. Jurisdictions applying for the COPS FAST program were required to rank order their public safety issues from a list of 16 issues.
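The data-matching and weighting steps described above reduce to a few operations. The sketch below illustrates them; the file names, column names, and merge key are hypothetical assumptions, while the strata sizes and sample counts are those reported in this section.

```python
import pandas as pd

# Merge the COPS Office applicant list with 1993 UCR data to identify
# nonapplicants, then compute crimes reported per 1,000 population served.
ucr = pd.read_csv("ucr_1993.csv")                # hypothetical file name
applicants = pd.read_csv("cops_applicants.csv")  # hypothetical file name

ucr["applied"] = ucr["ori"].isin(applicants["ori"])  # "ori" is an assumed key
ucr["crime_rate"] = ucr["reported_crimes"] / ucr["population"] * 1_000
ucr["size_group"] = pd.cut(ucr["population"],
                           bins=[0, 50_000, float("inf")],
                           labels=["less than 50,000", "50,000 and over"])

# Survey weights for the stratified telephone sample: each respondent
# represents N_stratum / n_stratum jurisdictions (figures as reported).
strata = {"0-<10,000": (6_094, 71),
          "10,000-<50,000": (1_375, 143),
          "50,000 and over": (170, 170)}
weights = {name: N / n for name, (N, n) in strata.items()}
print(weights)  # a small-jurisdiction respondent stands for ~86 jurisdictions
```

Weighting each response by the ratio of stratum size to sample size is what allows results from 289 completed interviews to represent the full population of nonapplicant jurisdictions.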
We did not review COPS AHEAD applications because their statements of public safety needs were included as part of a narrative description of their community policing program, which could be up to 18 pages. It would have been difficult, if not impossible, to identify or infer the relative importance of the public safety concerns from such narrative sources. Our work was done primarily in Washington, D.C., from April to August 1995 in accordance with generally accepted government auditing standards. The Office of Community Oriented Policing Services provided written comments on a draft of this report. The written comments are reproduced in appendix II.
Pursuant to a congressional request, GAO reviewed various aspects of the Community Oriented Policing Services (COPS) Program, focusing on the reasons some jurisdictions did not apply for federal community policing grants. GAO found that: (1) jurisdictions with higher crime rates were more likely to apply for COPS grants; (2) nearly 92 percent of the jurisdictions applying for grants received initial approval; (3) some jurisdictions were uncertain about being able to continue officer funding after their grant expired and about their ability to provide the required 25 percent match; (4) the jurisdictions that did not apply for COPS grants cited cost-related factors as their major concern; and (5) the most frequent crimes reported by COPS Funding Accelerated for Smaller Towns (FAST) applicants were property crime and domestic violence.
IHS oversees the CHS program through 12 area offices. Federal and tribal CHS programs in each of these areas pay for services from external providers if services are not available directly through IHS-funded facilities, if patients meet certain requirements, and if funds are available. IHS conducts an annual assessment to estimate CHS program need. To perform its needs assessment, IHS requests data from area offices and individual CHS programs on health care services they were unable to fund. IHS manages the CHS program through a decentralized system of 12 area offices, which oversee individual CHS programs in 35 states where many American Indian and Alaska Native communities are located. (See fig. 1 for a map of the counties included in the 12 areas. Residence in these counties is generally a requirement for obtaining contract health services.) IHS headquarters is responsible for overseeing the CHS program. Among other things, it sets program policy and distributes CHS program funds to the 12 area offices. The 12 area offices then distribute funds to CHS programs within their respective areas, monitor the CHS programs, establish procedures within the policies set by IHS, and provide programs with guidance and technical assistance. About 46 percent of CHS funds are distributed to federal CHS programs and the other 54 percent to tribal CHS programs. Tribal CHS programs must meet the same statutory and regulatory requirements as federal CHS programs, but they are not generally subject to the same policies, procedures, and reporting requirements established for federal CHS programs. Federal and tribal CHS programs pay for services from external providers if the services are not available at IHS-funded facilities. The services purchased include hospital, specialty physician, outpatient, laboratory, dental, radiology, pharmacy, and transportation services. While programs may have agreements or contracts with providers, they are not required for a provider to be paid. For example, a CHS program may have a contract with a nearby hospital or specialty providers, such as an orthopedic practice, to provide services to American Indians and Alaska Natives served by the CHS program. However, in the event of an emergency, patients have the option of visiting the nearest available provider, regardless of whether that provider has any prior relationship with the CHS program. Patients must meet certain eligibility, administrative, and medical priority requirements to have their services paid for by the CHS program. (See table 1.) To be eligible to receive services through the CHS program, patients must be members of federally recognized tribes and live in specific areas. In addition, patients must meet specific administrative requirements. For example, if there are other health care resources available to a patient, such as Medicaid and Medicare, these resources must pay for services before the CHS program because the CHS program is generally the payer of last resort. If a patient has met these requirements, a program committee (often including medical staff) that is part of the local CHS program evaluates the medical necessity of the service. IHS has established four broad medical priority levels of health care services eligible for payment and a fifth for excluded services that cannot be paid for with CHS program funds. Each area office is required to establish priorities that are consistent with these medical priority levels and are adapted to the specific needs of the CHS programs in their area. 
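The sequence of checks just described (eligibility, then administrative requirements, then medical priority, with payment always contingent on available funds) can be summarized in a short sketch. The function, field names, and single-number priority ceiling below are our illustrative assumptions, not IHS systems or policy.

```python
def chs_disposition(patient, service, funds_available, priority_ceiling=1):
    """Simplified sketch of the CHS review sequence described above."""
    # Eligibility: membership in a federally recognized tribe and residence
    # in the designated area.
    if not (patient["tribal_member"] and patient["in_delivery_area"]):
        return "deny: eligibility requirements not met"
    # Administrative: the CHS program is generally the payer of last resort.
    if patient["has_alternate_resource"]:
        return "deny: alternate resource (e.g., Medicare, Medicaid) pays first"
    # Medical priority: pay only for levels the program can fund.
    if service["priority_level"] > priority_ceiling:
        return "deny: care not within medical priority"
    # Funds: an otherwise qualifying request still requires available funds.
    if not funds_available:
        return "defer: requirements met, but CHS funds not available"
    return "approve"

patient = {"tribal_member": True, "in_delivery_area": True,
           "has_alternate_resource": False}
print(chs_disposition(patient, {"priority_level": 2}, funds_available=True))
# -> "deny: care not within medical priority" for a program funding level I only
```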
Federal CHS programs must assign a priority level to services based on the priority system established by their area office. Funds permitting, federal CHS programs first pay for the highest priority services (priority level I: emergent/acutely urgent care), and then for all or only some of the lower priority services they fund. Tribal CHS programs must use medical priorities when making funding decisions, but unlike federal CHS programs, they may develop a system that differs from the set of priorities established by IHS. There are two primary paths through which patients may have their care paid for by a federal CHS program. The subsequent sections generally describe these two paths, which IHS officials told us federal CHS programs are expected to follow. First, a patient may obtain a referral from a provider at an IHS-funded health care facility to receive services from an external provider, such as a hospital or office-based physician. That referral is submitted to the CHS program for review. If the patient meets the requirements and the CHS program has funding available, the services in the referral are approved by the CHS program and a purchase order is issued to the external provider and sent to IHS’s fiscal intermediary. Once the patient receives the services from the external provider, that provider obtains payment for the services in the approved referral by sending a claim to IHS’s fiscal intermediary. Second, in the case of an emergency, the patient may seek care from an external provider without first obtaining a referral. Once that care is provided, the external provider must send the patient’s medical records and a claim for payment to the CHS program. At that time, the CHS program will determine if the patient meets the necessary program requirements and CHS funding is available for a purchase order to be issued and sent to the fiscal intermediary. As in the earlier instance, the provider obtains payment by submitting a claim to IHS’s fiscal intermediary. Patients seeking to have their care paid for by tribal CHS programs follow similar pathways, but these programs have certain flexibilities. For example, while some tribal CHS programs also contract with IHS’s fiscal intermediary to pay claims, they may also utilize other arrangements. (See fig. 2 for an overview of these two paths for a patient to access the CHS program.) Within either of these pathways, if the CHS program determines that the patient’s service does not meet the necessary requirements or funding is not available, it denies CHS funding. It may also defer funding a service. The CHS program may issue a deferral when CHS funds are not available for a service but the patient has otherwise met the eligibility and administrative requirements. IHS conducts an annual assessment to estimate the CHS program’s unmet need, which helps inform its budget request for the CHS program. To gather information for its needs assessment, IHS headquarters sends an annual request for information to each of the 12 area offices asking them to report information from the federal and tribal CHS programs in their respective areas. The annual request contains a template that asks each area office to provide, among other things, summary counts of deferrals and denials that were recorded by the CHS programs in their areas. For example, each area office is asked to provide areawide totals of the number of new deferrals that remained unfunded at the end of the fiscal year. 
They are also to provide summary counts of denials that have been issued for each of eight categories of denial reasons, regardless of the type of service denied. The eight categories generally correspond to the CHS program’s eligibility, administrative, and medical priority requirements. Although funding for a service may be denied for multiple reasons, programs are required to categorize each denial by a single primary reason. IHS uses the data recorded by the individual CHS programs and collected by the area offices to develop an estimate of the CHS program’s unmet need. (See fig. 3.) To develop its estimate, IHS headquarters adds the total number of reported deferrals and the total number of denials reported in one of eight IHS-defined denial categories: “care not within medical priority.” According to IHS, CHS programs are only to record a denial as “care not within medical priority” to indicate that the patient met eligibility and administrative requirements, but the care requested was not within one of the medical priority levels for which funding was available. For example, a program that determines it only has funding available to pay for care designated as priority level I may deny a request to pay for care designated as priority level II because the care requested was not within the medical priority for which funding was available. Although IHS requests that the area offices report data from both federal and tribal CHS programs, it cannot require tribal CHS programs to report these data. Therefore, IHS officials told us they make an assumption in their assessment of program need that most tribal CHS programs do not report deferral and denial counts to the area offices. Because tribal programs receive about half of IHS’s CHS funding, and because IHS believes that tribal CHS programs’ experiences are similar to federal programs, IHS takes the data reported by area offices and multiplies them by two to calculate an estimate of the total number of deferrals and denials for the entire CHS program. IHS then multiplies this count of deferrals and denials by an estimated average cost per claim (calculated using a weighted average of the costs for inpatient and outpatient paid CHS claims) to develop an estimate of the funds needed for the CHS program. To this estimate, IHS adds data from the CHS program’s Catastrophic Health Emergency Fund (CHEF), a fund that IHS headquarters administers to reimburse CHS programs for their expenses from high-cost medical cases. Specifically, IHS adds the total billed charges from services for which CHS programs sought reimbursement from IHS headquarters through CHEF, but that CHEF was unable to fund. (See app. II for further discussion of CHEF.) Due to deficiencies in IHS’s oversight of data collection, the unfunded services data on deferrals and denials that IHS used to estimate program need are incomplete and inconsistent. IHS does not have complete deferral and denial data from all federal and tribal CHS programs to estimate CHS program need. While IHS headquarters told us that area offices submit a report on unfunded services from their federal and tribal CHS programs in response to the annual request, these reports did not include data from all federal or tribal CHS programs. Of the 66 federal CHS programs that responded to our survey, 5 reported that they did not submit any deferral or denial data to their area offices in response to IHS’s annual request in fiscal year 2009. 
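Restated compactly, the needs estimate described above combines four quantities. The sketch below is a minimal reproduction of that arithmetic; the function and parameter names and the example figures are illustrative assumptions rather than IHS data or systems. The reporting gaps noted above, and the doubling step below, both bear directly on the inputs to this calculation.

```python
def estimate_chs_need(deferrals, priority_denials, avg_cost_per_claim,
                      chef_unfunded_charges):
    """Sketch of IHS's needs-assessment arithmetic as described above.

    deferrals / priority_denials: areawide totals reported to headquarters
        ("care not within medical priority" is the only denial category used)
    avg_cost_per_claim: weighted average cost of inpatient and outpatient
        paid CHS claims
    chef_unfunded_charges: total billed charges CHEF was unable to fund
    """
    reported = deferrals + priority_denials
    # The doubling step assumes that nonreporting tribal programs (roughly
    # half of CHS funding) mirror the federal experience. Note that it also
    # doubles any tribal counts that were in fact reported; in one area's
    # fiscal year 2009 report, discussed below, 2,901 of 4,858 denials came
    # from tribal programs and would be double-counted this way.
    programwide = reported * 2
    return programwide * avg_cost_per_claim + chef_unfunded_charges

# Illustrative figures only:
print(f"${estimate_chs_need(10_000, 40_000, 1_500, 5_000_000):,.0f}")
# -> $155,000,000
```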
IHS officials acknowledged that they did not follow up with federal CHS programs to ensure they submitted data. Although not required, tribal programs may choose to submit deferral and denial data to IHS, and the agency asks the area offices to include tribal data in their annual reports. Of the 103 tribal CHS programs that responded to our survey, 30 indicated that they collected data on unfunded services and submitted these data to their area offices in response to IHS’s annual request in fiscal year 2009. IHS officials acknowledged that the agency needed to provide more outreach and technical assistance to encourage tribal programs to submit data in response to IHS’s annual request. For example, they told us that one area office undertook such efforts during one fiscal year and succeeded in eliciting data submissions from more tribes. Because IHS has not encouraged all programs to report unfunded services data, its data collection activities are not consistent with the Standards for Internal Control in the Federal Government, which state that an organization’s management should provide reasonable assurance of the reliability of its reporting data for the agency to achieve its goals—in this instance, IHS’s goal to appropriately determine CHS program need. As we have also previously reported, the ability to generate reliable estimates is a critical function for agency management; having accurate data contributes to the reliability of the estimate. In addition, IHS’s report template was not designed to allow the agency to collect complete information for estimating need because it did not distinguish between the federal and tribal CHS programs that did report data. Because IHS headquarters only requested areawide totals in its report template, IHS officials were unable to determine from the submitted area reports which CHS programs had reported data. IHS officials told us they did not know how many federal or tribal CHS programs reported data, although they estimated that most of the data were from federal programs and only a small percentage were from tribal programs. To account for the lack of complete data from tribal programs, when conducting its needs assessment, IHS doubled the count of unfunded services it received from the area offices. However, this means that any data received from tribal programs were being doubled along with the federal data, contributing to an unreliable estimate of need. For example, in fiscal year 2009, one area office reported a total of 4,858 denials for “care not within medical priority,” which IHS doubled to account for the lack of complete data from tribal programs. However, we determined that 2,901 of the 4,858 denials were reported by tribal CHS programs. IHS officials told us that they do not distinguish federal and tribal CHS program data in their annual data reporting template because they believe the data they receive from tribal CHS programs are so limited that they would not significantly affect their estimate of need. Additionally, CHS programs inconsistently categorized a specific type of denial reason that is reported to IHS headquarters and used in its estimate of CHS program need because IHS has not provided guidance on this issue. CHS programs can deny care for multiple reasons, but IHS requires CHS programs to select a primary reason for denial.
Specifically, IHS officials told us that IHS only counted those denials with a primary reason identified as “care not within medical priority” in its needs assessment because these services were denied solely because funds were not available. However, neither IHS headquarters nor the area offices had provided guidance to federal CHS programs on how to select this primary reason for denial. Consequently, we found that some area office and CHS program officials defined this type of denial reason in different ways. Officials from four area offices told us that they defined denials for “care not within medical priority” as also including services denied for administrative reasons or services that are excluded even if CHS funds are available, such as cosmetic or experimental procedures. In our survey of the 66 federal CHS programs, 51 reported that they would apply this denial category if the care requested was an excluded service. One CHS program reported that it was unaware of the requirement to identify a primary reason for denial. Because this category of denial was the only denial reason IHS used in its estimate, inconsistencies in how this denial reason was categorized by CHS programs have directly affected IHS’s estimate of need. Some CHS programs also inconsistently recorded deferrals because IHS has not provided guidance about how it uses deferral data in its needs assessment. IHS officials told us that both deferral and denial data were used in IHS’s needs assessment. However, officials from one area office reported that their understanding was that only denials were counted in IHS’s needs assessment. In our survey of the 66 federal CHS programs, we found that 15 reported recording a decision to defer a service as both a deferral and a denial (making the count of denials inaccurate). Because IHS uses both deferrals and denials to estimate need, the inconsistent recording of deferrals directly affects IHS’s estimate of need. IHS did not have a written policy documenting how the deferral and denial data it requests annually from the CHS programs would be used in its needs assessment, and IHS officials told us they had not provided training to area offices or CHS programs on how to complete the annual request. This lack of guidance is inconsistent with the Standards for Internal Control in the Federal Government, which state that formally documented policies and procedures provide guidance that, among other things, helps to ensure that staff perform activities consistently across an agency. IHS officials have also identified weaknesses in the deferral and denial data that they used to estimate CHS program need. For example, they told us the data did not capture complete information on needed services that were not requested of the CHS programs because patients may have been discouraged from presenting for care or providers may have chosen not to write referrals if they believed funds were not available to pay for services. IHS officials also told us that these data did not capture the extent to which tribes supplemented their CHS funds with tribal funds to avoid deferring or denying health care services. IHS has initiated steps to examine these weaknesses in its current data and explore other sources of data to estimate CHS program need. In November 2010, IHS convened an Unmet Needs Data Subcommittee as part of its Director’s Workgroup on Improving the CHS Program. The subcommittee was composed of representatives from federal and tribal CHS programs.
In a January 2011 report, the subcommittee noted that IHS’s deferral and denial data had inaccuracies. While the report noted that reliably captured deferral and denial data on all patients would present the strongest evidence of need, it acknowledged that these data were incompletely and inconsistently reported by CHS programs, and recognized that this undermined the reliability of the estimated need IHS reports to the Committees on Appropriations annually in its budget justification. In February 2011, the subcommittee presented options for improving IHS’s assessment of CHS program need to the Director’s Workgroup. Based on these options, the Director’s Workgroup agreed that the subcommittee should explore a new methodology for estimating CHS program funding needs that relies on different sources of data. Rather than relying on deferral and denial data, the new method would use IHS’s existing Federal Disparity Index (FDI). IHS calculates the FDI to estimate the disparity between its overall health care funding and the amount of funding needed to provide care to American Indians and Alaska Natives at a level comparable to the care provided by the Federal Employees Health Benefits Program (FEHBP), which is a nationwide health insurance program available to federal employees. With this new method, IHS would adapt the FDI to calculate an estimate of need for each CHS program. Specifically, each IHS-funded facility would use a standardized tool to (1) calculate what proportion of services is paid for by its CHS program because these services are not available on-site at an IHS-funded facility, (2) estimate the level of CHS funding that would be needed to provide comparable services to those covered by FEHBP, and (3) compare that estimated level of funding to the program’s actual level of funding. As a first step, each IHS area was to pilot the methodology on-site at two of its IHS-funded facilities. Once the pilots were completed, IHS officials told us the Workgroup planned to review the results of these pilots and issue a final report that contains a recommendation for the Director of IHS to consider for approval. As of September 2011, IHS officials said that they had finished the on-site pilots, but they were still making decisions about how best to adapt the FDI method to estimate CHS program need, and they did not have a formal, agency-approved plan for implementing it. Officials indicated that they expected the Workgroup to issue a final report to the Director for approval by the end of calendar year 2011. In addition to the proposed new method for estimating need, the Director’s Workgroup agreed that actions should be taken to improve the agency’s collection of deferral and denial data that is currently used for that purpose. However, as of September 2011, IHS officials told us that the agency had not determined whether it would make improvements to the collection of deferral and denial data because it had not determined how such data would be used if the FDI method is adopted. Even so, officials said that they still see merit in using deferral and denial data to estimate CHS program need and, therefore, IHS may supplement the estimates from the FDI method with deferral and denial data from CHS programs that agency officials believe collect accurate data. IHS officials indicated that, until this decision is made, the agency will continue to collect deferral and denial data from the area offices through its annual request.
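Reduced to arithmetic, the three-step FDI adaptation described above amounts to a proportionality calculation. The sketch below illustrates it; the function and parameter names and the example figures are illustrative assumptions, not IHS's standardized tool.

```python
def fdi_based_need_gap(fehb_comparable_need, chs_share, actual_chs_funding):
    """Sketch of the proposed FDI adaptation described above.

    fehb_comparable_need: estimated funding required to deliver care
        comparable to FEHBP coverage for the facility's user population
    chs_share: proportion of services purchased through the CHS program
        because they are not available on-site (step 1)
    actual_chs_funding: the CHS program's actual funding level
    """
    needed_chs_funding = fehb_comparable_need * chs_share  # step 2
    return needed_chs_funding - actual_chs_funding         # step 3: the gap

# Illustrative figures only: 40 percent of care purchased through CHS,
# $50 million for FEHBP-comparable care, $12 million in actual CHS funding.
print(fdi_based_need_gap(50_000_000, 0.40, 12_000_000))  # -> 8000000 gap
```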
Most federal and tribal CHS programs reported that they did not have CHS funds available to pay for all services for patients who otherwise met eligibility and administrative requirements in fiscal year 2009. In addition, some federal CHS programs reported using problematic funds management practices. Of the 66 federal CHS programs that responded to our survey, 60 reported that they did not have CHS funds available to pay for all services for patients who otherwise met eligibility and administrative requirements in fiscal year 2009. IHS officials told us that most CHS programs establish budgets as a way to help ensure that funds are available throughout the year. However, even with this budgeting, 11 of these 60 CHS programs reported that they depleted their funds before the end of the fiscal year. Officials from three CHS programs we spoke with said their programs experienced multiple high-cost cases in the fourth quarter that depleted their funds. An official from another CHS program noted that the program is located in a rural area and the closest specialty care providers are 3 hours away by car. Therefore, if emergency care is required, the patient must be transported by air, which the CHS official said is expensive. In our survey, each federal CHS program identified the three most common categories of services it deferred or denied in fiscal year 2009. The most commonly cited categories of services were dental services, orthopedic services, vision services, and diagnostic and imaging services. The 60 federal CHS programs that reported not having CHS funds available to pay for all services in fiscal year 2009 varied in the extent to which they had funds available to pay for services in each of the priority levels. Some programs described the circumstances that influenced the extent to which they had funds available to pay for services in fiscal year 2009. (See fig. 4.)
• Thirty-nine of these programs reported having funds available to pay for all priority level I services (emergent/acutely urgent care) and some services in lower priority levels. Some of these CHS programs said that after purchasing all of their priority level I services, they had funds remaining at the end of the fiscal year and were able to use these funds to pay for lower priority services for patients whose services they had originally deferred or denied. For example, officials from one CHS program reported that in fiscal year 2009, they were able to use funds at the end of the fiscal year to provide eyeglasses to children and the elderly, a lower priority service that normally would not have been funded.
• Ten of these programs reported having funds available to pay for all priority level I services, but no services in lower priority levels. Some of these CHS programs reported that they never fund services beyond priority level I because their funds are so limited. An official from one of these programs noted that if a patient’s case was originally deferred or denied because it was not a priority level I service but the patient’s condition became more severe, the case may later be reclassified as a priority level I and the services purchased.
• Six of these programs reported having funds available to pay for some of their priority level I services and some services in lower priority levels. An official from one of these CHS programs told us that they strictly adhere to a weekly budget.
For example, if they approved three high-cost cancer treatment cases one week, they may deny other priority level I cases because they do not have funds remaining to pay for these services. However, if funds in another week are sufficient to pay for all priority level I cases, they may also have funds available to pay for some lower priority services. An official from another of these CHS programs told us that staffing shortages over 2 years resulted in the program paying for services as the requests were received rather than funding them in order of medical priority. The official told us that, as a result, the CHS program paid for some priority level IV services, like durable medical equipment, even though they did not have funds available to pay for all of their priority level I services for the year.
• Five of these programs reported depleting their CHS funds before the end of the fiscal year and reported that they did not have funds available to pay for all priority level I services. One of these programs reported depleting its funds for the fiscal year in the second quarter of fiscal year 2009, two programs reported depleting their funds in the third quarter, and two programs reported depleting their funds in the fourth quarter.
Federal CHS programs we spoke with reported using a variety of strategies to help patients receive services outside of the CHS program in order to maximize the care that they could purchase. For example, strategies noted by some CHS programs included helping patients locate free or low-cost health care or negotiating reduced rates with providers on the patient’s behalf. Although CHS programs are required to identify alternate resources before approving a referral, some officials we spoke with said they have implemented additional measures to help enroll patients in alternate coverage, such as Medicare and Medicaid. For example, one CHS program reported hiring a benefits coordinator who is responsible for helping enroll people in alternate coverage. IHS’s CHS programs are not able to pay for services for all patients who meet program requirements because they must operate within the limited funding available. Whenever a program incurs costs for services, the program incurs legal obligations to make payments. IHS does not authorize programs to incur obligations in excess of their “allowances,” which are distributions of funds that IHS makes to programs from appropriations for contract health services. According to IHS officials, programs are expected to actively manage their funds in order to maximize the care that can be purchased, and defer or deny care when sufficient funds are not available. Officials from five federal CHS programs told us, however, that they approved services when funds were depleted for a fiscal year with the understanding that providers would not be paid until the next fiscal year. For example, one of these officials reported that at the beginning of fiscal year 2009, the program owed $2 million to providers for care provided in fiscal year 2008 for which funds had not been available. At least one of these officials believed that she was not authorized to deny care due to lack of funds. To help ensure compliance with the Antideficiency Act, which generally prohibits federal officers and employees from incurring obligations in excess of appropriations, apportionments, and certain administrative subdivisions of funds, IHS has promulgated a funds management policy. See 31 U.S.C. §§ 1341, 1514, 1517.
The existing policy provides that, even if there is no violation of the Antideficiency Act, agency officials may be subject to administrative discipline should they incur obligations in excess of the funds distributed to them. See Indian Health Manual, Circular 95-19, Administrative Control of Funds Policy; Indian Health Manual, Circular 91-7, Contract Health Service Funds Control. IHS officials told us that the Indian Health Manual needs to be updated to reflect current procedures for the administrative subdivision of funds, among other things, but that the agency does not consider the over-obligation of allowances to be a violation of the Antideficiency Act unless it results in an over-obligation of the related allotment. The reports from these officials suggest significant weaknesses in funds management and violations of IHS policy, creating the potential for violations of the Antideficiency Act. They also suggest significant inconsistencies in the administration of federal CHS programs. When asked about this issue, IHS officials told us that they were not aware that CHS programs had approved services without available funds, but acknowledged that there had been some confusion in the past regarding programs’ authority to deny care when funds were not available. They also noted that the agency guidance on funds management that is provided to CHS program staff is vague and needs to be updated and clarified. The officials told us that the agency plans to update and revise relevant IHS guidance, but had not developed a timeline for these revisions. The officials said that they have delegated responsibility to the area offices for issuing specific guidance to CHS programs, as well as for conducting oversight regarding funds management and other issues. The officials, however, acknowledged that additional guidance and training from IHS headquarters for the CHS programs on funds management would be helpful. Of the 103 tribal CHS programs that responded to our survey, most reported that they did not have CHS funds available to pay for all services for patients who otherwise met eligibility and administrative requirements, with 73 reporting that they depleted their CHS funds at some point during fiscal year 2009. In our survey, each tribal CHS program identified the three most common categories of services that were requested but not funded in fiscal year 2009. The most commonly cited categories were dental services, orthopedic services, prescription drugs, diagnostic and imaging services, and hospital services. Tribal CHS programs reported using a variety of strategies not available to federal CHS programs to expand access to care. Forty-six of the 103 tribal CHS programs that responded to our survey reported supplementing their CHS programs’ funding with tribal funds—funds earned from tribal businesses or enterprises. For example, one tribal CHS program we spoke with used the profits from its tribally funded medical and dental clinics, which served non-IHS patients on a fee-for-service basis, to supplement its CHS funding. Of the 46 programs that reported finding it necessary to supplement their CHS programs with tribal funds, 28 reported contributing as much as was needed each year, while the other 18 reported that their tribal contributions were limited by the availability of funds from year to year. In our survey, tribal CHS programs identified the three most common categories of services paid for with tribal funds in fiscal year 2009.
The most commonly cited categories of services were prescription drugs, dental services, hospital services, and orthopedic services. Five tribal CHS programs we spoke with reported using tribal funds to expand access to contract health services to individuals living outside the designated CHS delivery area, or to pay for services CHS funding would not usually cover. Tribal CHS programs also reported supplementing their CHS funding by using reimbursements from third party payers to pay for CHS services, a strategy not available to federal CHS programs. Thirty-four of the 103 tribal CHS programs that responded to our survey reported using reimbursements for services provided at their IHS-funded facilities from third party payers such as Medicare, Medicaid, or private insurance to pay for additional services through their CHS programs. One tribal CHS program we spoke with reported that more than half of its budget relied on funds from third party reimbursements, although officials noted that even with this supplemental funding, they were still limited to funding priority level I services only. In addition, five tribal CHS programs we spoke with reported using strategies to expand access to care that reduced their reliance on CHS funds. For example, two programs we spoke with were able to directly enroll patients in a state-based insurance program for low-income individuals who did not qualify for Medicaid, and to pay the premiums using tribal funds. For uninsured CHS-eligible patients who are ineligible for government programs, one program reported using its IHS-allocated CHS funds to purchase private insurance coverage under a waiver from IHS. Enrolling eligible patients in alternate coverage reduced the reliance on CHS funds because the CHS program would only have to pay for services to the extent they are not covered by the alternate resources. Another program was able to achieve cost savings by contracting with a third party administrator to process its CHS claims, which allowed it to access a preferred provider network that provided care at discounted rates. Officials from another program reported bringing specialty providers, such as cardiologists and ear, nose, and throat specialists, on-site at their facility to save money, compared to what it would cost to pay providers in the community for individual services. Most of the external providers whom we interviewed reported challenges in determining patient eligibility for CHS payment of services, in obtaining CHS payment, and in receiving communications from IHS on CHS policies and procedures related to payment. Providers stated that these challenges contributed to patient and provider burdens. Thirteen of the 23 providers whom we interviewed reported challenges in determining whether patients presenting for care without a CHS referral were eligible to have services paid by the CHS program. Fourteen providers also reported challenges obtaining timely payment from CHS programs. Lastly, 18 providers noted challenges receiving communications from IHS about CHS policies and procedures related to payment, including having had few, if any, formal meetings with CHS staff and a lack of training and guidance. Thirteen providers whom we interviewed reported challenges determining whether patient services would be approved by the CHS program for payment. Providers typically interact with American Indian and Alaska Native patients when these patients bring a referral from an IHS-funded health care facility.
In the case of an emergency, a patient may seek care without obtaining a prior referral. Thirteen providers said it was especially challenging to determine patient eligibility when patients presented for care without a CHS program referral. Six providers noted that for other payers with which they interact, they are able to electronically check a patient’s eligibility or covered services. However, IHS officials indicated that it is not possible for providers to check electronically whether the CHS program will pay for a service. Five providers indicated that, when possible, they attempted to contact the CHS programs in order to obtain information about a patient’s eligibility. However, those providers said they were generally not able to get in contact with CHS program staff. Moreover, even if a provider determined that a patient met some CHS program eligibility requirements, such as tribal membership, payment was still conditional on whether the CHS program reviewed the patient’s medical record and later determined that the emergency service met medical priority requirements and funds were available. Therefore, providers may not know if they will receive payment for services delivered to the patient until the claim they have submitted to the CHS program is reviewed. In the absence of a process to determine patient eligibility for the CHS program, 12 providers said they submit claims for payment to CHS programs for all patients who self-identified as being American Indian or Alaska Native or eligible for the CHS program. Fourteen providers said that when a patient presented for care with a CHS program referral, the likelihood that they would receive payment for the services delivered to the patient increased. For example, one provider stated that for the care delivered to American Indian and Alaska Native patients without a CHS program referral, about 80 percent of claims were denied; in comparison, about 20 percent of claims were denied when patients had a CHS referral. IHS officials said that denials may occur for a patient who has a referral if the patient presented for care at the external provider before the referral was approved by the CHS program committee. However, they also noted that there were situations in which a referral that had been approved by a CHS program committee could still be denied. For example, if a patient did not apply for alternate resources, such as Medicare and Medicaid, for which the patient was eligible or the provider did not bill other payers for which the patient was eligible, the claim may be denied for CHS payment. Additionally, although CHS programs are required to consider the availability of alternate resources when deciding whether to approve a referral, IHS officials acknowledged that programs may not always take this into consideration when making their decision. Providers reported a number of reasons for which they received denials for payment from CHS programs. While providers said that some of the denials they received were related to patient eligibility, such as a patient living outside of the CHS delivery area, which was noted by four providers, most of the denials they received were related to administrative requirements. Twelve providers indicated that one of the most common reasons for denial was that an alternate resource was available to the patient. 
Other common administrative denial reasons included the availability and accessibility of IHS facilities to deliver services, noted by seven providers, and failure to provide notification within 72 hours of the patient receiving emergency services, noted by six providers. Seven providers also stated that they received denials because the CHS program determined that the care was non-emergent or not within medical priority for which funding was available. In addition, eight providers stated that some denials may have occurred because CHS patients may not have had a clear understanding of CHS policies and procedures related to payment. Eight providers stated that CHS patients could benefit from education on CHS procedures, including the need to obtain a CHS program referral prior to receiving care and the understanding that a CHS program referral does not guarantee payment. Fourteen providers whom we interviewed reported challenges obtaining timely payment from CHS programs. Seven of these providers stated that these delays occurred in obtaining a purchase order. However, six providers stated that after they obtained a purchase order from the CHS program, they received payment from IHS’s fiscal intermediary in a timely manner. In fiscal year 2010, IHS reported that the average number of days between receiving a provider claim and issuing a purchase order was 82 days, 4 days more than the agency’s target of 78 days for that fiscal year. Of the providers whom we interviewed, 12 stated that it had taken several months, or in some cases years, to receive payment for CHS program claims. Seven providers said that these delays tended to occur when the CHS program’s funding for the fiscal year had been depleted. According to IHS officials, delays in issuing purchase orders can be attributed to several factors, including a shortage of the CHS program staff who process purchase orders and the lengthy amount of time it takes providers to send patient medical records needed to make a determination for CHS payment. Fourteen providers stated that the CHS program’s paper-based claims process required a lot of paperwork to be submitted, such as a patient’s medical records, or was otherwise time-consuming. Twelve providers also stated that for some payers with which they interacted, including Medicare and Medicaid, they were able to process claims electronically, which in some cases also allowed them to electronically track a claim’s status. In contrast, to obtain payment for emergency care through the CHS program, providers have had to send paper copies of patient medical records and a paper claim to the CHS program to be reviewed. Seven providers stated that this process had led to delays because CHS staff may lose paperwork and then ask the provider to resubmit the information. However, seven other providers noted that they were electronically submitting claims for payment to IHS’s fiscal intermediary, or working with CHS programs to begin this process, which should reduce the amount of required paperwork. Some providers also stated that it was difficult to determine the status of claims while waiting for approval to be paid. Four providers said that when they contacted CHS program staff to determine the status of claims, the staff were not always able to provide the information. Of these providers, two said that CHS programs did not communicate the status of submitted claims.
Additionally, one provider told us that one federal CHS program with which they interacted did not communicate to them when a claim had been denied. Instead, the CHS program provided no response to the provider’s claim for payment. IHS officials acknowledged that additional agency efforts toward improving customer service are needed to ensure that CHS program staff communicate more promptly with providers. Eighteen providers noted challenges receiving communications from IHS about CHS policies and procedures related to payment, including having had few, if any, formal meetings with program staff and a lack of training and guidance. For example, 10 providers stated that they had never met CHS program staff or did not meet regularly with them, although eight other providers said that they benefited from regular communications with CHS program staff, such as being able to establish good working relationships and to obtain help clarifying CHS program policies and procedures related to payment. According to the Standards for Internal Control in the Federal Government and the Internal Control Management and Evaluation Tool, agency management should ensure that there are adequate means of timely and effective communication with, and obtaining information from, external stakeholders that have a significant impact on the agency achieving its goals, and an agency should employ various means of communication, such as policy and procedure manuals and Internet web pages. By not ensuring that its CHS programs have timely and effective communication with external providers about CHS policies and procedures related to payment, IHS has no reasonable assurance that the agency is achieving its objectives. The providers whom we interviewed generally indicated that their understanding of the CHS program came from experience rather than from IHS communications, such as formal training and guidance. Twelve providers stated that they had at least a basic understanding of CHS policies and procedures for obtaining CHS payments. The providers we interviewed told us that the amount of training they received from IHS varied. While 3 of 4 providers in one IHS area stated that they received recent training from the staff of CHS programs or their area office, 13 providers in other areas told us that they had never received training from IHS staff or had not received training in many years. Of those 13 providers, 6 mentioned that they had not received educational materials, including guidance, about the CHS program. Instead, 6 providers stated that their knowledge of the CHS program had been self-taught or obtained from working with CHS program staff. In contrast, 7 providers stated that other payers with which they interacted provided regular on-site training, guidance manuals, or online resources that allowed them to learn about a payer’s payment policies. IHS officials said that the responsibility for educating providers is delegated to the area offices. According to IHS officials, during past meetings with area office staff, they have emphasized the importance of external provider training and shared area office best practices for educating providers. IHS headquarters officials also stated that, in 2009, they developed a CHS program manual for external providers and sent it to the area offices to be distributed to providers.
However, IHS officials acknowledged that, given the complexity of the CHS program, additional agency efforts are needed to ensure that all IHS areas are engaged in external provider education. In the absence of training from IHS, one provider stated that it had developed its own training on the CHS program. This provider used the experience of one of its staff members who had previously worked for the CHS program to provide training to multiple health care facilities within its health system. However, that staff member had not received any training from either individual CHS programs or the area office since being hired by the provider 4 years ago and, therefore, would not have been aware of any policy changes IHS made during that time. Most providers whom we interviewed reported that challenges with the CHS program, particularly denied payment for services, added to the burden of both patients and providers. Twenty-two providers stated that when care they provided was denied by the CHS program, they billed the patient. Of these providers, 3 stated that, because of the length of time that it took the CHS program to approve or deny a service, they started billing the patient even if a denial had not yet been received. For example, 1 provider stated that they used to wait as long as 4 years for CHS programs to make claims decisions, but they now bill the patient if they do not receive communication from CHS programs within a timeframe typical of other payers. Twelve providers told us that, for care denied by CHS programs and billed to patients, they either were unable to obtain payment or found that patients did not apply for provider payment assistance programs. Eleven providers stated that they were able to collect only a small portion of the amounts billed to American Indian and Alaska Native patients or patients for whom payment was denied. Of the 12 providers who discussed how uncompensated care is classified in their financial records, all indicated that it was considered bad debt if the patient was not able to pay for services or qualify for charity care. One provider estimated that it had a collections rate of about 1 percent for services billed to patients denied by the CHS program. The provider noted that while CHS patients accounted for about 30 percent of its patient population, they accounted for about 85 percent of the provider’s bad debt. Ten providers stated that when a patient’s bill was not paid, the account was turned over to collections. In addition, 18 providers had a charity care program, which offered reduced charges or free care to patients who met income and other requirements and was available to patients whose care was denied for payment by the CHS program. However, 8 of these providers stated that patients for whom CHS program payment was denied generally did not apply for charity care, and 8 of the other 10 providers did not mention or did not have information on the number of patients denied by the CHS program who applied for charity care. Providers varied in whether they reported that this uncompensated care affected their operations. Ten providers, including five of the eight critical access hospitals that we interviewed, reported that the amount of uncompensated care associated with the CHS program affected them financially by, among other things, limiting their ability to purchase new equipment or resulting in increased costs to other patients.
One critical access hospital stated that because of the uncompensated care associated with the CHS program, it was seeking new ownership. However, four providers whom we interviewed told us that the amount of uncompensated care had not significantly affected them financially. Additionally, some providers sought payment from other resources for services delivered to patients. For example, eight providers, seven of which were larger than critical access hospitals, stated that they hired a benefits coordinator or were able to get their state health benefits agency to place a benefits coordinator at their facility to assist patients in applying for alternate resources, such as Medicaid. The providers whom we interviewed told us that these burdens had varying effects on the delivery of care to patients. Nine of the 12 providers who discussed this issue with us stated that they provided care to patients regardless of whether they could obtain payment from the CHS program. In addition, the Emergency Medical Treatment and Active Labor Act (EMTALA) requires most hospitals to provide an examination and needed stabilizing treatment, without consideration of insurance coverage or ability to pay, when a patient presents to an emergency room for attention to an emergency medical condition. However, 3 of the 7 office-based providers that we interviewed said that, when dealing with the CHS program, they generally saw only patients who had obtained a CHS program referral. IHS’s CHS program serves as an important resource for American Indian and Alaska Native individuals who need health care services not available at IHS-funded federal and tribal facilities. Despite recent funding increases, most federal and tribal CHS programs that responded to our surveys reported that they did not have funds available to pay for all requested health care services for patients who otherwise met requirements, including emergent and acutely urgent care. However, IHS’s estimate of the extent to which unmet need exists in the CHS program is not reliable because of deficiencies in the agency’s oversight of the collection of unfunded services data on which it relies to develop this estimate. IHS’s acknowledgement of these limitations and the early efforts of its workgroup to explore additional options for estimating need are positive steps. However, IHS has not yet completed the development of its new method for estimating CHS program need using the FDI or made a decision about how it will use deferral and denial data to help estimate CHS program need. Further, as its workgroup has noted, reliably captured deferral and denial data on all patients would present the strongest evidence of CHS program need. Therefore, it continues to be important that the agency take steps to ensure that complete and consistent deferral and denial data are collected. IHS has not provided adequate oversight to ensure that the annual reports it receives from each area office and uses to estimate unmet need include data from all of their federal CHS programs. In addition, although the agency cannot require reporting by tribal CHS programs, its efforts to provide outreach have not been sufficient to encourage such reporting from all tribal programs. Without complete reporting from federal and tribal programs, IHS does not have complete data for its estimate of unmet need.
In addition, the agency’s ability to determine the completeness of the data it collects and take steps to improve reporting is limited because its current template does not provide sufficient detail about which federal and tribal programs are reporting deferral and denial counts. As IHS responds to the future recommendations of its workgroup, the agency should ensure that it expeditiously addresses the weaknesses we identified in the deferral and denial data that provide the agency with information about program need. Given the decentralized nature of the CHS program, effective guidance, training, and oversight by IHS can help ensure that policies and procedures affecting its determination of need are consistently applied across CHS programs. Our survey results suggest that current agency practices have not ensured consistent recording of unfunded services by CHS programs. Documenting how IHS uses unfunded services data to assess CHS program need could help ensure that area offices and CHS programs maintain data collection practices that contribute to the reliability of IHS’s estimate of need. Given that CHS program funds may be depleted before the end of the fiscal year, it is important that CHS programs take steps to maximize the care that patients receive. However, they should not engage in practices that risk incurring obligations in excess of the available funding. IHS officials acknowledge that the guidance that IHS provides to CHS program staff on funds management may not be sufficient to ensure that CHS programs do not engage in problematic funds management practices. Effective communication with providers is an important element of IHS’s oversight to ensure proper CHS program management. The providers we spoke with noted challenges related to their participation in the CHS program that they said created a burden for themselves and their patients. Among their concerns was a lack of timely and effective communication with the individual CHS programs to determine whether or when CHS programs would provide payment for services provided to American Indian and Alaska Native patients. Timely and effective communication between IHS and providers is especially important to ensuring efficient program operations. As acknowledged by IHS officials, the complexity of the CHS program makes this communication particularly important. The challenges that providers described—determining patient eligibility for payment, contacting CHS programs with questions about claims, and ensuring the timely receipt of payment—would be mitigated by improved CHS program processes and communications, including training.
To develop more accurate data for estimating the funds needed for the CHS program and improving IHS oversight, we recommend that the Secretary of Health and Human Services direct the Director of IHS to take the following eight actions:

• ensure that area offices submit data on unfunded services from all federal CHS programs;
• conduct outreach and technical assistance to tribal CHS programs to encourage and support their efforts to voluntarily provide data that can be used to better estimate the needs of tribal CHS programs;
• develop an annual data reporting template that requires area offices to report available deferral and denial counts for each federal and tribal CHS program;
• develop a plan and timeline for improving the agency’s deferral and denial data;
• develop written guidance, provide training, and conduct oversight activities necessary to ensure unfunded services data are consistently and completely recorded by federal CHS programs;
• develop a written policy documenting how IHS evaluates need for the CHS program and disseminate it to area offices and CHS programs to ensure they understand how unfunded services data are used to estimate overall program needs;
• provide written guidance to CHS programs on a process to use when funds are depleted and there is a continued need for services, and monitor to ensure that appropriate actions are taken; and
• develop ways to enhance CHS program communication with providers, such as providing regular trainings on patient eligibility and claim approval decisions to providers.

We provided a draft of this report to HHS for review and comment and subsequently met with HHS and IHS officials to obtain additional information. In its written comments, HHS indicated steps that IHS would take to implement some of our recommendations and discussed steps the agency was taking to implement a new method for estimating CHS program need. HHS and IHS officials subsequently provided us with clarification about the status of IHS’s plans for estimating program need, and HHS submitted revised written comments. HHS’s letter and revised general written comments are reprinted in appendix III. We also provided tribal representatives with an opportunity to present oral comments, and the representatives we spoke with primarily discussed the role of tribal programs in IHS’s needs assessment process. The comments from HHS and the tribal representatives are summarized below. In its original written comments, HHS commented that IHS is making efforts to address the problems identified in our draft report and provided additional information about the development of its new methodology for estimating program need. With regard to our first five recommendations to improve the collection of deferral and denial data from individual CHS programs, HHS agreed that these data are incomplete and inconsistent. HHS also agreed that such data could provide a reliable estimate of need if they were universally and uniformly collected. However, HHS indicated that IHS’s proposed new method for estimating CHS program need by adapting its existing FDI would provide a sufficiently reliable estimate without relying on deferral and denial data. In our draft report, we acknowledged that IHS has taken positive steps to identify and examine the weaknesses in its current data and to explore other sources of data to estimate CHS program need, such as the FDI method.
As HHS noted in its comments, the IHS Director’s Workgroup proposing this methodology has not yet issued a final recommendation to the Director of IHS for approval. Following the receipt of HHS’s original written comments, we met with HHS and IHS officials to obtain clarification about the status of IHS’s plans for assessing CHS program need. The officials confirmed that the agency was continuing to develop the new method by adapting the FDI methodology to measure CHS program need. They said that the new method had not yet been formally recommended to the Director and that IHS did not have a formal, agency-approved plan for implementing it. IHS officials also indicated the agency had not yet determined the extent to which deferral and denial data would continue to be used by IHS headquarters to estimate program need if the FDI method is adopted. However, they indicated that until this decision is made, the agency will continue to collect deferral and denial data from the area offices. As we noted in our draft, the FDI method would be adapted to provide IHS with an estimate of the funding needed to provide care to American Indians and Alaska Natives through the CHS program at a level comparable to the care available through the health insurance program for federal employees. The IHS Director’s Workgroup previously indicated that reliably captured deferral and denial data on all patients would present the strongest evidence of CHS program need. Given that the proposed FDI methodology is still in early development and IHS plans to continue collecting deferral and denial data, we believe that expeditious implementation of our first five recommendations is vital to ensure the data IHS uses to calculate program need are accurate. With regard to our other three recommendations, HHS described in its comments the steps that IHS would take to develop a written policy on how IHS evaluates CHS program need and provide training to CHS program officials on the process to use when funds are depleted. HHS also indicated that the IHS Director’s Workgroup would be providing recommendations for enhancing communication with providers. HHS also provided us with technical comments, which we incorporated as appropriate. Subsequent to our conversation with HHS and IHS officials, HHS submitted revised comments on our report. In the revisions, HHS clarified that the FDI method represents one of multiple options for estimating unmet need that the IHS Director’s Workgroup is considering and clarified that the development of this new methodology is still ongoing. The revisions HHS made to its written comments do not substantively change our response. We also provided tribal representatives, including the 177 tribal CHS programs we surveyed and the three tribal advocacy groups we interviewed, the opportunity to provide oral comments on a draft of this report. Representatives from 11 tribal CHS programs and two tribal advocacy groups provided comments. The most frequent comment related to our recommendation that IHS provide outreach and technical assistance to tribal CHS programs to encourage them to submit data that can be used to assess CHS program need. Specifically, representatives from two tribal CHS programs stated that more technical assistance from IHS would be helpful, because it is important that the needs of the tribal programs be captured in IHS’s needs assessment. A tribal advocacy group representative noted that some tribes have chosen not to collect deferral and denial data because of the cost burden of doing so.
A representative from a tribal CHS program noted that the added cost of tracking these data was justified by the benefit they provide to IHS’s budget process. In addition, a tribal representative expressed concern that our finding on the accuracy of IHS’s estimate of need could be interpreted to suggest that the actual level of need is lower than what IHS is estimating. In our report, we did not examine whether IHS’s estimate of need over- or underestimates the actual level of unfunded need, but rather found that the estimate is not reliable because of deficiencies in the agency’s oversight of the collection of unfunded services data. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In this report, we examined (1) the extent to which the Indian Health Service (IHS) ensures the data it collects on unfunded services are accurate to determine a reliable estimate of contract health services (CHS) program need, (2) the extent to which federal and tribal CHS programs report having funds available to pay for contract health services, and (3) the experiences of external providers in obtaining payment from the CHS program. To address part of our work for our first two objectives, we administered two surveys—one each to federal and tribal CHS programs. From March 2010 through August 2010, we obtained lists of federal and tribal CHS programs from each area office, from which we identified 66 federal CHS programs and 177 tribal CHS programs. We administered a web-based survey to all of the federal CHS programs from October 2010 through January 2011. In addition, from September 2010 through January 2011, we administered a mixed-mode survey—both web-based and by mail—to all of the tribal CHS programs; this survey was blinded to maintain the anonymity of respondents. To ensure the clarity and precision of our survey questions, we pretested our federal CHS program survey with officials from IHS and our tribal CHS program survey with officials from three tribal health advocacy groups and a tribal health official. We analyzed complete survey data from all 66 federal CHS programs, for a response rate of 100 percent, and 103 of 177 tribal CHS programs, for a response rate of 58 percent. The results from our survey of tribal CHS programs are not generalizable to all tribal CHS programs because we did not receive responses from all tribal CHS programs and because tribal programs vary, given the flexibility tribes have in administering their programs. We relied on the data as reported by the CHS program officials who were identified as the primary contacts for the CHS program and did not independently verify these data or ask IHS to verify them. However, we reviewed all responses for reasonableness and internal consistency. For our survey of federal CHS programs, when necessary, we followed up with the program officials who completed our survey for clarification. Based on these activities, we determined these data were sufficiently reliable for the purpose of our report.
We also conducted site visits to IHS area offices based in Oklahoma City, Oklahoma, and Portland, Oregon, in March and April 2010. During these site visits, we interviewed area office officials and representatives from a total of four federal and eight tribal CHS programs located in those areas. In addition, we interviewed officials from IHS headquarters and each of IHS’s 12 area offices to discuss oversight of the CHS program, and spoke with three tribal health advocacy groups. We also examined IHS oversight, such as the provision of policy and guidance, conducted to ensure that CHS programs consistently and completely record and report unfunded services data. We compared these oversight activities to the standards described in the Standards for Internal Control in the Federal Government and the Internal Control Management and Evaluation Tool. We also reviewed our cost estimating guide to assess procedures for determining a reliable estimate for budgetary purposes. To examine the experiences of external providers in obtaining payment from the CHS program, we interviewed representatives from hospitals and office-based health care providers from selected IHS areas. We selected four areas from which to identify providers based on their fiscal year 2009 per capita CHS program funding and dependency on CHS funds for hospital services. We estimated per capita funding using the agency’s fiscal year 2009 user population estimates and allocation of CHS program funds. To estimate dependency, we used a measure that IHS uses to allocate certain funds to the area offices. It measures whether patients in an area have practical access to IHS-funded federally and tribally operated hospitals. If the patients do not have access to such facilities, then they are considered to be more dependent on the CHS program for hospital services and, therefore, the area receives additional funding. The four areas we selected were Bemidji, Billings, Phoenix, and Oklahoma City, which represent areas that were above or below average for each of our selection criteria. (See table 3.) In fiscal year 2009, the four areas represented 43 percent of the IHS user population and received 37 percent of CHS funding. Within these four areas, we selected 23 providers—16 hospitals and 7 office-based providers—to interview. Most of these providers were identified through our survey of federal CHS programs as those providing the highest volume of care to CHS program users in fiscal year 2009. We also identified providers that interact frequently with CHS programs through our discussions with state hospital associations and a tribal health advocacy group. Given the small number of providers in our sample and our process for selecting them, the results from these interviews are not generalizable to all providers interacting with the CHS program. We asked providers about their experiences obtaining effective and timely communication related to the payment process, such as training or guidance on determining patient eligibility for CHS program payment of services and determining the status of claims, and compared their experiences with the standards described in the Standards for Internal Control in the Federal Government and the Internal Control Management and Evaluation Tool. We asked providers a standard set of open-ended questions and we did not independently validate their reported experiences, but we did discuss many of their comments with IHS officials.
We conducted this performance audit from January 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Indian Health Care Amendments of 1988 established the Catastrophic Health Emergency Fund (CHEF) to meet the medical costs associated with treating catastrophic illnesses or victims of disasters. CHEF is administered centrally within the Indian Health Service (IHS) and reimburses federal and tribal contract health services (CHS) programs on a first-come, first-served basis for CHS program cases with costs exceeding the threshold set annually within the range established by law. Specifically, CHS programs pay for the services and then request reimbursement from IHS for expenses over the threshold, which was $25,000 in fiscal year 2009. In fiscal year 2009, IHS reimbursed 1,223 cases at a total cost of $31 million; in fiscal year 2010, IHS reimbursed 1,747 cases at a total cost of $48 million. The top three diagnostic categories funded in fiscal year 2010 were injuries, cancer, and heart disease. When CHEF funds are depleted, requests for reimbursement are denied by IHS. As part of IHS’s needs assessment for the CHS program, the agency determines the number of CHEF requests for reimbursement that were denied and then uses the actual billed charges that were submitted by CHS programs to determine the cost of these services. In fiscal year 2009, IHS denied 1,065 cases totaling $24 million; in fiscal year 2010, it denied 865 cases totaling $14 million. However, IHS speculated that this may underestimate the need for CHEF reimbursement because additional cases may have qualified, but CHS programs may not have submitted requests for reimbursement once CHEF was depleted before the end of the fiscal year. Of the 66 federal CHS programs we surveyed, 52 reported that they submitted requests for CHEF reimbursement in fiscal year 2009. Of these, 12 reported that they did not continue to submit requests for CHEF reimbursement once the CHS program learned that CHEF funds were depleted. Of the 66 federal CHS programs we surveyed, 14 reported that they did not submit any requests for CHEF reimbursement in fiscal year 2009. The most common reasons they reported for not submitting requests for CHEF reimbursement were that the CHS program did not experience any cases costing over $25,000 (8 of 14 federal CHS programs) and staffing shortages (5 of 14 federal CHS programs). Of the 103 tribal CHS programs that responded to our survey, 46 submitted requests for CHEF reimbursement in fiscal year 2009. Fifty-three of the tribal CHS programs reported that they did not submit requests for CHEF reimbursement. The most common reasons they reported for not submitting requests for CHEF reimbursement were that the CHS program did not experience any cases costing over $25,000 (31 of 53 tribal CHS programs) and tribal programs were unable to pay for the first $25,000 of expenses (13 of 53 tribal CHS programs).
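To make the CHEF mechanics described in this appendix concrete, the following is a minimal sketch, in Python, of how a reimbursement request might be computed against the threshold and a first-come, first-served fund. The $25,000 threshold is the fiscal year 2009 figure cited above; the case costs and fund balance are hypothetical, and treating a request larger than the remaining balance as denied in full is a simplifying assumption.

```python
# Minimal sketch of the CHEF reimbursement mechanics described above.
# The $25,000 threshold is the fiscal year 2009 figure cited in this
# appendix; the case costs and fund balance are hypothetical.

THRESHOLD = 25_000      # fiscal year 2009 CHEF threshold
fund_balance = 100_000  # hypothetical remaining CHEF funds

def chef_request(case_cost: int) -> int:
    """A CHS program pays the full cost of care and may request
    reimbursement only for expenses above the annual threshold."""
    return max(0, case_cost - THRESHOLD)

# Requests are honored first-come, first-served until CHEF is depleted;
# denying a request larger than the remaining balance is a simplification.
for case_cost in (18_000, 60_000, 145_000, 40_000):  # hypothetical cases
    request = chef_request(case_cost)
    if request == 0:
        print(f"${case_cost:,} case: below threshold, no request")
    elif request <= fund_balance:
        fund_balance -= request
        print(f"${case_cost:,} case: reimbursed ${request:,}; "
              f"${fund_balance:,} left in CHEF")
    else:
        print(f"${case_cost:,} case: request for ${request:,} denied")
```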
In addition to the contact names above, Catina Bradley, Martha Kelly, and Suzanne Worth, Assistant Directors; George Bogart; Zhi Boon; William Hadley; Giselle Hicks; Darryl Joyce; Hannah Locke; Sarah-Lynn McGrath; Jasleen Modi; Lisa Motley; Laurie Pachter; and Mario Ramsey made key contributions to this report.
The Indian Health Service (IHS), an agency in the Department of Health and Human Services (HHS), provides health care to American Indians and Alaska Natives. When care at an IHS-funded facility is unavailable, IHS's contract health services (CHS) program pays for care from external providers if the patient meets certain requirements and funding is available. The Patient Protection and Affordable Care Act requires GAO to study the adequacy of federal funding for IHS's CHS program. To examine program funding needs, IHS collects data on unfunded services—services for which funding was not available—from the federal and tribal CHS programs. GAO examined (1) the extent to which IHS ensures the data it collects on unfunded services are accurate to determine a reliable estimate of CHS program need, (2) the extent to which federal and tribal CHS programs report having funds available to pay for contract health services, and (3) the experiences of external providers in obtaining payment from the CHS program. GAO surveyed 66 federal and 177 tribal CHS programs and spoke to IHS officials and 23 providers. Due to deficiencies in IHS's oversight of data collection, the data on unfunded services that IHS uses to estimate CHS program need were not accurate. Specifically, the data that IHS collected from CHS programs were incomplete and inconsistent. For example, 5 of the 66 federal and 30 of the 103 tribal CHS programs that responded to GAO's survey reported that they did not submit these data to IHS in fiscal year 2009. Also, the format of IHS's annual request has not provided the agency with complete information to determine which programs submitted these data. In addition, individual CHS programs reported inconsistencies in how they recorded information about a specific type of unfunded service that IHS uses in its assessment of need. A reliable estimate of need will require complete and consistent data from each of the individual CHS programs. In November 2010, IHS created a workgroup to examine weaknesses in its current data and explore other sources of data to estimate need. IHS officials expect the workgroup to recommend to the IHS Director, by the end of calendar year 2011, that IHS adopt a new method of estimating need. As of September 2011, IHS was continuing to develop this new method, and officials indicated that deferral and denial data would continue to be collected until the agency makes further decisions about its needs assessment methodology. Sixty of the 66 federal and 73 of the 103 tribal CHS programs that responded to GAO's survey reported that in fiscal year 2009 they did not have CHS funds available to pay for all services for which patients otherwise met requirements. Some federal CHS programs reported continuing to approve services for patients when sufficient funds were not available; IHS officials told GAO they were unaware this practice was occurring. In contrast, other federal CHS programs reported using a variety of strategies to help patients receive services outside of the CHS program in order to maximize the care that they could purchase. For example, some federal CHS programs reported helping patients locate free or low-cost health care. Tribal CHS programs reported using a variety of strategies not available to federal CHS programs. For example, 46 of 103 tribal CHS programs that responded to GAO's survey reported supplementing their CHS programs' funding with tribal funds, which are earned from tribal businesses or enterprises.
Most external providers that GAO interviewed described challenges in the CHS program payment process. For example, when patients presented for emergency services, 13 of 23 providers reported challenges determining which services would be approved for payment because, unlike with other payers, they cannot check a patient's eligibility electronically. Eighteen providers noted challenges receiving communications from IHS about CHS policies and procedures related to payment, including few, if any, formal meetings with program staff and a lack of training and guidance. IHS officials acknowledged that the complexity of the CHS program makes provider education important. Most providers said that these challenges contributed to patient and provider burden. GAO recommends that HHS direct IHS to ensure unfunded services data are accurately recorded, CHS program funds management is improved, and provider communication is enhanced. HHS noted how IHS would address the recommendations and described the proposed new method to estimate need. IHS's steps will address some recommendations, but immediate steps are needed to improve the collection of unfunded services data to determine program need.
VA policy is to allocate comparable resources for comparable workloads in its 22 health care networks as an important step in ensuring equitable access to care for the nation’s veterans. To achieve this allocation in its national health care system, VA has used VERA since fiscal year 1997 to prospectively allocate resources to the networks. VERA allocates nearly 90 percent of VA’s medical care appropriation in six categories: complex patient care, basic patient care, equipment, nonrecurring maintenance, education support, and research support. Resources for the first four categories are allocated on the basis of patient workload and account for approximately 96 percent of the resources VERA allocates. Allocations for education support and research support are based on workload measures specific to those activities within the VA health care system. Developed in response to a legislative mandate, VERA was designed to correct regional inequities in resource allocation created by shifts in the veteran population from the northeast and midwest to the south and west (see fig. 1) without a corresponding shift in resources. The resources did not shift before VERA was implemented because resource allocation was based primarily on facilities’ historical expenditures. VA expects that veteran population shifts from the northeast and midwest to the south and west will continue at least through 2020. Two other major changes to VA health care provision accompanied the implementation of VERA as a result of the Veterans’ Health Care Eligibility Reform Act of 1996. The first change was a major shift in VA health care delivery from an inpatient to an outpatient emphasis that was consistent with changes in health care delivery outside of VA. The act eliminated restrictions that previously prevented VA from treating some veterans in outpatient care settings, allowing VA to shift its focus from inpatient to outpatient care delivery. For example, VA no longer had to admit certain veterans to an inpatient setting to make them eligible for outpatient treatment or to receive prosthetic devices, such as crutches. As a result of eligibility reform, VA has been successful in shifting medical care to outpatient settings by taking advantage of advances in medical technology and practices, such as laser and endoscopic surgery and other less invasive surgical techniques. VA has also identified alternatives to inpatient care, such as home-based care, for many chronically ill patients. From fiscal year 1996 through fiscal year 2000, VA closed almost 24,000 acute inpatient beds, a 52 percent reduction systemwide. During this time period, VA’s inpatient admissions decreased and outpatient visits increased from approximately 29 million to 40 million visits, a 36 percent increase systemwide. The second change was the introduction of a veterans’ enrollment system to manage access in relation to available resources due to the expected increase in demand on the VA system as a result of the new eligibility rules. As required by the act, VA established seven priority categories for enrollment. A higher priority for enrollment is given to veterans who have service-connected disabilities, lower incomes, or other statuses such as former prisoners of war. These higher priority enrollees are ranked in priority order from 1 through 6. The lowest enrollment priority is given to veterans not included in priorities 1 through 6, referred to as Priority 7 veterans. These veterans are primarily nonservice-connected veterans with higher incomes.
The act requires VA to restrict enrollment consistent with these enrollment priorities if sufficient resources are not available to provide care that is timely and acceptable in quality to all priority categories. If needed, enrollment restrictions would begin with the lowest priority category. For fiscal year 2002, VA has decided to continue enrolling veterans in all priority categories. VERA has been a key part of VA’s strategy to change its health care system. First, VERA shifted substantial resources among regions reflecting shifts in workload. Second, VERA, in concert with other VA initiatives, has provided an incentive for networks to serve more veterans. VERA has shifted substantial resources from networks located primarily in the northeast and midwest to networks located in the south and west (see fig. 2). VERA shifted approximately $921 million among networks in fiscal year 2001 compared to what allocations would have been if networks received the same proportion of funding they received in fiscal year 1996, the year before VERA was implemented. This included additional resources Congress appropriated from fiscal year 1996 through fiscal year 2001. VERA shifted the most resources—approximately $198 million—to Network 8 (Bay Pines), and VERA shifted the most resources from Network 3 (Bronx)—approximately $322 million. The shift occurred because VERA allocated resources based primarily on patient workload rather than continuing VA’s prior process of incrementally funding facilities based on historical expenditures. VERA’s implementation resulted in 10 of VA’s 22 networks receiving a smaller share of VA’s medical care appropriation in fiscal year 2001 than in fiscal year 1996. However, because VA’s total medical care appropriation rose 22 percent during this period, all but two of these networks received more resources in fiscal year 2001 than in fiscal year 1996. The two networks with fewer resources from fiscal year 1996 to 2001 were Network 1 (Boston) and Network 3 (Bronx), which experienced 1 percent and 10 percent declines, respectively. VA has also used VERA as one component of a larger strategy to improve access to care by increasing the number of veterans treated. Because VERA allocates resources based on workload, it provides incentives for networks to increase the number of veterans treated. The number of veterans treated nationally in VA, in all priority groups, increased from 2.6 million in fiscal year 1996, the year before VERA was implemented, to 3.8 million in fiscal year 2001, an increase of 47 percent. All 22 networks contributed to this increase (see fig. 3). This includes networks from which VERA shifted resources. VA’s reduction in inpatient care, closure of acute care beds, shift in emphasis to less expensive outpatient care delivery, and a 22 percent increase in VA’s annual medical care appropriation since fiscal year 1996 have provided additional capacity allowing networks to increase workloads. VERA’s design promotes the allocation of comparable resources for comparable workloads to VA’s 22 health care networks consistent with principles used by other payers, such as the Medicare and Medicaid programs, and expert views on the design of payment systems. VERA allocates resources based primarily on networks’ patient workloads. To ensure the comparability of networks’ resources with their workloads, VERA adjusts these allocations for factors beyond networks’ control, namely patient health care needs and certain local costs.
By adjusting allocations only for costs beyond a network’s control, VERA holds networks accountable for providing services efficiently. Also, VERA provides protection for patients from the risk that a health care network would not be able to provide services because its expenditures exceed available resources. VERA allocates resources primarily on the basis of network patient workload. Each network receives an allocation based on a predetermined dollar amount per veteran served. This is consistent with how other federal health care payers allocate resources to managed care plans to care for their patient workload. Because VERA uses workload to allocate resources, networks that have more patients generally receive more resources than networks that have fewer patients (see table 1). However, allocation adjustments result in some situations in which networks with fewer patients receive higher total and per patient allocations. For example, Network 3 (Bronx) received a larger VERA allocation in fiscal year 2001 than Network 9 (Nashville) even though Network 3 (Bronx) had a smaller workload. By receiving funding based on workload, VA’s health care networks have an incentive to focus on aligning facilities and programs to attract patients rather than focusing on maintaining existing operations and infrastructure regardless of the number of patients served. VERA seeks to ensure that comparable resources are allocated for comparable workloads by adjusting for differences in networks’ patient health care needs and certain local costs in calculating networks’ allocations. Without these adjustments, networks with justifiably higher costs could face pressure to compromise access to care or lower health care quality, while networks with lower costs could receive more resources than needed. To prevent this problem, VERA, like other federal health care payment systems, makes adjustments to its per patient allocations or capitation amounts. VERA adjusts for patient health care needs—case mix—by first classifying patients into categories by overall level of health care need and then by setting capitation amounts for each of these patient categories. VERA classifies patients into one of three categories according to the level of health care needs and associated costs. The first category is complex care, which includes patients who generally require significant high-cost inpatient care as an integral part of their rehabilitation or functional maintenance, and is about 4 percent of VA’s workload. This category includes most patients in VA’s special disability programs such as those with spinal cord injuries and serious mental illness. The second category is basic vested care, which includes patients who have relatively routine health care needs and are principally cared for in an outpatient care setting. These patients—84 percent of VA’s workload—rely primarily or completely on VA for meeting their health care needs, may require short-term inpatient admissions, and typically require significantly fewer resources than complex care patients. The third category is basic non-vested care, which is 12 percent of VA’s workload. This category includes patients who also have relatively routine health care needs but receive only part of their care through VA, are less costly to VA than basic vested patients, and have not undergone a comprehensive medical evaluation by a VA practitioner.
The adjustments to capitation amounts for each category reflect whether patients in a category are more or less costly than patients in another category. These adjustments, or case-mix weights, determine what proportion of VERA resources will be allocated to networks to care for patients in each case-mix category, such as complex care. As a result, VERA’s patient case-mix adjustment provides more funding to networks with greater proportions of complex care patients. For example, if two networks have the same number of patients but one has more complex care patients, it will receive a greater allocation because the VERA case-mix weight for complex care is higher. In addition, VERA adjusts for uncontrollable geographic price differences in the resources it allocates. These differences result primarily from variations in federal employee pay rates in different parts of the country. VERA makes this adjustment by applying a price adjustment factor to each network’s allocation. The adjustment lowers the VERA allocation for networks located in lower cost areas and raises the allocation for networks located in higher cost areas. In fiscal year 2001, Network 8 (Bay Pines) had the largest decrease resulting from the geographic price adjustment—2.8 percent. Network 21 (San Francisco) had the largest increase resulting from the geographic price adjustment—6.3 percent. Through fiscal year 2001, this adjustment was for services provided only by VA employees. Beginning in fiscal year 2002, VA expanded the geographic price adjustment to all VERA allocations by including contract labor costs and contract nonlabor purchases, such as energy. VERA’s allocation of resources based on workload with adjustments only for costs beyond the networks’ control aims to promote equity and efficiency. To promote equity, VERA adjusts network allocations by case mix and geographic price to standardize measures of workload and resources so that each network receives comparable allocations for comparable workloads. To create an efficiency incentive, VERA provides fixed capitation amounts for patient categories that are the same for each network and are intended to reflect VA’s average costs instead of historical local costs. Using fixed capitation amounts is consistent with how other health care payers provide managed care plans with an incentive to operate efficiently by placing them at risk if their expenses exceed the payment amount. VERA also provides protection for patients against the risk that a health care network would not be able to provide services because its expenditures exceed available resources. VERA does this annually through the National Reserve Fund, which provides supplemental resources to networks when they have difficulty operating within their available resources. VA’s National Reserve Fund is used to cover network requests for supplemental allocations over and above networks’ annual VERA allocations and other sources of revenue. For fiscal years 1999 through 2001, VA has set the National Reserve Fund amount at $100 million using a combination of annual and carry-over funds. Since fiscal year 1999, resources distributed through the National Reserve Fund have averaged approximately 1 percent of total VERA allocations and supplemented VERA allocations in six networks. Although VERA’s overall design is a reasonable approach to allocate resources, we identified weaknesses in its implementation.
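Before turning to those weaknesses, the following minimal sketch pulls together the mechanics just described: a fixed capitation amount per patient in each case-mix category, summed over a network's workload and scaled by a geographic price factor. The basic vested and complex care capitation amounts are fiscal year 2001 figures cited elsewhere in this report; the basic non-vested amount, the workload counts, and the two example price factors (bracketing the 2.8 percent decrease and 6.3 percent increase noted above) are illustrative assumptions.

```python
# Minimal sketch of a VERA-style allocation: capitation per patient by
# case-mix category, adjusted by a geographic price factor. The basic
# vested ($3,126) and complex ($42,765) amounts are fiscal year 2001
# figures cited in this report; the basic non-vested amount and the
# workload counts are hypothetical.

CAPITATION = {
    "complex": 42_765,
    "basic_vested": 3_126,
    "basic_nonvested": 500,  # hypothetical
}

def network_allocation(workload: dict[str, int], price_factor: float) -> float:
    """Sum fixed capitation amounts over the workload, then adjust the
    total for uncontrollable local price differences."""
    base = sum(CAPITATION[cat] * count for cat, count in workload.items())
    return base * price_factor

# Hypothetical 100,000-patient network mirroring the systemwide mix:
# about 4 percent complex, 84 percent basic vested, 12 percent non-vested.
workload = {"complex": 4_000, "basic_vested": 84_000, "basic_nonvested": 12_000}

# Price factors bracketing the fiscal year 2001 adjustments cited above.
for label, factor in (("lower-cost area", 0.972), ("higher-cost area", 1.063)):
    print(f"{label}: ${network_allocation(workload, factor):,.0f}")
```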
First, VERA’s calculation to ensure the comparability of networks’ resources with their workloads and their patient health care needs is not as accurate as it could be. Second, the process for providing supplemental resources through the National Reserve Fund does not provide adequate information to determine the extent to which networks need supplemental funding as a result of potential problems in VERA, network inefficiency, or other factors. VERA’s calculation of networks’ workloads excludes most higher income veterans without a service-connected disability—a growing proportion of VA’s users. In addition, VERA does not account for variation in patients’ health care needs and related costs among networks as accurately as it could. When VERA was established, the number of higher income veterans treated without a service-connected disability was small—approximately 108,000, or about 4 percent of the total number of veterans treated in fiscal year 1996. Because of their small numbers and the expectation that collections from copayments, deductibles, and third-party insurance reimbursements would cover the majority of their costs, VERA did not include most of these higher income veterans in basic care workload. However, the number of these veterans treated has increased greatly in recent years, and they represent about 95 percent of VA’s Priority 7 health care enrollment category. The number of Priority 7 veterans treated increased to 827,722 users (see fig. 4). Priority 7 veterans comprised 22 percent of VA’s total fiscal year 2001 patient workload. This rapid growth in the number of Priority 7 veterans treated has occurred even though networks do not receive additional VERA allocations for the majority of this workload and collections covered only 24 percent of Priority 7 veterans’ costs in fiscal year 2000. Networks pay for most of the costs of Priority 7 services through VERA allocations made mostly on the basis of non-Priority 7 workload. The omission of these veterans from VERA’s workload calculation creates an inequitable allocation of resources across networks because networks’ proportion of Priority 7 veterans treated varies (see fig. 5). For example, in fiscal year 2001, Priority 7 users were 32 percent of Network 14’s (Lincoln) total veterans treated compared to the VA average of 22 percent. Consequently, networks with a higher proportion of Priority 7 veterans, like Network 14 (Lincoln), have fewer resources per patient to treat veterans than networks with a lower proportion of Priority 7 veterans. VA assessed the possibility of including Priority 7 veterans in VERA’s basic vested workload. However, it had concerns that including Priority 7 veterans in VERA workload would create a possible incentive to serve higher income veterans at the expense of service-connected and low-income veterans. VA considered providing a capitation amount for Priority 7 veterans that was less than the average cost of their care. However, rather than including Priority 7 veterans in the workload calculations with a reduced capitation amount, VA decided to pursue other options. One of the options VA is examining is the effect of changing the income threshold used to classify enrolled veterans. Specifically, the current uniform national income standard used in part for determining Priority 7 status would be replaced with a regional income standard to account for regional differences in the cost of living.
This would change the status of some Priority 7 veterans in high-cost regions to low-income veterans—who are included in VERA’s workload calculation. Although adopting a regional income threshold could improve the equity of resource allocation, the alignment of workload with resources would still be compromised if some networks continue to have disproportionate numbers of the remaining Priority 7 veterans. Inclusion of Priority 7 veterans in VERA basic vested care workload would increase the comparability of resources among networks per patient treated. This would move resources from networks with a smaller proportion of Priority 7 veteran workload to networks with a larger proportion of Priority 7 veteran workload. If, for example, Priority 7 basic vested veterans—those who rely primarily or completely on VA for meeting their health care needs—were capitated at half the average national cost of their care, as VA had considered, this would have increased the allocation to 9 networks in the northeast and midwest and decreased the allocation to 10 networks in the south and west in the fiscal year 2001 VERA allocation (see fig. 6). Although VERA adjusts network allocations for cost differences resulting from the mix of patients networks serve, it does not do so as accurately as it could. This is because the case-mix weights assigned to each category of patients are based on historical cost data from fiscal year 1995 and VERA uses only three case-mix categories to allocate resources.

Case-Mix Weights Based on Historic Data Do Not Reflect Changes in VA Health Care

VERA uses case-mix weights based on VA health care expenditures in fiscal year 1995 to allocate resources for basic and complex care workload. These weights are determined by the share of resources spent on basic and complex care in that year—61.6 percent of expenditures for basic care and 38.4 percent for complex care. For the VERA allocation in fiscal year 2001, for example, $6.2 billion was available for complex care (38.4 percent) and $10.0 billion was available for basic care (61.6 percent). These case-mix weights, however, have not been updated to reflect the health care that VA is providing. Because of VERA, VA Eligibility Reform, and other VA initiatives, the number of basic care patients has increased since fiscal year 1995 while the number of complex care patients has remained relatively constant. The rising proportion of basic care patients has contributed to a greater proportion of VA expenditures for basic care and a smaller proportion of expenditures for complex care. By fiscal year 1999, 66.9 percent of expenditures were for basic care and 33.1 percent were for complex care. Adjusting capitation amounts to reflect current expenditures for basic and complex care would result in an approximately 9 percent increase in the basic care capitation amounts and about a 14 percent reduction in the complex care capitation amount (see table 2). VA considered updating the weights for basic and complex care based on the most recent available costs. VA officials told us they have maintained the fiscal year 1995 case-mix weights because using more current expenditure data, which would lower the allocation for complex care and increase the allocation for basic care, could be seen as a weakening of VA’s commitment to serve veterans with complex care needs, such as those with spinal cord injuries or serious mental illness.
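The rough magnitudes of those adjustments can be checked directly from the expenditure shares cited above: holding the workload in each category constant (a simplifying assumption), each capitation amount would move in proportion to the change in its category's share of expenditures. A quick Python check:

```python
# Check the adjustment magnitudes cited above: capitation amounts move
# in proportion to each category's share of expenditures, holding the
# workload in each category constant (a simplifying assumption).

fy1995_share = {"basic": 0.616, "complex": 0.384}  # current VERA weights
fy1999_share = {"basic": 0.669, "complex": 0.331}  # most recent shares

for category in fy1995_share:
    change = fy1999_share[category] / fy1995_share[category] - 1
    print(f"{category}: {change:+.1%}")
# basic:   +8.6%, the "approximately 9 percent increase" above
# complex: -13.8%, the "about a 14 percent reduction" above
```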
However, continuing to base VERA case-mix weights on fiscal year 1995 expenditures has not ensured that resources were spent on complex care patients. VERA, like other allocation systems, provides networks with resources but it does not require networks to spend resources in a particular way. Rather, VA program guidance and network management decisions determine how resources are spent. Eighteen of 22 networks spent less for complex care than they received based on their complex care workload in fiscal year 2000, the most recent year for which expenditure data are available (see table 3). As a result, the proportion of VA’s total expenditures on complex care has declined since fiscal year 1995 even though the proportion of VERA’s allocation for complex care has remained constant. However, VA has decided to defer action on using the most recently available costs pending further study of how costs and workload vary in complex care categories among networks. Aligning VERA case-mix weights proportionally with current expenditures is one way to better reflect how health care is delivered in VA. Doing so, however, assumes that expenditures alone are an appropriate measure of health care need. This is not always the case. For example, if health care in a particular case-mix category is not being provided efficiently, using expenditure data alone would result in a higher than necessary case-mix weight. This would lead to excess resource allocation for the case-mix category. On the other hand, using expenditure data only would result in a lower case-mix weight than appropriate if health care providers are not using more expensive treatments when needed to provide clinically appropriate care. This would lead to insufficient resource allocation for the case-mix category. As a result, setting case-mix weights may begin with consideration of current expenditures, but ultimately must use the best available data to reflect efficiency and clinically appropriate care.

The Small Number of Case-Mix Categories in VERA Does Not Accurately Adjust for Network Differences in Veterans’ Health Care Needs

VERA uses only three case-mix categories—complex, basic vested, and basic non-vested—to adjust for differences in health care needs and related resource requirements for veterans. These three case-mix categories are based on 44 patient classes VA uses to classify its patients. Using all 44 patient classes as case-mix categories would more accurately adjust for differences in needs and related resource requirements because the average costs of patients in the classes within the VERA categories vary significantly and can be dramatically higher or lower than the capitation amounts for the current three case-mix categories (see table 4). For example, the national average patient cost for domiciliary care—one type of complex care—in fiscal year 2000 was roughly $17,000 less than the $42,153 capitation amount for complex care, while the average patient cost for ventilator-dependent care—another type of complex care—was about $121,000 more than the complex care capitation amount. Our analysis shows that considerable variation exists among networks in the type of workload represented by VERA’s three case-mix categories, which limits VERA’s ability to allocate comparable resources for comparable workload. VERA provides more resources to networks, relative to their costs, that have proportionately more workload in less expensive patient classes, such as domiciliary care, than other networks.
VERA provides fewer resources to networks, relative to their costs, that have more workload in more expensive patient classes, such as ventilator-dependent care. Using VA’s current 44 patient classes rather than the three case-mix categories VERA used in fiscal year 2001 would result in a significant movement of resources for some networks because of the variation by network in the type of workload (see fig. 7). This would move resources from networks having proportionately fewer patients in expensive patient classes to networks having proportionately more patients in expensive patient classes, resulting in an average movement of resources of 2 percent per network. In 1998, VA conducted a similar analysis using 54 patient classes for allocation and found that this would have moved a significant amount of resources among networks, an average of 4 percent per network. The analysis further concluded that using only 7 of the 54 classes achieved nearly the same result. A 1998 Price Waterhouse analysis of VERA also concluded that additional case-mix categories would improve the equity of resource allocation. VA officials told us they have not introduced more than three case-mix categories because VA wants VERA to be easily understood by stakeholders. While using more case-mix categories can increase the accuracy of allocations, the literature and experts we consulted suggest that a case-mix classification system needs to address two concerns in order to prevent providers from receiving inappropriately high levels of resources. First, having a larger number of case-mix categories may provide more opportunities for networks to inappropriately classify patients to receive the highest capitation amount. However, increasing the VERA case-mix categories from three to a higher number, but not necessarily 44, may strike an appropriate balance between improved allocation and the need to control for potential inappropriate coding of patients into higher capitation categories. Second, basing case-mix categories in part or in whole on utilization of services provides the incentive to overuse services. For instance, in VERA, a patient who receives nine home-based primary care visits is categorized in basic vested care with a capitation amount of $3,126; however, a patient who receives 10 visits is categorized in complex care, which has a capitation amount of $42,765. Consequently, if networks increase the number of such visits, they can increase their funding more than 13-fold. Currently, 22 of VA’s 44 patient classes incorporate utilization factors in classifying patients. These utilization factors are found primarily in the patient classes for extended and residential long-term care and chronic mental health services and for classifying basic non-vested patients. Replacing utilization criteria with diagnosis and functional measures where possible in VERA’s case-mix categories would reduce the incentive to overuse services, especially for complex care patients. Most VERA complex care patient classes are based in part on some measure of service utilization because of the difficulty in predicting the costs of these classes based solely on diagnostic data. Because complex care costs are high and unusually difficult to predict, the literature and experts we consulted suggest that it is prudent to partially insure networks from such unpredictable costs.
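The home-based primary care example above illustrates a classification cliff, sketched schematically below. The two capitation amounts are the fiscal year 2001 figures cited in this report; reducing classification to a bare visit-count threshold is a deliberate simplification, since the actual patient classes apply additional criteria.

```python
# Schematic sketch of the utilization cliff described above. The two
# capitation amounts are fiscal year 2001 figures cited in this report;
# classifying on a bare visit count is a simplification of the real
# patient classes, which apply additional criteria.

BASIC_VESTED, COMPLEX = 3_126, 42_765

def capitation_for(home_care_visits: int) -> int:
    """Ten or more home-based primary care visits tips the patient
    from basic vested care into complex care."""
    return COMPLEX if home_care_visits >= 10 else BASIC_VESTED

print(capitation_for(9))   # 3126
print(capitation_for(10))  # 42765
print(f"{capitation_for(10) / capitation_for(9):.1f}x jump in funding")  # 13.7x
```

As the passage above notes, it is this combination of utilization-sensitive classification and hard-to-predict complex care costs that makes a separate insurance-style mechanism attractive.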
Therefore, it may be advantageous to use a mechanism to help providers, such as VA’s health care networks, cope with their highest-cost complex care patients by providing additional resources for their care based on a formula. If VA used such a funding mechanism, networks with complex care patients in the 99th percentile of cost, for example, would receive the network complex care capitation amount plus a predetermined percentage of the cost above the capitation amount. The additional funds above the capitation amount would partially offset the network’s expenses for high-cost complex care patients. Resources for this funding mechanism could be set aside as part of the National Reserve Fund. Currently, VA is exploring alternative case-mix classification systems, such as Diagnostic Cost Groups (DCG), that could provide more case-mix categories, classify patients based on nonutilization criteria, and better predict costs for acute care patients. DCGs place patients into different groups based on patient demographics and medical diagnoses. However, VA researchers have found that the DCG diagnosis-based system may not be sufficient to allocate resources for certain complex care patient classes. Predicting the costs of many complex care patients is problematic because complex care patients, including those with mental illness and those in extended care settings, may have the same diagnosis but may need very different levels of treatment and support. VA is studying the possibility of supplementing a diagnosis-based system with utilization information in order to better predict the costs of complex care patients. Implementing changes to VERA could better align resources with workload for VA’s 22 health care networks by addressing case-mix and workload issues. Incorporating all 44 of the current patient classes as case-mix categories, updating case-mix weights to reflect the current distribution of expenditures, and funding Priority 7 basic vested veterans at 50 percent of costs would better align resources and workloads. Incorporating the 44 case-mix categories would have the largest effect on resource allocation. The combined effect of these changes would provide additional resources to some northeastern and midwestern networks and reduce resources for some southern and western networks (see fig. 8). The allocation change represents about 2 percent of networks’ budgets, but is more substantial for some networks. Network 1 (Boston) would get approximately a 5 percent increase and Network 20 (Portland) approximately a 5 percent decrease. These changes would better align approximately $200 million with workload. VA has focused its process for administering the National Reserve Fund almost solely on providing supplemental resources to networks to get through a fiscal year but has not included in this process an examination of the root causes of networks’ needs for supplemental resources. To operate the National Reserve Fund, VA, for the last 3 fiscal years (1999–2001), has set aside about 1 percent of the VERA allocation in anticipation of networks requiring supplemental resources. No networks requested supplemental funding in fiscal years 1997 and 1998. However, six networks have requested supplemental funding from fiscal year 1999 to fiscal year 2001 (see table 5). Supplemental allocations to four networks in fiscal year 2001 totaled $220 million. Officials in 10 of 22 networks told us, in June 2001, that they anticipated requesting supplemental funding at least once from fiscal years 2002 through 2006.
VA has used three different approaches to determine whether networks requesting additional resources would receive supplemental allocations. In fiscal years 1999 and 2000, VA created teams consisting of staff from networks not requesting supplemental funds. These teams reviewed networks’ funding requests and made recommendations regarding the amount of supplemental allocations and efficiency initiatives networks needed to implement in order to close gaps between their expected expenditures and VERA allocations. In fiscal year 2001, responding in part to criticisms of network staff review of allocation requests, VA replaced the team review process with a review by VA headquarters officials. In this process, VA headquarters officials reviewed requests for supplemental resources from networks anticipating a budget shortfall for the year. In fiscal year 2002, VA created a team to examine networks’ need for supplemental resources. This team consisted of headquarters and network officials, including representatives from networks that requested supplemental allocations and those that did not. None of VA’s approaches to supplemental allocations has systematically evaluated the extent to which certain factors caused networks to require supplemental allocations. In fiscal years 1999 and 2000, VA teams conducted site visits and reviewed financial and clinical information that requesting networks provided. The teams made recommendations for supplemental allocations in order to prevent network shortfalls. However, VA could not determine to what extent supplemental resources were needed due to imperfections in VERA, lack of network efficiency, inability to predict complex care patients’ costs, or lack of managerial flexibility to close or consolidate programs or facilities, because the teams did not collect the information needed to make this determination. Although the evaluation process changed in fiscal year 2001, VA was still unable to make such a determination. For example, in fiscal year 2001, about half the supplemental resources VA provided to networks were for “inflation and miscellaneous program adjustments.” All networks experienced inflation, however, and VA did not distinguish between the level of inflation in networks that requested supplemental resources and those that did not. VA officials told us that the changes for fiscal year 2002 will still not allow them to determine the extent to which various factors cause networks to need supplemental resources. As a result, VA cannot provide adequate assurance that supplemental allocations are appropriate or take needed action to correct problems that cause networks to have budget shortfalls. One of the corrective actions VA could take is to assist networks that experience budget shortfalls because of an unusually large number of high-cost complex care patients in a given year. This is important because the methods used to predict health care costs are not as precise in predicting the costs of many complex care patients as they are in predicting the costs of many basic care patients. As a result, some networks’ budget shortfalls could be explained in part by a higher than expected number of high-cost complex care patients. To address this risk, some other payers have established funding mechanisms to address the costs of these very expensive patients. For example, some state Medicaid programs have used a mechanism called stop-loss or reinsurance to reimburse managed care plans for certain benefits that exceed a specified expense limit.
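The following is a minimal sketch of such a stop-loss computation, in which the payer reimburses a predetermined share of a patient's costs above a specified expense limit. The $100,000 attachment point and the 80 percent reimbursement share are hypothetical, not figures drawn from VA or any state Medicaid program.

```python
# Minimal sketch of a stop-loss (reinsurance) payment: reimburse a
# predetermined share of per-patient costs above a specified limit.
# Both parameters below are hypothetical.

EXPENSE_LIMIT = 100_000   # hypothetical per-patient attachment point
SHARE_ABOVE_LIMIT = 0.80  # hypothetical share reimbursed above the limit

def stop_loss_payment(patient_cost: float) -> float:
    """Supplemental payment for one high-cost patient."""
    return max(0.0, patient_cost - EXPENSE_LIMIT) * SHARE_ABOVE_LIMIT

for cost in (60_000, 150_000, 400_000):  # hypothetical patient costs
    print(f"patient cost ${cost:,}: stop-loss payment "
          f"${stop_loss_payment(cost):,.0f}")
```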
If VA were to use a similar funding mechanism as part of the National Reserve Fund, it could help protect networks from budget shortfalls by providing additional resources above the capitation amount for complex care patients who reach a predetermined level of cost. VA is studying ways to address the risk that a network may have unusually high-cost patients in a given year that are not predicted in a resource allocation model.

VERA's overall design is a reasonable approach to resource allocation and has helped promote more comparable resource allocations for comparable workloads in VA. This approach is reasonable because VERA allocates resources primarily on the basis of workload and attempts to adjust network resources for factors beyond the control of network management, encourage efficiency, and provide protection to patients against network budget shortfalls. The implementation of this approach resulted in VA's shifting resources to more closely mirror shifts in the veteran population from the northeast and midwest to the south and west.

Although VERA's design is a reasonable approach to resource allocation, VA could correct weaknesses in VERA's implementation to improve the comparability of resource allocations with networks' workloads. One of VERA's implementation weaknesses is that it does not include most Priority 7 veterans in its workload even though the Priority 7 workload now represents about one-fifth of patients served. If the number of Priority 7 veterans VA treats continues to increase, this may create even more serious inequities in the future. VERA's adjustment for differences in patient health needs across networks emphasizes simplicity at the cost of accuracy. Maintaining only three case-mix categories in VERA does not adequately account for important variations in health care needs among networks. Increasing the number of VERA case-mix categories from three would better account for the variation in health care needs across networks and would have the largest effect on resource allocation. In addition, changes are needed to update VERA's case-mix weights to better reflect how VA health care is now delivered. Updating case-mix weights may begin with using current expenditure data, but additional consideration should be given to using the best available data on appropriate clinical care and efficiency.

In addition to these weaknesses, VA has not used the supplemental funding process to improve VERA allocations and the management of VA's resources. Although the amount of resources provided to networks through the supplemental funding process has continued to increase, VA has not been able to determine the relative contribution to the need for supplemental resources of factors such as imperfections in VERA, network inefficiency, inability to predict complex care costs, or lack of managerial flexibility to close or consolidate programs or facilities. An important factor that other health care payers have identified and accounted for, and that may contribute to VA network budget shortfalls, is the inability to accurately predict the cost of complex care patients. Other payers have addressed this risk by using a funding mechanism to partially offset the unanticipated costs of such patients.
Because VA has not identified the relative contribution of this factor and other factors that could cause network budget shortfalls, VA is unable to provide assurance that the supplemental funding is appropriate or to take needed action to correct problems that cause networks to have budget shortfalls. Making changes to address weaknesses in VERA will add some complexity to how VA allocates resources. Doing so, however, will better align the allocation of approximately $200 million with workload.

To continue to improve the allocation of comparable resources for comparable workloads through VERA, we recommend that the secretary of veterans affairs direct the under secretary for health to (1) better align VERA measures of workload with actual workload served regardless of veteran priority group; (2) incorporate more categories into VERA's case-mix adjustment; (3) update VERA's case-mix weights using the best available data on clinical care and efficiency; (4) determine in the supplemental funding process the extent to which different factors cause networks to need supplemental resources, and take action to address limitations in VERA or other factors that may cause budget shortfalls; and (5) establish a mechanism in the National Reserve Fund to partially offset the cost of networks' highest cost complex care patients.

In comments on a draft of this report, VA agreed with our conclusions that VERA's design is a reasonable approach to allocate resources commensurate with workloads and that VERA, in concert with other VA initiatives, has provided an incentive for VA to serve more veterans. VA also acknowledged the opportunities for improvements in VERA's implementation that we identified and concurred with our recommendations. VA's comments are in appendix II.

VA concurred with our workload and case-mix recommendations, recognizing the substantial trend in Priority 7 workload expansion and the case-mix limitations of having only three pricing groups within VERA. VA anticipates that the distribution of an expected fiscal year 2002 supplemental appropriation will consider the Priority 7 workload, but as of February 22, 2002, Congress had not provided VA with the supplemental appropriation it anticipates. Further, VA is evaluating the appropriateness of expanding the number of VERA price groups to include corresponding updates of case-mix weights, but it will not make a decision about these potential fiscal year 2003 VERA modifications until September 2002. In its comments, VA also indicated that it plans to wait for further study of VERA's workload and case-mix measures to determine whether all Priority 7 workload and case-mix refinements should be incorporated in the fiscal year 2003 VERA model. Given the extensive study of most of these issues already conducted by VA and others, we encourage VA to implement our recommended VERA workload and case-mix improvements in its fiscal year 2003 allocations to networks and to further refine these improvements in the future as needed. Delaying these needed improvements to VERA means that approximately $200 million will be allocated annually in a manner that does not align workload and resources as equitably as possible among networks.

VA also concurred with our recommendation to determine in the supplemental funding process the extent to which different factors cause networks to need supplemental resources, but the actions VA discussed to improve the supplemental funding process do not address our recommendation.
VA used a new supplemental adjustment process in fiscal year 2002 to better identify different factors that cause networks to require supplemental resources. However, this process does not identify the root causes of a network's need for additional resources, as we recommended. Specifically, VA's new supplemental process does not provide VA with information on the relative contributions of specific factors to network shortfalls, such as network inefficiency, imperfections in VERA, and the inability to predict complex care costs. Until VA implements our recommendation, it cannot provide assurance that supplemental resources are appropriate or take needed actions to reduce the likelihood of network shortfalls in the future.

In addition, VA's discussion of actions for establishing a mechanism in the National Reserve Fund to partially offset the cost of networks' highest cost complex care patients does not fully address our recommendation. VA stated that the resources it distributed to five networks through the fiscal year 2002 supplemental adjustment process are expected to meet these networks' supplemental funding needs, including the cost of their highest cost patients. To address our recommendation, however, VA would have to identify individual complex care patients with unexpectedly high costs over the course of the fiscal year and provide stop-loss coverage for such patients to each network. VA's current process does not do this. However, as we have noted, ongoing VA studies could develop ways to provide stop-loss coverage to networks for unpredictable high-cost complex care patients. Until VA establishes such a funding mechanism, some networks may experience budget shortfalls as a result of these unpredictable complex care costs.

We are sending copies of this report to the secretary of veterans affairs, interested congressional committees, and other interested parties. We will make copies of the report available to others upon request. If you or your staffs have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix III.

We reviewed the Department of Veterans Affairs (VA) resource allocation for fiscal years 1997 through 2001 to (1) describe the effect the Veterans Equitable Resource Allocation (VERA) system has had on network resource allocations and workloads, (2) assess whether VERA's design is a reasonable approach to resource allocation, and (3) identify weaknesses in VERA that may limit VA's ability to allocate comparable resources for comparable workloads. We worked with VA officials from the Resource Allocation and Analysis Office to obtain documents and data on how VERA works and how VERA has changed since fiscal year 1997. We also relied on other VA officials for our assessment of VERA, including officials from the Office of the Under Secretary for Health, the Office of the Assistant Deputy Under Secretary for Health, the Office of the Chief Financial Officer, the Office of Quality and Performance, the Office of Policy and Planning, the Spinal Cord Injury Strategic Healthcare Group, the Geriatrics and Extended Care Strategic Healthcare Group, the Mental Health Strategic Healthcare Group, the Health Services Research and Development Service, and the Northeast Program Evaluation Center. In addition, we interviewed officials from veteran service organizations and payment system experts outside of government.
We obtained information on VERA's effect and how it could be improved through interviews and documents from VA's networks. We visited five network offices: Albany (2), Bay Pines (8), Bronx (3), Lincoln (14), and Minneapolis (13). We also conducted telephone interviews with officials in three additional networks: Denver (19), Kansas City (15), and Phoenix (18). We chose these networks because they were geographically diverse and had different financial experiences under VERA. We also obtained information from network officials about potential improvements to VERA, including adding basic vested care Priority 7 users, basing VERA case-mix weights on more current expenditures, using more categories to adjust for case-mix differences, and other factors. We had follow-up telephone interviews with network officials in Bronx (3), Minneapolis (13), and Lincoln (14) regarding fiscal year 2001 supplemental resources from the National Reserve Fund. We conducted an electronic mail survey to obtain the input of all 22 network directors about VERA. We also obtained information on network directors' anticipation of future supplemental funding requests.

To assess the reasonableness of VERA as an approach to resource allocation, we performed a literature review using works published primarily within the last 5 years. We searched the following databases: MEDLINE, ABI/Inform Global, and Econlit. We also relied on publications from other federal agencies. We focused our search on finding information on similar health care payment systems, case-mix adjustment for acute and extended care populations, and managing risk for mental health and special care populations.

To assess the effect VERA has had on network resource allocations and workload, we identified how resources have shifted among regions and how veterans' access to care, as measured by the number of veterans treated, has increased. We calculated the resources VERA shifted by projecting what allocations would have been in fiscal year 2001 if networks had received the same proportion of funding in that year that they received in fiscal year 1996, the year preceding VERA's implementation. We calculated the resource shifts by subtracting networks' actual fiscal year 2001 VERA allocations from their projected allocations. To calculate the total amount of resources VERA shifted, we summed the absolute value of networks' net gains or losses and divided by two. We divided the total by two to avoid double counting, because a dollar transferred to one network is the same dollar transferred from another. (A short sketch of this calculation appears below.)

To determine weaknesses in VERA, we first examined our more than 10 years of work reviewing VA's resource allocation processes. In addition, we relied on external evaluations of VERA completed by Price Waterhouse, LLP; The Lewin Group, Inc.; AMA Systems, Inc.; and the RAND Corporation. We constructed simulation models to estimate the effect in fiscal year 2001 of (1) funding basic vested care Priority 7 patients, (2) basing the case-mix weights on current expenditures, (3) using all 44 VERA patient classes to allocate network resources, and (4) combining all three of these changes. VA provided workload and expenditure data from fiscal years 1996 through 2000 and the actual VERA allocations received by networks during fiscal years 1997 through 2001. These data were obtained from VA's Office of the Chief Financial Officer, Allocation Resource Center, and Office of the Assistant Deputy Under Secretary for Health.
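The resource-shift calculation described above can be expressed in a few lines. The sketch below uses three hypothetical networks and invented dollar figures in place of the actual 22 networks; only the method follows the description in this appendix.

```python
# Sketch of the resource-shift calculation, with hypothetical figures
# (in millions of dollars) for three networks standing in for all 22.

fy1996_alloc = {"A": 900.0, "B": 700.0, "C": 400.0}
fy2001_alloc = {"A": 820.0, "B": 730.0, "C": 450.0}

total_1996 = sum(fy1996_alloc.values())
total_2001 = sum(fy2001_alloc.values())

# Project each network's fiscal year 2001 allocation had it kept its
# fiscal year 1996 share of total funding.
projected = {n: a / total_1996 * total_2001 for n, a in fy1996_alloc.items()}

# Shift per network: projected allocation minus actual allocation.
shift = {n: projected[n] - fy2001_alloc[n] for n in projected}

# Total shifted: sum of absolute gains and losses, divided by two to
# avoid double counting (a dollar gained by one network is the same
# dollar lost by another).
total_shifted = sum(abs(s) for s in shift.values()) / 2

print(shift)          # {'A': 80.0, 'B': -30.0, 'C': -50.0}
print(total_shifted)  # 80.0
```

In this invented example, network A would have received $80 million more under its 1996 share, exactly offsetting the $30 million and $50 million gained by networks B and C, so the total shifted is $80 million.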
To estimate the effect on VERA allocations of funding basic vested care Priority 7 patients, we used the total unduplicated number of basic vested care Priority 7 veterans served for fiscal years 1997, 1998, and 1999. In addition, we assumed that these patients would be funded at 50 percent of the national average cost, or $849, in fiscal year 1999. In our simulation, we chose to fund them at 50 percent of the national average cost based on documentation from a prior recommendation by the Veterans Health Administration Policy Board. Funding these patients at less than full cost lessens the incentive for networks to serve more Priority 7 veterans.

To estimate the effect on network allocations of basing case-mix weights on current expenditures, we compared VERA's fiscal year 2001 allocations made on the basis of fiscal year 1995 expenditure data to what fiscal year 2001 allocations would have been if they were based on fiscal year 1999 expenditure data. The fiscal year 1999 expenditure data were the most recent available for case-mix weight calculations for the fiscal year 2001 VERA allocation. Using fiscal year 1999 data, we computed new capitation amounts for basic non-vested care, basic vested care, and complex care.

To estimate the effect on network allocations of using all 44 VERA patient classes, we used fiscal year 1999 expenditures based on VERA workload. To calculate new capitation amounts, we first calculated the percentage of expenditures spent on each of the 44 classes. Second, we calculated the amount of resources available for each class by multiplying the new percentages for each class by the total fiscal year 2001 resources that VERA allocated. Third, we calculated the new capitation amounts by dividing the amount of resources available by the corresponding VERA workload. (This recalculation is sketched below.) To estimate the combined effect on network allocations of making each of these changes, we calculated capitation amounts for each of the 44 classes and funded basic vested care Priority 7 users at 50 percent of the national average cost based on fiscal year 1999 expenditures related to VERA workload. In our simulation, we created a separate category for basic vested care Priority 7 patients because we did not have data on Priority 7 veterans for each basic care patient class.

We tested the VA computer-based data used in our analysis and concluded that they were adequate for our purposes. To do this, we assessed the reliability of the workload (VERA and non-VERA) and expenditure data we obtained from VA that were used in our analyses. When we identified inconsistencies between databases, we tried to resolve them by interviewing officials responsible for creating or maintaining the databases, updating the databases with additional information VA provided, and requesting special data runs with parameters that we specified. In addition, we confirmed that VA's Allocation Resource Center verifies all workload and expenditure data used in the VERA allocation process. We did not, however, verify whether these verification processes were adequate. We relied on previous work to determine what effect limitations in VA's data, if any, may have had on the analyses we completed.

We performed our review from October 2000 through December 2001 in accordance with generally accepted government auditing standards. In addition to the contact named above, Marcia A. Mann, Jacquelyn T. Clinton, Thomas A. Walke, Diana Shevlin, Maria Vargas, Leslie D. Blevins, Deborah L. Edwards, and Susan Lawes made key contributions to this report.
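The three-step capitation recalculation described above reduces to simple arithmetic. In the sketch below, three hypothetical patient classes stand in for the 44 VERA classes; the expenditure, workload, and total-resource figures are invented, while the $849 Priority 7 amount is the 50 percent figure cited in this appendix.

```python
# Sketch of the three-step capitation recalculation, with three
# hypothetical classes standing in for the 44 VERA patient classes.

fy1999_expenditures = {"class_1": 500.0, "class_2": 300.0, "class_3": 200.0}  # $ millions
vera_workload = {"class_1": 250_000, "class_2": 60_000, "class_3": 15_000}    # patients
total_fy2001_resources = 18_000.0  # total dollars VERA allocated, $ millions

total_exp = sum(fy1999_expenditures.values())
capitation = {}
for cls, exp in fy1999_expenditures.items():
    share = exp / total_exp                     # step 1: share of expenditures
    resources = share * total_fy2001_resources  # step 2: resources for the class
    capitation[cls] = resources * 1e6 / vera_workload[cls]  # step 3: $ per patient

# Basic vested care Priority 7 users are funded separately at 50 percent
# of the national average cost, which was $849 in fiscal year 1999.
priority7_capitation = 849

print({c: round(v) for c, v in capitation.items()})
# {'class_1': 36000, 'class_2': 90000, 'class_3': 240000}
print(priority7_capitation)
```

Because the class shares are recomputed from current expenditures before being applied to the fixed fiscal year 2001 total, the recalculation redistributes resources among classes without changing the overall amount allocated.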
Medicare Managed Care: Better Risk Adjustment Expected to Reduce Excess Payments Overall While Making Them Fairer to Individual Plans. GAO/T-HEHS-99-72. Washington, D.C.: February 25, 1999.
Medicare Managed Care: Payment Rates, Local Fee-for-Service Spending, and Other Factors Affect Plans' Benefit Packages. GAO/HEHS-99-9R. Washington, D.C.: October 9, 1998.
VA Health Care: More Veterans Are Being Served, but Better Oversight Is Needed. GAO/HEHS-98-226. Washington, D.C.: August 28, 1998.
VA Health Care: Resource Allocation Has Improved, but Better Oversight Is Needed. GAO/HEHS-97-178. Washington, D.C.: September 17, 1997.
Veterans' Health Care: Facilities' Resource Allocations Could Be More Equitable. GAO/HEHS-96-48. Washington, D.C.: February 7, 1996.
VA Health Care: Resource Allocation Methodology Has Had Little Impact on Medical Centers' Budgets. GAO/HRD-89-93. Washington, D.C.: August 18, 1989.
VA Health Care: Resource Allocation Methodology Should Improve VA's Financial Management. GAO/HRD-87-123BR. Washington, D.C.: August 31, 1987.
The Department of Veterans Affairs (VA) spent $21 billion in fiscal year 2001 to treat 3.8 million veterans--most of whom had service-connected disabilities or low incomes. Since 1997, VA has used the Veterans Equitable Resource Allocation (VERA) system to allocate most of its medical care appropriation. GAO found that VERA has had a substantial impact on network resource allocations and workloads. First, VERA shifted $921 million from networks located primarily in the northeast and midwest to networks located in the south and west in fiscal year 2001. In addition, VERA, along with other VA initiatives, has provided an incentive for networks to serve more veterans. VERA's overall design is a reasonable approach to allocate resources commensurate with workloads. It provides a predetermined dollar amount per veteran served to each of VA's 22 health care networks. This amount varies depending upon the health care needs of the veteran served and local cost differences. This approach is designed to allocate resources commensurate with each network's workload in terms of veterans served and their health care needs.

GAO identified weaknesses in VERA's implementation. First, VERA excludes about one-fifth of VA's workload in determining each network's allocation. Second, VERA does not account well for cost differences among networks resulting from variation in their patients' health care needs. Third, the process for providing supplemental resources to networks through VA's National Reserve Fund has not been used to analyze the extent to which the need for such resources is caused by potential problems in VERA's allocation, network inefficiency, or other factors.
The West Valley site, about 30 miles southeast of Buffalo, includes an approximately 200-acre area of nuclear operations within a 3,300-acre area owned by the state of New York. (See fig. 1.) Construction of the facility began in 1963; it became the first—and ultimately the only—commercial spent fuel reprocessing plant to be operated in the United States. A firm called Nuclear Fuel Services operated the plant, which reprocessed spent fuel from 1966 to 1972. Regulated by the Atomic Energy Commission (predecessor to NRC), the plant reprocessed approximately 640 metric tons of spent nuclear fuel to recover usable uranium and plutonium. In 1972, the plant was shut down to meet regulatory changes, including more stringent seismic criteria and worker safety requirements. In 1976, facing rising estimates of the cost to modify the plant to meet the new safety requirements, the operator announced its withdrawal from the business. (A time line of historical and projected West Valley milestones is presented in app. II.)

The commercial reprocessing era at West Valley left behind major environmental, safety, and health risks from multiple types of nuclear contamination at the site, including high-level wastes, radioactive buried wastes, and environmental contamination. Specific on-site radiation risks that were generated then and still exist include the following:
The reprocessing building—significantly contaminated with strontium-90 and cesium-137 (both potentially carcinogenic radionuclides)—and four adjacent single-shell underground storage tanks encased in concrete vaults. These tanks originally contained about 600,000 gallons of liquid, high-level wastes generated during reprocessing.
A 5-acre, NRC-licensed waste disposal area, used from 1966 to 1986. This area contains several types of buried wastes resulting from the reprocessing era, such as about a third of a cubic meter of spent fuel from Hanford's N-Reactor; this spent fuel was buried instead of being reprocessed because the outer layer of a fuel assembly was ruptured.
A storage pool originally containing several hundred spent nuclear fuel assemblies and now containing 125 assemblies.
Groundwater contamination under the reprocessing building, in the form of a plume of strontium-90 that first developed during 1968 to 1971 and was identified in 1994.
Contamination in the form of cesium-137 in surface soils on- and off-site, resulting from airborne releases, identified as principally occurring in 1968. The releases were caused by ventilation failures in the plant's main stack. The cesium contamination levels are only slightly distinguishable from background radiation levels. The contamination extends about 3.7 miles northwest from the plant stack into heavily wooded off-site areas.
An inactive 15-acre, state-licensed and -managed commercial low-level radioactive waste disposal facility. This facility, which operated from 1963 to 1975, contains, among other wastes, highly radioactive wastes from naval and commercial reactors and nuclear fuel processing facilities that are buried in trenches, as shown in figure 2.

The West Valley Demonstration Project Act, enacted to assist in the cleanup of the facility, was signed into law in October 1980. The act required DOE to, among other things, (1) solidify and develop suitable containers for the site's high-level radioactive wastes; (2) transport the solidified waste to a permanent repository; and (3) dispose of the low-level and transuranic wastes created during the project.
In cooperation with the state's Energy Research and Development Authority, DOE took control of project operations in 1982. The West Valley Act and an implementing cooperative agreement divided projected operating costs between DOE (90 percent) and the state (10 percent). West Valley Nuclear Services, Inc. (now under Washington Group International, Inc.) was awarded the solidification project contract and remains the primary contractor.

In carrying out its responsibilities under the act, DOE has constructed the solidification facility and conducted solidification operations—referred to as vitrification. These operations have involved (1) chemically treating the high-level wastes—a step called pretreatment—to separate out voluminous less-radioactive wastes (which are then stored as low-level wastes) and (2) mixing the remaining high-level wastes with a form of molten glass and pouring the mixture into cylindrical stainless steel storage canisters. (The canisters are shown in fig. 3.) As vitrification nears completion, DOE and the New York State energy authority are shifting their focus to the remaining cleanup tasks—decontaminating and decommissioning structures, remediating soil and groundwater, and removing nuclear wastes stored and buried on-site, among other activities.

Various entities oversee West Valley under several statutes. The site was originally licensed to the operator and New York State by the Atomic Energy Commission and subsequently by NRC. For the duration of DOE's presence, the NRC license to the state has been placed in abeyance, leaving DOE, as authorized by the Atomic Energy Act, to regulate radioactive materials at West Valley, as it does at other departmental facilities. After DOE concludes its on-site tasks, the site is to be turned back over to the state and the NRC license is to be reinstated and/or terminated following decommissioning. Until then, under the terms of the West Valley Act and a 1981 memorandum of understanding with DOE, NRC is to provide informal review and consultation and is authorized to prescribe decontamination and decommissioning criteria for the site.

West Valley must also comply with the National Environmental Policy Act of 1969 (NEPA), which requires integrated environmental planning leading to the choice of a preferable cleanup alternative, and a 1987 Stipulation of Compromise Settlement with the Coalition on West Valley Nuclear Wastes and the Radioactive Waste Campaign, which resulted from litigation concerning DOE's on-site disposal of wastes generated by the project. The stipulation required DOE to conduct a full environmental impact study under NEPA, instead of the less detailed environmental assessment the Department had considered sufficient. Additionally, EPA and the state's Department of Environmental Conservation have oversight responsibilities at the site. For example, under authorization from EPA, the state regulates radioactive air emissions under the Clean Air Act and the hazardous components of radioactive mixed wastes under the Resource Conservation and Recovery Act of 1976 (RCRA).

DOE has almost completed vitrifying the high-level wastes at West Valley, overcoming numerous technological challenges along the way. Vitrification has enhanced the site's environmental, safety, and health status, and on the basis of our examination of DOE data and reports, as well as interviews with interested parties, the Department has generally operated the facility safely.
However, the cleanup could take four more decades, including more than two decades of major additional cleanup work that still needs to be performed, plus additional time for interim on-site storage of vitrified high-level wastes. In the near term, various wastes need to be managed and structures need to be decontaminated. In the longer term, depending on the cleanup level chosen for the site, these structures need to be torn down and either removed off-site or left in place and capped, and the site needs to be decommissioned.

DOE's operations at West Valley began in 1982 and included the construction of a vitrification facility from 1985 to 1995. From the late 1980s into the mid-1990s, waste pretreatment, sludge washing operations, and vitrification testing took place. As we reported in 1989 and 1996, construction was subject to delays and cost overruns early on. During pretreatment (1988-95), about 1.7 million gallons of low-level waste were generated and placed into almost 20,000 drums in an on-site storage area. (See fig. 3.) Pretreatment reduced the waste volume to be vitrified by over 80 percent.

Vitrification operations began in 1996. They are now nearing completion, which is scheduled for September 2002. To date, the four on-site underground high-level waste tanks have been emptied of over 99 percent of their long-lived radioactivity in tank sludge, as well as 95 percent of their cesium-137 activity. In all, 255 stainless steel, cylindrical waste canisters have been filled with vitrified high-level waste. Vitrification of the remaining traces of wastes is continuing. Tank sludge, known as "tank heel," is being removed from the tank bottoms (which have an intricate, grid-like internal support structure).

In removing the liquid, high-level wastes from the underground tanks and vitrifying them, DOE has overcome numerous technological challenges. Technological successes related to West Valley vitrification have included (1) developing a separation process for pretreating the wastes (an ion exchange method, using titanium-treated zeolite for separation, which was developed at the Pacific Northwest National Laboratory); (2) developing tank liquid mobilization pumps that would work in a highly radioactive environment (adapted from a Savannah River Site design); (3) implementing a glass melter technology developed by the Pacific Northwest National Laboratory for use at West Valley; and (4) developing a canister waste-level monitoring system using infrared detection—a system adopted at Savannah River. The West Valley and Savannah River melter technologies have subsequently been considered for low-level waste vitrification efforts being planned at Fernald, Ohio; Savannah River; Hanford; and Oak Ridge, Tennessee.

West Valley's vitrification operations are part of a multibillion-dollar DOE effort to immobilize its liquid, high-level wastes at other, larger sites—including Savannah River, Hanford, and the Idaho National Engineering and Environmental Laboratory. West Valley and Savannah River are currently vitrifying their wastes, while the efforts at Hanford and the Idaho Laboratory—whose solid-form wastes, stored in bins, will be processed differently—are not as far along. The West Valley, Savannah River, and Hanford vitrification efforts differ in technical details, including methods of pretreatment. Vitrification at Savannah River could continue until the mid-2020s, according to DOE.
We reported in 1999, however, that Savannah River was having difficulties with its chosen pretreatment technology. Pending resolution of this matter, the site has been restricting its vitrification efforts to the sludge in its tanks. At Hanford, DOE's plans call for vitrification operations to begin in the late 2000s and continue until the mid- to late 2010s for 10 percent or more of the high-level wastes, and for an undetermined longer period for the rest.

According to the federal and state oversight officials and local officials we contacted, DOE has generally operated the site safely. In addition, available DOE environmental and safety monitoring data and oversight reviews for West Valley (from 1990 to 2000) do not indicate a pattern of environmental, safety, or health issues. During pretreatment and vitrification operations, DOE has not reported serious exposures of on-site workers to radioactivity, although a few incidents DOE judged to be noncritical have put workers at risk of such exposure, according to DOE and NRC records. For example, in November 1996, radioactive waste migrated into a pipe intended for demineralized water at the vitrification facility; in December 1997, two workers came into contact with radioactive waste that spilled onto the ground in the area of the waste tanks; and in August 1999, radioactive liquids entered pipes intended to indicate fluid levels. As reported, and according to DOE officials, none of these incidents caused a significant loss in work time, and all were aggressively investigated. The site was given a departmental award in February 2000 for excellence in occupational safety and health protection.

Off-site contamination at West Valley was generally within regulatory limits in the 1980s and 1990s, according to DOE. Surface water and sediment downstream from the site in Buttermilk and Cattaraugus Creeks have not shown elevated contamination from DOE activities, according to the Department. These creeks carry groundwater and surface water from the site, through nearby Seneca Nation of Indians lands, to Lake Erie (about 35 miles distant) and eventually over Niagara Falls.

Despite the progress made, decades of major cleanup work remain at West Valley, including waste management, decontamination, and decommissioning. In the near term, structures previously used for reprocessing operations and currently used for vitrification operations need to be decontaminated. In the longer term, into the mid-2020s, depending on the agreed-upon cleanup level for the site, these structures need to be torn down and either removed off-site or left in place as radioactive rubble—prospectively encased in a long-lasting protective cap. As currently projected by DOE, on-site storage of vitrified high-level wastes is to continue for another decade beyond the mid-2020s, after which the site is to be decommissioned according to NRC criteria and closed.

Under current DOE plans, specific actions include the following:
Shutting down the vitrification facility. This process includes melter deactivation, equipment and piping removal, and decontamination, and may extend to about 2017.
Placing into on-site storage and maintaining the high-level waste canisters pending permanent disposal. On-site canister storage could extend to 2036 through 2040 (followed by site closure in 2041).
Decontamination and decommissioning, shipping waste, and completing various on-site tasks required by the West Valley Act.
For example, low-level wastes are being shipped off-site, possibly until 2022, and on-site transuranic wastes are to be addressed (including potentially shipping the wastes to a receiver site) from 2003 to 2021.
Removal of spent fuel elements stored on-site. The fuel, in the form of 125 assemblies, is to be shipped to the Idaho Laboratory in 2001 so that the storage facility at West Valley can be deactivated during 2001 to 2005.

Some of these cleanup actions cannot be implemented without further technological advances. According to DOE, at least 50 innovative technologies are being pursued in connection with the West Valley cleanup in the following five areas: cleaning up vitrification equipment, including the melter; detecting and characterizing radioactive constituents—for example, in waste containers and wastewater discharge; treating and disposing of waste, including, for example, developing alternate transportation systems for transuranic wastes; remediating subsurface contamination, including, for example, developing a permeable barrier and construction techniques to address the on-site groundwater plume; and decontaminating and decommissioning facilities, including, for example, reducing massive metal structures to a smaller size.

Specific needs related to cleaning up the vitrification facility have included a remote-handled tooling system to segregate, reduce in size, characterize, and package radioactively contaminated metal materials that have been removed from the facility. A system to perform this task has been in operation since July 1999 and is a first step toward a larger, remote-handled waste facility for the site. This larger facility will be needed to conduct comparable tasks for larger equipment and materials in the vitrification facility and in the tank area. A West Valley official said that additional technologies would need to be developed if the agreed-upon cleanup level and end state for West Valley were to require that the underground tanks, buried highly radioactive wastes, and spent fuel on-site be dug up and removed from the site.

Attempts to clean up West Valley are being hindered by several factors. First, and most important, DOE and New York State continue to disagree on which entity is principally responsible for exercising long-term operational stewardship of the site under the West Valley Act, which entity should pay the site's prospective high-level waste disposal fees, and what the site should look like in the future. Their differences are key to facilitating long-term progress and are contributing to delays in environmental planning milestones for the site. Specifically, because the parties to the cleanup have not yet agreed on strategic issues affecting the site's cleanup—that is, what the site is to look like after the cleanup is completed, how the land is to be used, and what regulatory cleanup standards are to be used—a final environmental impact statement (EIS) for decommissioning and closing the site has not yet been issued, and the scheduled date for a record of decision on a cleanup level has been extended. The record of decision was originally scheduled for 1997; the date is now 2005 and could be extended further.

Until recently, DOE and the state had been formally negotiating in an attempt to resolve their differences. As an incentive for agreement, DOE had included a proposal addressing the issue of the payment of prospective multimillion-dollar fees for disposal of West Valley's high-level wastes at a permanent repository.
However, these confidential negotiations broke down in January 2001 without an agreement. Second, prospective NRC cleanup standards—referred to as decontamination and decommissioning criteria—for the cleanup effort are to be issued in 2001, perhaps in the spring. However, these standards as drafted differ from the EPA environmental guidance and standards under CERCLA and the Safe Drinking Water Act (as well as New York State radiation protection guidance) that could be applied on-site. Third, it is uncertain where West Valley’s nuclear wastes are to go, including both high-level and transuranic wastes. Hundreds of millions of dollars in future costs could be at stake in addressing these disposal questions. The principal parties to the West Valley cleanup—DOE as site operator and New York State as site owner—have been attempting to reach an agreement on strategic issues affecting the site’s future in order to facilitate cleanup planning and the timely and cost-effective cleanup and closure of the site. However, to date, they have not reached such an agreement. Their current relationship reflects the fact that, historically, the federal government and the state have continuously differed on who should assume responsibility for the wastes generated by commercial reprocessing at West Valley. For example, in 1980, we reported that interested parties at West Valley were influenced more by their desire to minimize their own responsibilities than by attempting to arrive at the most practical solution. The issue of who will take on-site responsibility is likely to continue for the foreseeable future. Although the West Valley Act does not require that DOE and New York State reach agreement on the site’s future or how DOE will complete the cleanup effort, NEPA encourages interested parties to cooperate in environmental decisionmaking regarding sites such as West Valley. Consequently, it has been DOE’s stated policy to work closely with the state on the West Valley cleanup. Since mid-1999, the two entities have been conducting confidential negotiations on their future roles and responsibilities, particularly in the areas of (1) on-site operational stewardship, (2) future cost-sharing, and (3) an appropriate cleanup level and eventual use for the site. However, in mid-January 2001, these negotiations broke down without an agreement. Afterward, representatives of the two sides agreed that prospective long-term operational stewardship of West Valley’s wastes was a major unresolved issue. In this regard, DOE, as the site operator, prefers a cleanup level that would involve significant remedial efforts but not require removal of all the nuclear wastes off-site in order to achieve unrestricted site use. DOE also foresees a limited operational presence on-site, although one which could still last for decades. Conversely, New York State, as the site owner, appears to prefer that DOE stay on-site operationally as long as nuclear wastes are there (possibly for many more decades). To date, the state has not put forward a preferred cleanup alternative for the site. It has not ruled out the idea of leaving some nuclear wastes on-site, as DOE favors, but has not yet agreed to DOE’s preferred approach. New York State believes (1) the Department needs to do further analysis to demonstrate the adequacy of its favored approach and (2) reaching an agreement is contingent on DOE and the state agreeing on long-term on-site stewardship. 
The two parties disagree in large part because they interpret the West Valley Act differently and because they have clearly different interests to protect. Specifically at issue is the extent of cleanup activities DOE is required to conduct under the act, as well as the duration of DOE's obligation to conduct operations on-site to deal with the radioactive contamination in buildings and burial areas resulting from commercial reprocessing operations that preceded the Department's presence.

According to DOE, under the act, New York State, as the site owner, is responsible for the preexisting contamination and ultimately responsible for addressing land use issues there. DOE plans to limit its on-site decontamination and decommissioning efforts to areas, facilities, and materials used in conducting the waste vitrification project. The Department states that after cleaning up West Valley, it does not become owner of the site. In this regard, DOE foresees a long-term, but ultimately limited, departmental operating role at West Valley, after which it expects to leave the site. In recent years, DOE's estimates for completing its on-site role have ranged from 2005 to 2041, depending on programmatic and waste disposal assumptions.

On the other hand, New York State interprets the West Valley Act to require a more extensive cleanup role for DOE and a longer-term departmental operating presence—that is, as long as any nuclear waste remains on-site. According to the state, DOE is responsible for decontaminating and decommissioning all facilities and wastes in the 200-acre operations area, except for the state disposal area and the materials buried in the NRC-licensed disposal area prior to DOE's presence. The state asserts that if DOE's cleanup efforts result in the need for long-term institutional controls on-site, the Department should provide such controls. New York State estimates the federal government is responsible for about 75 percent of the spent fuel reprocessed at West Valley and therefore should rightly stay on-site as a long-term caretaker—if one is needed—for any remaining wastes generated from reprocessing. New York State officials have also said the state does not want responsibility for ensuring the long-term performance of the high-level waste tanks or other DOE-engineered barriers. As in the past, New York State believes that the federal government, in addition to its legal responsibilities, has the necessary technical and financial resources to fully clean up West Valley.

DOE and New York State also have historically disagreed on who is responsible for paying the fees that are due if West Valley's high-level wastes are to be disposed of in a permanent repository. The disagreement is not about who owns the wastes—the two sides agree that they are state-owned. At issue is who should pay for disposal and under which laws. Under the Nuclear Waste Policy Act of 1982, nuclear facilities seeking access to a prospective permanent repository must sign a contract for disposal and pay a fee into the nuclear waste fund that was set up to cover the disposal costs. Notwithstanding the provisions of the West Valley Act and its implementing cooperative agreement between the Department and New York State, DOE officials said that, under the Nuclear Waste Policy Act, West Valley's owner, like the owners of other nuclear facilities, must pay this fee, which covers full disposal costs, prior to having the site's wastes disposed of in the repository.
On the other hand, the state argues that the provisions of the West Valley Act and its implementing cooperative agreement make the signing of a disposal contract under the Nuclear Waste Policy Act of 1982 both inappropriate and redundant. In the state's view, the Nuclear Waste Policy Act requires payment from a nonfederal party only for the disposal of spent fuel or high-level waste from a civilian nuclear power reactor. According to the state, the West Valley high-level wastes are a unique federal-civilian mixture not covered under the Nuclear Waste Policy Act (or, if covered, are "wastes from atomic energy defense activities" for which DOE is liable).

DOE has unsuccessfully pursued the resolution of this matter for many years. In the recent confidential negotiations, the Department offered a proposal concerning the degree to which DOE and New York State would be responsible for paying the fee, in order to give the state an incentive to reach a timely agreement on a proposed cleanup level for the site and to resolve other important issues at the site. According to DOE, under its proposal, (1) to settle all outstanding issues between the Department and the state, the Department would agree to assume a portion of New York State's responsibility to pay for the disposal of the high-level waste in return for monetary and other valuable considerations from the state and (2) DOE would still have no obligation to take title to and dispose of West Valley's high-level waste unless New York State enters into a disposal contract under the Nuclear Waste Policy Act and pays the disposal fee. According to DOE officials, the proposal would achieve long-term, multimillion-dollar overall cleanup cost savings for both DOE and the state. Following the recent breakdown of the DOE-New York State negotiations, DOE withdrew the proposal, and it is unclear whether it could be revived. According to DOE, the Department and New York State are exchanging information to help determine when negotiations should appropriately be resumed.

The DOE-New York State relationship is key to facilitating the cleanup of West Valley and has been a factor in delaying environmental planning milestones for the site. The differences between the two parties were less important in the past, when on-site cleanup efforts were focused almost entirely on vitrification—a cleanup step favored by all interested parties. However, the parties' differences have become more prominent in recent years as cleanup planning has turned increasingly toward long-term decommissioning and closure of the facility. Facility decommissioning will require decisionmaking on controversial, unresolved issues, such as prospective off-site high-level waste tank removal versus entombment on-site.

The differences between DOE and the state, including their lack of agreement on the site's future, are affecting the pace of the West Valley environmental planning process under NEPA. Under NEPA, the Department is required to integrate environmental considerations into its planning, and the Department has historically included the state as a joint participant in the environmental analysis for the site. DOE has conducted NEPA compliance efforts for West Valley since the 1980s, but this process still has not resulted in a final EIS for the site or a record of decision on a cleanup level.
Specifically, because of a lack of agreement among the parties, including DOE and the state, the draft EIS for cleaning up the site was issued in 1996 without including a preferred cleanup alternative. Instead, it laid out five cleanup alternatives that ranged widely, from limited remedial actions, referred to as “in place stabilization” of the contamination (at costs ranging from about $400 million to about $1.1 billion, depending on the specific option chosen), to more extensive actions, ranging from “on premises storage” of the contamination in new facilities (at a cost of about $3.7 billion) to full cleanup of the site to an unrestricted end state—referred to as the “removal” option (at a cost of about $8.3 billion). To date, none of these alternatives has been selected as preferred, and no final EIS has been issued. The continuing inability of the parties, especially DOE and New York State, to choose among cleanup alternatives for West Valley limits progress with NEPA compliance, as well as overall cleanup planning, and has resulted in changing DOE estimates of when—following issuance of a final EIS—a record of decision for the site could be issued. The estimated date for a record of decision has been extended several times, from October 1997, to May 2000, to the latest estimate of 2005. In retrospect, according to DOE officials at West Valley, the changing estimates indicate overly optimistic past assessments of how difficult it might be for interested parties to decide on a preferred cleanup alternative for the site. They said the 2005 date is a reasonable current estimate, and while it could be marginally accelerated, if at all, it could also be extended if there is no agreement soon on the site’s future. Concerned about potential cleanup delays, DOE has recently chosen to split the EIS development process into two phases, so that near-term post-vitrification cleanup work will not be delayed by NEPA compliance considerations. DOE and New York State officials maintain that their negotiating differences have not yet seriously affected the pace of environmental planning for West Valley or the overall progress of the cleanup. According to DOE headquarters and field officials, this is because, until recently, the Department has been more focused on vitrification than on later phases of the cleanup and is only now turning more attention to decontamination and subsequent decommissioning. Also, according to the Department, its environmental planning for West Valley does not depend on its negotiating efforts with the state, and therefore if no agreement is reached with the state, the Department can proceed with its NEPA compliance efforts without the state’s participation. A DOE official said that difficulties in developing a preferred alternative and the desire to give the public an ample opportunity to comment have been reasons for not including a preferred alternative in the 1996 draft EIS and for not having made it final since then. Departmental officials said that despite the lack of a preferred alternative for West Valley, day-to-day cleanup work is continuing, focusing on nearer-term work steps (such as decontamination of structures) that will be necessary regardless of which alternative is eventually chosen. 
According to DOE, the Department can complete all of its responsibilities under the West Valley Act even if negotiations with New York State never resume, but a DOE official said that if differences with the state continue in coming years, there could be more serious effects on the overall costs and schedule of the cleanup. In our view, the Department underestimates the degree to which the continuing lack of agreement among the parties—especially DOE and New York State—concerning the site's long-term future is already limiting the precision and pace of DOE's cleanup planning for West Valley, as evidenced in lengthy NEPA compliance efforts, frequently changing planning milestones, and uncertain, varying cleanup cost and schedule estimates.

Under the West Valley Act, DOE's cleanup of the facility is to occur in accordance with cleanup standards to be issued by NRC. However, these standards, which are important regulatory criteria for decontaminating and decommissioning the site, have been lacking since the act was passed in 1980. NRC first developed cleanup standards for its licensees, such as commercial nuclear power plants, in 1997. However, these standards (referred to as NRC's license termination rule) were not designed specifically for West Valley. Prospective standards for West Valley were issued in draft form in December 1999 and are based substantially on the 1997 standards. Following a period of public comment, NRC is now reviewing the draft standards, and NRC officials expect them to be issued in 2001, perhaps in the spring.

Such standards—principally including numerical limits on public exposure to any remaining on-site nuclear radiation after the site is cleaned up—are a necessary component of any nuclear cleanup effort. Commonly expressed as millirem of exposure to an individual annually, these limits help to quantify "how clean is clean" at a cleanup site. Like NRC's 1997 standards, the prospective West Valley standards are to include an exposure limit of 25 millirem a year to an individual from all means of exposure (or "pathways")—through air, water, and soil on-site at West Valley. Also, according to NRC officials, the standards will likely include higher limits for on-site locations where the level of 25 millirem a year for unrestricted access is not attainable. In such locations, such as burial areas for high-activity wastes, higher limits (100 or 500 millirem a year, depending on the situation) would be applicable, combined with restrictions on public access to these areas. Such a regulatory approach would recognize the need for long-term institutional controls at some locations at West Valley.

The timing of the issuance of, and the prospective content of, the West Valley standards have been of concern to interested parties. Such standards were arguably less needed in the 1980s, when the first phase of the cleanup—the high-level waste vitrification project at West Valley—was gearing up. According to the 1981 DOE-NRC memorandum of understanding accompanying the West Valley Act, NRC was to issue the standards after DOE analyzed environmental options for the site. In this regard, DOE's analyses have been ongoing for at least a decade (including the development of the 1996 draft EIS) and are still under way. The Department has been concerned that NRC may issue final cleanup standards prematurely, before West Valley's environmental analyses are completed.
Specifically, DOE has said that the issued standards could contain restrictions, developed on the basis of incomplete environmental analysis, that could prevent consideration of potentially cost-effective cleanup alternatives. On the other hand, some observers, such as the Natural Resources Defense Council, have argued that issuance of the NRC standards is long overdue and should not be further delayed because the standards are needed to help guide cleanup planning and analysis. Some have said the standards should adhere closely to the 1997 decommissioning standards and not include provisions, or "exceptions," that could circumvent the standards' protective intent. According to NRC officials, a few years after the final standards for West Valley are issued, prior to a prospective record of decision for the site, the agency plans to (1) review whether DOE applied the standards in developing a decommissioning EIS for the facility and (2) decide whether DOE's preferred cleanup approach in the EIS meets NRC's standards. The officials said the evaluation would take into account lessons learned from any further environmental analysis that DOE may conduct in the meantime.

Although NRC has standard-setting authority under the West Valley Act, EPA's environmental guidance and standards—which apply to both chemicals and radionuclides, versus NRC's radiation-specific standards—could also apply on-site. In this regard, implementation of the West Valley Act does not preclude EPA from exercising its own, potentially more restrictive cleanup authority at West Valley under CERCLA and the Safe Drinking Water Act. While NRC's standards could be applied on-site during decommissioning, CERCLA could be separately enforced—for example, in response to a citizen's petition, according to EPA and NRC officials. In regard to groundwater protection, an area of special EPA concern, EPA's approach may be more restrictive than NRC's and therefore potentially much more costly to comply with. In addition, New York State's Department of Environmental Conservation has issued cleanup guidance that could apply to West Valley.

On the basis of its 1987 and 1995 assessments, EPA does not plan to take future remedial actions at West Valley under CERCLA. However, in a May 1999 letter to DOE's West Valley office, EPA cautioned that cleaning up the site to prospective NRC standards of 25 millirem a year might not adequately protect human health or the environment. In addition, in commenting in January 2000 on NRC's developing standards for West Valley, EPA called for West Valley's groundwater to be protected to drinking water standards and for additional site-specific analysis to ensure such protection in the long term. NRC, EPA, and New York State officials held discussions during 2000 on their different standards and guidance. They have agreed that they need to further explain to DOE how their various criteria and guidance may apply to different locations and activities at West Valley. However, to date, they have not said how their different standards and guidance are to be implemented on-site so as to avoid potential dual regulation. As we reported in 1994 and in June 2000, NRC and EPA have had ongoing differences on cleanup standards. They have recently attempted to resolve the differences through a memorandum of understanding. Their history of disagreement at other NRC-licensed sites indicates that cleanup standards for West Valley could also be disputed, especially with respect to groundwater protection.
According to EPA, the two agencies have generally coordinated their regulatory activities effectively at NRC-licensed sites where their standards both apply. However, NRC and EPA have disagreed for many years on this matter and have been attempting for over a year to issue a final memorandum of understanding clarifying their regulatory roles. Such a memorandum could likewise apply to West Valley (an NRC-licensed site whose license is currently in abeyance). As of March 2001, the two agencies were keeping the Congress informed of their efforts but had not completed a final memorandum.

Unresolved issues concerning the disposal of West Valley's high-level and transuranic nuclear wastes may also hinder cleaning up the site in a more timely manner. The vitrified high-level wastes are being temporarily stored in a work room or "cell" in the current vitrification facility (which is part of the former spent fuel reprocessing facility), awaiting further disposition. (See fig. 3.) The transuranic wastes are currently stored at two locations—a building for so-called "lag" storage and the chemical process cell waste storage area (and some were buried in the NRC-licensed disposal area during commercial reprocessing operations). Questions of where these wastes will eventually go, when, and at what cost are still to be addressed. Under the West Valley Act, both types of waste are to be disposed of before the cleanup is completed. If disposal does not happen in a timely manner, their care and maintenance could add substantially to the overall costs and schedule for the West Valley cleanup—potentially hundreds of millions of dollars, with schedule extensions of up to two decades. In 1997, DOE issued a policy—in the form of a programmatic EIS and two records of decision—stating that high-level and transuranic wastes are to remain stored at sites where they have been generated for the foreseeable future, pending a decision on final disposition. Thus, any options for interim off-site storage of West Valley's high-level and transuranic wastes would require the Department to make an exception to this policy.

Off-site removal of West Valley's high-level wastes could result in hundreds of millions of dollars in potential savings, in part through not having to construct an interim storage facility for the canisters at West Valley. This could be accomplished by removing the wastes to another DOE site for interim storage, followed by later disposal in a permanent repository. Other DOE sites, such as Savannah River, the Idaho Laboratory, Hanford, and the Nevada Test Site, could feasibly accept the West Valley wastes for interim storage, according to DOE officials. They said such a step could result in net cost savings from the elimination of years of storage and maintenance costs at West Valley. Sites such as Savannah River are expected to spend substantial amounts for storage of their own vitrified high-level wastes, beyond which the added costs of storing a relatively few canisters from West Valley are likely to be marginal. Furthermore, a 1997 DOE headquarters analysis estimated cost avoidance of about $770 million over the next 10 years through interim off-site storage of West Valley's high-level wastes. The analysis assumed that early deployment of a high-level waste shipping system and off-site interim storage of the West Valley wastes would occur as part of an integrated, DOE-wide nuclear waste management effort.
However, DOE officials recognized that state compliance agreements, other legal constraints, and political equity considerations among states could preclude taking such an action. DOE's plans in the 1990s to ship the West Valley canisters to the Savannah River Site at the beginning of the 2000s are a case in point. The canisters could have been added to the larger inventory there on an interim basis, pending removal to a permanent repository. According to various DOE West Valley analyses, shipment would have begun anywhere from 2001 to 2007. The Department presented the option to the Savannah River citizens' advisory board, which recommended the option be implemented (with some dissenters on equity grounds). In 1999, however, the state of South Carolina halted the plan. According to DOE officials, state officials said DOE had not properly informed them of the plan and the governor opposed it. DOE officials said that on the basis of the recent experience with the state of South Carolina, they have no current plans for interim off-site storage of West Valley's high-level wastes.

With regard to permanent disposal, DOE currently plans to remove the West Valley canisters to a permanent repository. Yucca Mountain, Nevada, is the prospective repository and, if approved, is projected to open in 2010. However, meeting this target date will depend on many technological and political factors. As discussed earlier, not the least of these factors is a timely decision on who—New York State or DOE—should pay the fee for disposal of West Valley's wastes. Because DOE assumes a pessimistic scenario for prospective disposal of West Valley's wastes at Yucca Mountain, the Department currently projects that the high-level waste canisters would not be shipped to the prospective Nevada repository until 2036 to 2040, at the end of the time frame projected for disposal there. Current DOE estimates indicate that if the wastes could instead be shipped to permanent off-site disposal in the mid-2020s, up to $100 million in West Valley cleanup costs could be saved.

With respect to West Valley's transuranic wastes, millions of dollars could be saved in disposal costs, depending on which disposal option is chosen. Under the West Valley Act, the transuranic wastes generated as part of project activities are to be disposed of prior to site closure. DOE's recent plans do not specify a destination, but the latest ones have projected off-site removal of these wastes between 2007 and 2021. Both interim off-site storage and direct shipment to permanent disposal may be options, depending on technological, legal, and political factors, and any of several larger DOE sites could be candidates for interim storage. An existing transuranic waste disposal facility—the Waste Isolation Pilot Project (WIPP) in New Mexico, which has been in operation since 1999—appears to be a feasible permanent destination for West Valley's transuranic wastes. However, under the authorizing legislation for WIPP, the facility is to receive only transuranic wastes generated in connection with defense-related activities. According to DOE officials, West Valley's transuranic wastes do not meet this criterion and are considered commercial wastes. Departmental officials said options for gaining access for these wastes to WIPP include seeking an amendment to the WIPP Land Withdrawal Act or an administrative change to recategorize West Valley's transuranic wastes as defense-related.
The basis for such an administrative change would be the fact that the site's transuranic wastes consist of commingled wastes resulting from spent fuel generated in both commercial and defense nuclear reactors. According to a DOE official, the Department currently favors obtaining a legislative change to gain access to WIPP for West Valley's wastes, but officials said that seeking an immediate amendment to the WIPP Land Withdrawal Act may be inopportune since implementation of disposal operations at WIPP has only recently begun. The 1997 DOE study on integration opportunities estimated that $13 million in cost avoidance could be achieved over 10 years at West Valley if a significant portion of the site's remote-handled transuranic wastes could be shipped to off-site locations for interim storage, pending potential WIPP access. This estimate assumed appropriate packaging in large containers for shipment to alternate sites and the implementation of a new transportation package to handle the containers. The same analysis estimated that disposing of all of West Valley's transuranic wastes at WIPP (assuming access was obtained) could avoid about $4 million in storage and maintenance costs at West Valley. As with high-level waste disposal, state compliance agreements, other legal constraints, and equity issues among states could be factors in any effort to implement an interim storage approach for West Valley's transuranic wastes. States with facilities that could readily accept such wastes—South Carolina and Washington State, for example—do not wish to be perceived as continually receiving transuranic and other nuclear wastes from other states, particularly from states that may have historically carried an arguably lesser share of the overall national burden for disposing of nuclear waste. In states that host DOE's nuclear facilities, the Department has already invested substantial time and resources in negotiating acceptable arrangements for nuclear waste management, in response to the requirements of the Federal Facility Compliance Act and commitments made to governors.

DOE's estimates of West Valley's total cleanup costs and a date for completing the cleanup have been uncertain and will remain so until strategic issues are agreed upon, including the extent to which the site is to be cleaned up and what it will then look like, how the land is to be used, what regulatory cleanup standards are to be used, and where the site's nuclear wastes are to go. DOE's estimates have shown large cost increases and schedule extensions—as well as variations—since DOE first reported them to the Congress in 1978, as part of congressional deliberations leading to enactment of the 1980 West Valley Act. In 1978, the estimated cleanup cost was $180 million, or about $1.1 billion in year-2000 dollars, with cleanup completion in 1990. These were preliminary estimates, made before the cleanup challenge at the site was fully understood. Estimates in the 1990s have shown considerably greater costs. These cost estimates also have varied by billions of dollars, and the completion schedule by decades, depending on the programmatic assumptions made. DOE's current estimate of total cleanup costs is about $4.5 billion, with site closure by 2041. The various estimates are listed in table 1.
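The dollar-year conversions behind these comparisons are described in more detail in this report's scope and methodology discussion. The following Python sketch is illustrative only, not GAO's actual computation: the $180 million lump sum and the Treasury bond rates are taken from this report, while the sample annual cost stream is hypothetical, for demonstration.

# Illustrative sketch (not GAO's model) of the dollar-year conversions
# described in this report's scope and methodology section.

def future_value(amount, rate, years):
    """Escalate a past lump sum to a later year at a compound annual rate."""
    return amount * (1 + rate) ** years

def present_value(annual_costs, rate, base_year):
    """Discount a stream of (year, cost) pairs back to base_year dollars."""
    return sum(cost / (1 + rate) ** (year - base_year)
               for year, cost in annual_costs)

# The 1978 estimate of $180 million, future-valued to year-2000 dollars
# at 8.5 percent (the actual 1978 30-year Treasury bond rate):
print(f"${future_value(180e6, 0.085, 2000 - 1978) / 1e9:.1f} billion")
# prints $1.1 billion, matching the report's converted figure

# Later estimates discounted future annual costs at 5.5 percent.
# A hypothetical stream of $107 million a year for 2001 through 2005:
stream = [(year, 107e6) for year in range(2001, 2006)]
print(f"${present_value(stream, 0.055, 2000) / 1e6:.0f} million")
# prints $457 million in year-2000 present-value dollars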
As shown in table 1, the initial cost estimate has more than quadrupled, from about $1.1 billion to about $4.5 billion in the latest estimate, while the initial time estimated to complete the cleanup has increased by about 50 years (from 1990 to 2041). Several factors contributed to these changes. The initial 1978 DOE estimates were preliminary, using available information and experience rather than detailed designs. Furthermore, according to DOE officials, when the initial estimate of project costs and cleanup duration was made, the Department did not adequately consider the changing environmental landscape for this first-of-a-kind project and did not anticipate the complex regulatory environment and laws that have since come into existence. In addition, as we previously reported, DOE management problems occurred at West Valley in the 1980s, resulting in cost and schedule overruns. As also shown in the table, during the 1990s, the estimated costs for West Valley varied, with totals ranging from $3.8 billion to $5.8 billion. Moreover, different estimates both extended and shortened the schedule, with the projected increase in cleanup duration ranging from 15 to 51 years. These varying totals reflect evolving departmental initiatives to quantify the total costs and schedule of the Department's cleanup effort across the nuclear complex. Causes of variations in the estimates have included different estimation methods and varying major assumptions related to cleanup and nuclear waste disposal. For example, DOE officials said the June 1996 Baseline Report estimates for West Valley were part of a first departmental attempt to quantify the extent of the cleanup problem complexwide, and these estimates were not precise. They were taken from data supporting the site's 1996 draft EIS and simply averaged the cost of several cleanup alternatives shown in the draft. The July 1996, 1997, and 1998 estimates for West Valley were lower than the Baseline Report estimates, in part because they were based on departmental guidance that called upon DOE's sites, including West Valley, to use ambitious assumptions aimed at accelerating the cleanup and reducing costs within current budget trends. For example, these estimates assumed an accelerated period of about 10 years to complete the cleanup, off-site interim storage of the high-level waste canisters, and generally flat funding of $123 million annually. Accelerating the cleanup schedule at West Valley without funding adjustments created a substantial planning gap between funding needs and availability within the given time frame. The Department proposed closing the gap through cost savings generated by conducting cleanup projects more efficiently. However, according to DOE West Valley officials, the idea of accelerating the cleanup of West Valley to achieve completion in 2005 was not realistic and could not be implemented.

The current estimate of about $4.5 billion with completion in 2041 is based on DOE's latest cleanup plans for West Valley. DOE officials said this estimate is reasonable, solidly grounded, and the best available based on known information. The estimate, according to these officials, includes opportunities to lower the cost as well as areas that could end up costing more. For example, the current estimate indicates completion of major cleanup tasks by the mid-2020s, and assumes that the high-level waste canisters cannot be shipped to a permanent off-site repository until 2036 through 2040 (with site closure in 2041).
According to DOE, although this time frame assumes a lack of earlier access to a prospective permanent repository, such as Yucca Mountain, earlier shipment is a possibility if a valid contract assigning disposal costs can be signed with New York State. Shipping them earlier, such as in the mid-2020s, would lower the total cost of the cleanup. Conversely, some cleanup tasks, such as dealing with the melter used in vitrification, might cost much more than currently estimated because of uncertainty about how to conduct these tasks. DOE officials recognize that the current estimate is uncertain, in part because it does not reflect an agreed-upon cleanup level and site end use. Depending on the cleanup level, on-site cleanup costs could vary widely, as illustrated in the analysis done for DOE's 1996 draft EIS for West Valley. In the draft EIS, DOE outlined action alternatives ranging from limited remedial actions, referred to as "in place stabilization" of the contamination (at costs ranging from about $400 million to about $1.1 billion, depending on specific options), to more extensive actions such as "on premises storage" of the contamination in new facilities (at a cost of about $3.7 billion), to full cleanup of the site to an unrestricted end state—referred to as the "removal" option (at a cost of about $8.3 billion). A DOE official said that until an appropriate end state for the site is agreed upon, any estimates of total West Valley cleanup costs and completion date will not be entirely credible.

The problems DOE faces at West Valley reflect many of the same dilemmas it faces elsewhere in the nuclear complex. West Valley is yet another example of how complicated, uncertain, and subject to cost and schedule changes the cleanup process can be, especially at technologically difficult cleanup sites where an appropriate cleanup level and land use have not been agreed upon and multiple types of contamination are involved. In such circumstances, planners find it difficult to estimate with a reasonable degree of certainty an individual cleanup project's overall costs and schedule. By extension, DOE's ability to quantify with a degree of certainty the costs and timetables for the cleanup across the entire complex is to some degree in question—especially at other, larger DOE sites that also lack fully agreed-upon cleanup levels and/or end states.

With regard to nuclear waste disposal, West Valley is part of an approaching national decision on what to do with the over 200 underground tanks across the DOE complex and the traces of high-level wastes left in them after vitrification. Are the tanks to be dug up, using technologies that are still to be developed and that potentially require significant expenditures, and removed to an as-yet-undetermined disposal location, or can they be safely left in place and under long-term stewardship? The Natural Resources Defense Council is currently challenging in court DOE's waste management order that could permit a tank "entombment" strategy to be implemented at West Valley and elsewhere.

Since the late 1980s, DOE has been committed to estimating total cleanup costs and schedules complexwide. Such estimates are potentially useful to the Department in planning for over 300 cleanup projects at its more than 100 nuclear sites. The estimates are also useful to the Congress in fulfilling its oversight responsibilities, and they help to inform the public about the status of the cleanup program.
These estimates have grown over time as more is learned about the number of sites contaminated and the types of contamination. However, as we have previously reported, these estimates have varied considerably, and their reliability has been questioned. In April 1999, we reported that the uncertainty of DOE's estimates of the cost and schedule for the complexwide cleanup was a matter of concern and depended on various programmatic assumptions. Such assumptions may include funding levels, the facilities and wastes that are to be included in the scope of the analysis, the availability of waste disposal options, or other factors. West Valley's recent widely varying cost and schedule estimates call into question DOE's estimates at other sites, especially those that lack agreed-upon cleanup levels and land uses. Many sites across the complex lack a final agreement with their regulators, such as EPA and the state, on the cleanup levels that must be achieved—that is, "how clean is clean." Furthermore, two of the largest cleanup sites in the complex, Savannah River and Hanford, have long-term cleanup goals that have been less than completely defined. Hanford has a land use plan, but cleanup levels and disposal standards remain to be established, and Savannah River has a comprehensive site use plan, but land uses could change significantly as they are further considered by interested parties. Moreover, like West Valley, both sites face decisions on high-level waste disposal and the disposition of their on-site underground storage tanks. The disposition of these tanks—51 at Savannah River and 177 at Hanford—remains a multibillion-dollar cost uncertainty. The estimated total costs at these two sites alone will likely dominate DOE's cleanup program for the foreseeable future because they account for a major part of the cost of the entire program. (In 1998, Hanford's total costs were estimated at about $50 billion and Savannah River's at about $30 billion, compared with a then-estimated complexwide cost of $147 billion.) On a complexwide basis, DOE's cleanup cost and schedule estimates are likely to be revised as more becomes known at many sites about the levels of cleanup that must be reached and the technologies to be used. In this regard, the Department has made some recent strides in improving the quality of its annual estimates of the costs and schedule for cleaning up the complex. As we reported in 1998, DOE has called upon field offices to provide more information on (1) the range of potential site cleanup options for sites whose cleanup levels are uncertain and (2) long-term maintenance and surveillance costs for sites that have been cleaned up. The latest estimate, about $198 billion, is based on a range of $184 billion to $212 billion. According to DOE, the range reflects uncertainties recognized in the estimate and better communicates the uncertainties of projects that are innovative and complex.

West Valley also illustrates some of the dilemmas created by DOE's approach to funding the cleanup across the nuclear complex. DOE's current estimate for total West Valley cleanup costs is based on maintaining funding for the foreseeable future at current levels—about $107 million a year. This planning approach is referred to as "flat" funding. According to DOE officials, DOE's Ohio Area Office has implemented the flat funding approach for West Valley and four other nuclear cleanup sites in the region that it oversees.
DOE Ohio and West Valley officials said they do not consider the flat funding approach appropriate for West Valley, but they said it is the policy of DOE headquarters, based on Office of Management and Budget direction. DOE Ohio and West Valley officials said the Ohio office receives an annual cleanup funding allocation for the five cleanup sites combined, including West Valley. In recent years, these offices have worked within the current "flat" budget estimates while at the same time working to accelerate the cleanup—an ambitious undertaking. Flat funding may not always be cost-effective. In fact, according to DOE officials, the cost profile of cleanup projects is generally not flat: Often, annual costs increase early in the project and are followed by declining costs in later years. As a result, flat funding can add to overall costs and extend the time needed for project completion. Ohio and West Valley DOE officials agreed that flat funding may be a factor in the costs and time required to complete the West Valley cleanup, but they said any extra funds directed to West Valley could reduce the amount of funds directed to one or more of the other sites overseen by the Ohio office. In 2000, a departmental analysis done at West Valley showed that incrementally higher funding for West Valley could help to complete the cleanup faster and with substantial cost savings. Specifically, if the West Valley cleanup could be funded at about $130 million annually from 2006 through 2013, and at $135 million in 2014 and 2015, instead of $107 million for those years, West Valley's total cleanup costs could decrease by about $509 million and essential cleanup tasks could be completed about 8 years earlier.

Funding constraints at West Valley are not unique. They reflect DOE's funding dilemma across the nuclear complex. Complexwide, the Department has assumed that cleanup work will be funded annually at the same level. This assumption is based on recent appropriations and Office of Management and Budget guidance to promote balanced federal budgets, according to DOE officials. For DOE's nuclear cleanup program, such an approach can result in a significant gap between the funds needed for the complexwide cleanup and the funds available, leading to cleanup delays and cost growth. To illustrate, as we testified in June 2000, projected annual cleanup needs for 2001 through 2010 at DOE's Paducah, Kentucky, uranium enrichment plant could exceed average annual funding by many millions of dollars. This gap could delay the Paducah cleanup and add to its overall costs. Extended across the complex, the costs multiply. In 1998, DOE estimated a complexwide gap of $3.9 billion from 1999 to 2006 (in 1998 dollars), assuming flat funding of the Department's cleanup program at $5.75 billion a year. Our 1999 report on DOE's accelerated cleanup strategy questioned whether DOE sites could achieve the assumed cleanup goals and schedule, given the flat funding assumption. On the other hand, according to DOE, fiscal realities are likely to prevent fully closing the gap between funding needs and available funds.

As the first DOE location likely to have all of its on-site high-level waste vitrified, West Valley is a potential early test case on the important issue of tank entombment versus removal. According to DOE plans, a record of decision on the disposition of the site's high-level waste tanks could be issued in 2005.
At West Valley, four tanks are involved, but Hanford and Savannah River, which are also involved in making tank disposition decisions, have a combined total of over 200 tanks. At issue is whether these tanks are to be dug up, at great potential expense, and removed to locations not yet chosen, or whether they can safely be left in place and subjected to long-term stewardship. Tank closure is addressed in DOE orders, as well as in NRC decommissioning requirements and EPA and state of New York RCRA closure requirements. A DOE radioactive waste management order (O435.1) and accompanying manual provide a process that can result in reclassification of high-level wastes, allowing for the possibility of managing the wastes as low-level wastes. This could allow traces of the high-level wastes to remain in place, entombed in the tanks. In the waste management manual, these traces are referred to as "wastes incidental to reprocessing." With regard to Savannah River and Hanford, NRC has been advising DOE on its methodology for classification and stabilization of incidental waste. In the case of Hanford, NRC recommended three criteria for categorizing the wastes as incidental. Under these criteria, first, the wastes must be processed to remove key radionuclides to the maximum extent technically and economically practical; second, it must be shown that the wastes will be incorporated in a solid form at a concentration that does not exceed the concentration limits in applicable regulations (10 C.F.R. part 61); and third, the wastes must be managed pursuant to the Atomic Energy Act to meet safety requirements comparable to the performance objectives in the regulations (10 C.F.R. part 61, subpart C). In the case of Savannah River, NRC in June 2000 approved a more risk-informed and performance-based approach in analyzing DOE's methodology, principally aimed at satisfying the first and third criteria. For West Valley, NRC is considering whether to deal with the incidental waste issue in its cleanup standards.

Dealing with the tanks at West Valley and elsewhere will be costly and challenging. If West Valley follows these criteria and empties the site's four tanks as completely as technically feasible and at "economically practical" costs, and leaves them in place, such a decision would preclude anything approaching an unrestricted future use for the site. Conversely, according to DOE estimates, if the wastes are removed off-site so that future use of the site can be unrestricted, total cleanup costs for the site could roughly double, to over $8 billion. Moreover, this estimate is very uncertain because technologies for cutting the tanks up and removing them from the ground have yet to be developed. By extension, at Savannah River and Hanford, more extensive technological challenges and broader decisions costing many more billions of dollars are at stake.

Any decision on what to do with the tanks will be controversial. Some local interested parties appear to support, to some degree, DOE's idea of entombing the West Valley tanks, recognizing that digging them up would be costly, may not be technologically feasible, and would put workers and the public at greater risk of radiation exposure. There is some indication that New York State could agree to a form of tank entombment that would involve something less than an unrestricted land use for the site.
However, the state’s Energy Research and Development Authority has said that if incidental waste is to be left at West Valley, DOE should remain on-site to administer long-term institutional controls. Some, including New York State officials, have spoken in favor of the idea of monitored retrievable storage of the tanks. On the other hand, according to the Natural Resources Defense Council, the West Valley Act makes no provision for incidental quantities of high-level wastes to be exempted from permanent off-site disposal. The matter may be resolved in the courts. Currently, the Natural Resources Defense Council is challenging in court DOE’s radioactive waste management order that could permit a tank “entombment” strategy to be implemented at Savannah River and other DOE sites. In addition, according to a DOE official, there could be a legal challenge to any record of decision at West Valley to entomb the site’s high-level waste tanks. Substantial cleanup progress has been made at West Valley, particularly the successful vitrification of the site’s high-level wastes. However, several factors are affecting the costs and pace of the remaining cleanup, and need resolution. In particular, if the differences between DOE and New York State on strategic issues affecting the site’s future continue, including disagreements over their respective roles and responsibilities, they will likely further limit the precision of cleanup planning and potentially add to the costs and schedule for the West Valley cleanup. DOE and the state have spent several years trying to resolve their differing views on their long-term stewardship responsibilities at West Valley, particularly who will pay for permanent disposal of the site’s vitrified wastes, and the extent to which the site is to be cleaned up. The recent breakdown in negotiations, along with the historical federal-state conflict on who should take responsibility for West Valley’s wastes, indicates to us that the two parties simply may not be able to resolve these issues on their own. In addition, the long-standing NRC-EPA disagreement on cleanup levels for NRC- licensed sites could have ramifications for West Valley’s cleanup levels and costs. In June 2000, we raised as a matter for congressional consideration the need to clarify the two agencies’ regulatory responsibilities relating to decommissioning NRC-licensed sites. In this context, specific steps by EPA and NRC to avoid dually regulating West Valley are warranted. Finally, a timely decision about the final disposition of West Valley’s high- level and transuranic wastes could save hundreds of millions of dollars. Because DOE and New York State appear to be unable to reach an agreement on their future responsibilities under the West Valley Act, the Congress should consider amending the act to clarify their responsibilities—especially their respective stewardship responsibilities for historical radioactive contamination left on-site and their financial liabilities for fees that are to be paid for permanent disposal of high-level waste in an off-site repository. To help address NRC’s and EPA’s regulatory responsibilities at NRC- licensed sites, we recommend that, specifically for West Valley, the Chairman, NRC, and the Administrator, EPA, in coordination with New York State, agree on how their different regulatory cleanup criteria should apply to the site. 
To resolve where West Valley's high-level wastes should go, once DOE's and New York State's stewardship and cost-sharing responsibilities have been clarified, and potentially save hundreds of millions of dollars, we recommend that the Secretary of Energy pursue the timely removal of on-site vitrified high-level wastes, where feasible, either directly to a permanent repository, or to an interim site until a permanent repository is available. To clarify where West Valley's transuranic wastes should go and potentially save millions of dollars, we recommend that the Secretary of Energy pursue timely removal of the site's transuranic wastes to an interim off-site storage location, or to WIPP for permanent disposal, as appropriate, either through administrative action or by seeking an amendment to the WIPP Land Withdrawal Act.

We provided DOE, the New York State Energy Research and Development Authority, NRC, and EPA with a draft of this report for their review and comment. DOE found the report to be a credible synopsis and assessment of the issues West Valley faces, while New York State concurred with the report's conclusions that clear radiological requirements, an agreed-upon preferred cleanup alternative, and resolution of nuclear waste disposal issues are critical to the success of the cleanup. However, in their comments, DOE and New York State continued to differ on who should assume ultimate responsibility for the wastes generated by past commercial reprocessing at West Valley. For example, DOE stated that, under the West Valley Act, it does not become the owner of the site and that after site decommissioning it does not envision a continuous on-site presence or long-term operational control there. DOE did say that in the event it leaves wastes behind, in the interest of public health and environmental protection, it would bear at least part of the financial responsibility for monitoring any remedies it had put in place. In contrast, New York State commented that one of the complicating factors at West Valley has been the conflicting interests of the state as site owner and DOE as site operator, and stated that one way to resolve conflicting jurisdictions on-site would be for DOE to agree to assume title and custody of the site pursuant to the Nuclear Waste Policy Act of 1982. Finally, the Department supported our recommendations concerning regulatory cleanup standards and the disposal of transuranic wastes, but disagreed with the recommendation on high-level waste disposal, stating that the Department has no disposal obligation until New York State enters into a disposal contract under the Nuclear Waste Policy Act. In this regard, we have modified the wording of our recommendation to more clearly recognize that resolving the question of responsibility for the high-level wastes is part of any long-term solution regarding their disposal. DOE and New York State also provided technical clarifications on the draft report. NRC's and EPA's comments were limited to technical clarifications—NRC's by letter and EPA's by e-mail. We incorporated all four agencies' clarifications in the final report where appropriate. (The DOE, New York State Energy Research and Development Authority, and NRC comment letters are included in apps. III, IV, and V.) As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will send copies to the Honorable Spencer Abraham, Secretary of Energy; the Honorable Richard Meserve, Chairman, Nuclear Regulatory Commission; and the Honorable Christine Todd Whitman, Administrator, Environmental Protection Agency. We will also make copies available to others upon request.

As requested, we examined (1) the status of the cleanup, (2) factors that may be hindering the cleanup, (3) the degree of certainty in the Department of Energy's (DOE) estimates of total cleanup costs and schedule, and (4) the degree to which the West Valley cleanup may reflect, or have implications for, larger cleanup challenges facing DOE and the nation. Specifically, to address the status of the cleanup, we interviewed and obtained documents from several federal, New York State, and local area officials associated with West Valley. In particular, we spoke with representatives of, and/or obtained documents from, the following agencies: DOE, including the headquarters Offices of Environmental Management, Civilian Radioactive Waste Management, Environment, Safety, and Health, General Counsel, and Inspector General, and the DOE Ohio and West Valley field offices; the Nuclear Regulatory Commission (NRC) and the Environmental Protection Agency (EPA), including headquarters and regional officials of both agencies; and New York State's Energy Research and Development Authority and Department of Environmental Conservation. In addition, we interviewed representatives of, and/or obtained documents from, the Coalition on West Valley Nuclear Wastes, the Citizen's Task Force on West Valley, the Seneca Nation of Indians, and the Natural Resources Defense Council. To obtain information on past site status, we examined several GAO reports issued since 1977, as well as historical DOE reports. Further, to independently assess DOE's environmental, safety, and health performance at West Valley, we talked to a range of federal, state, and local officials and examined DOE and NRC safety and oversight reports. We also examined DOE data on West Valley in several departmental databases related to environmental, safety, and health matters.

To address factors that may be hindering the cleanup, we interviewed and/or obtained documentation from representatives of many of the above-listed federal, state, and local agencies and other interested parties. Using this documentary and testimonial evidence, we examined in particular the pace of the National Environmental Policy Act's compliance process at West Valley, as well as matters at issue in negotiations between DOE and the state of New York on their responsibilities for the site. Our review was limited in that these negotiations were and continue to be considered confidential between the two parties. As a result, while we had access to various details of the negotiations, this report does not fully describe the negotiating positions of the two parties. Additionally, we documented the status of NRC's development of cleanup standards for the site, as well as the current status and potential future disposition of the site's high-level and transuranic wastes.

To address the degree of certainty in DOE's cleanup cost and schedule estimates, we interviewed DOE headquarters, Ohio, and West Valley officials and obtained documentation from them.
To compare DOE’s cost estimates to clean up the West Valley site that were made at different times since 1978, we converted the estimates of future costs to year-2000, present value dollars, using a 5.5-percent discount rate (i.e., the U.S. 30- year Treasury bond rate at the time of our conversion). For all cost estimates except the 1978 estimate, we used annual cost data (annual cost data for the 1978 estimate was not given) to make the conversion process more precise. To further obtain meaningful comparisons, we added historical annual costs to any DOE estimate that did not already include these costs, and future valued (i.e., escalated) all historical costs to year 2000 dollars using the actual U.S. 30-year Treasury bond rate for the respective year of each estimate. For the 1978 estimate, we future-valued the lump-sum amount to year-2000 dollars, using an 8.5-percent rate (i.e., the actual 1978 30-year U.S. Treasury bond rate). Because the 1978 estimate was a lump sum, its conversion to year-2000 dollars slightly biases upward the resulting year-2000 cost estimate, thereby reducing the estimated increase of the other cost estimates above the 1978 estimate. To address the degree to which the West Valley cleanup may reflect, or have implications for, larger cleanup challenges facing DOE and the nation, we compared our analysis of West Valley with analyses we and others have performed of DOE’s environmental management and nuclear waste disposal programs. We used this comparison to develop observations about West Valley’s cleanup in context with the cleanup challenges at other DOE sites. We performed our review from June 2000 through April 2001 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the letter dated April 13, 2001, from the Acting Assistant Secretary for Environmental Management, Department of Energy. 1. We agree that the West Valley Act does not require DOE and New York State to reach an agreement on the overall future of the site or how DOE should complete its responsibilities there. We also agree that the National Environmental Policy Act (NEPA) encourages DOE and the state to cooperate on environmental decisionmaking. Accordingly, wording in the final report has been clarified. Furthermore, we believe DOE’s stated policy of cooperation with the state in addressing strategic issues related to the West Valley cleanup—and its specific pursuit of negotiations with the state—is a preferable course of action as well as key to progress with the cleanup. Nevertheless, because DOE and the state appear to be unable to reach agreement on these strategic issues, we have raised the matter of clarifying their on-site responsibilities for congressional attention. 2. We agree that DOE does not become the site owner after the cleanup is completed, and wording has been clarified in the final report to reflect DOE’s views. However, we believe DOE’s ongoing and prospective cleanup tasks, as the Department views them under NEPA and the West Valley Act, are inevitably related to West Valley’s overall future—its ultimate end state and land use. For example, if DOE’s mandated tasks are to involve leaving the high-level waste tanks in place, this could preclude achieving an end state for the site that would permit unrestricted land use. 
Considering this, we believe it was appropriate that DOE and New York State, in their recent unsuccessful negotiations, attempted to reach agreement on the site's overall future—in the form of a preferred cleanup alternative or "vision" for the site.

3. We have clarified wording in the final report to reflect DOE's views. Nevertheless, from reading both DOE's and the New York State Energy Research and Development Authority's comments on our draft report, it remains unclear to us if or when the proposal will be revived and/or formal negotiations resumed.

4. We have modified the wording of our recommendation on high-level wastes to more clearly recognize that resolving the question of ultimate responsibility for the wastes is part of any long-term solution regarding their disposal.

The following are GAO's comments on the letter dated April 11, 2001, from the Program Director, West Valley Site Management Program, New York State Energy Research and Development Authority.

1. We agree with this comment about the use of the term "cleanup level" and have changed the title of the final report and selected language throughout the report.

The following are GAO's comments on the letter dated April 13, 2001, from the Executive Director for Operations, Nuclear Regulatory Commission.

1. Where appropriate, wording reflecting NRC's clarifications has been added to the final report.
The West Valley nuclear facility in western New York State was built in the 1960s to convert spent nuclear fuel from commercial reactors into reusable nuclear fuel. New York State, the owner of the site, and the Atomic Energy Commission—the predecessor of the Nuclear Regulatory Commission (NRC) and the Department of Energy (DOE)—jointly promoted the venture. However, the timing of the venture was poor because the market for reprocessed nuclear fuel was limited and because new, more restrictive health and safety standards raised concerns about the facility. West Valley was shut down in the 1970s, and Congress enacted the West Valley Demonstration Project Act in 1980, which brought DOE to West Valley to carry out cleanup activities.

This report examines (1) the status of the cleanup; (2) factors that may be hindering the cleanup; (3) the degree of certainty in the Department's estimates of total cleanup costs and schedule; and (4) the degree to which the West Valley cleanup may reflect, or have implications for, larger cleanup challenges facing DOE and the nation.

DOE has almost completed solidifying the high-level wastes at West Valley, but major additional cleanup work remains. These tasks, which could take up to 40 years to complete, include decontaminating and decommissioning structures, remediating soil and groundwater, and removing nuclear wastes stored and buried onsite. The following three factors are hindering DOE's attempts to clean up West Valley: (1) DOE and New York State still have not agreed on the overall future of the site, (2) NRC cleanup standards for West Valley do not exist, and (3) cleanup planning has been limited by uncertainty about where West Valley's nuclear wastes are to go. In addition, DOE's estimates of the total costs and completion date for the West Valley cleanup are uncertain because of a lack of agreement on many strategic issues affecting the site, such as the extent to which the site is to be cleaned up, what it will then look like, how the land is to be used, and what regulatory cleanup standards are to be used. DOE's plan to deal with the underground high-level waste storage tanks at West Valley has potential implications for other DOE disposal efforts.
DeCA, headquartered at Fort Lee, Virginia, is the Department of Defense's designated agency for managing commissaries on a worldwide basis. A Commissary Operating Board, which is composed of representatives from each of the military services, has day-to-day operational oversight responsibilities for DeCA. The Under Secretary of Defense (Personnel and Readiness) exercises overall supervision of the commissary system. DeCA operates four regional offices that oversee the management of its commissaries. Commissaries are located in 46 states and 14 foreign countries. As of November 7, 2002, the agency had 276 stores and more than 16,000 employees under its purview. Its annual sales in fiscal year 2002 amounted to about $5 billion. In meeting its mission of providing groceries at a savings to the customer in the most efficient and effective manner, DeCA strives to keep prices as low as possible, charging patrons only for the cost of goods plus a 5-percent surcharge.

DeCA receives about $1 billion in direct appropriations from Congress for its annual operating costs. These funds pay for employees' salaries, transportation, some above-store-level information technology, and other expenses. DeCA also operates a resale stock fund for the purchase and sale of products. To the extent that savings in operating costs occur, they reduce the need for appropriated funds. The savings in store operating costs do not have an effect on the cost of merchandise sold to customers.

In January 2001, DeCA issued its current strategic plan. This plan included objectives to reduce unit operating costs and reshape the workforce while maintaining or improving customer service and satisfaction. A major focus of the unit cost reduction objective was to reduce positions as well as streamline operations and develop a more efficient organization. To reshape the workforce, DeCA planned to determine the appropriate mix of skills and expertise and the appropriate level of part-time employees needed to carry out the reductions and reach a more efficient organization.

DeCA conducts a biannual Commissary Customer Service Survey to assess customer views of products and services. A team appointed by each store director administers the survey. Customers are systematically selected while waiting in checkout lines. A predetermined number of questionnaires are collected during three periods (morning, midday, and evening) each day for 10 consecutive days during May and November each year. The survey questions are multiple choice with space available for written comments. See appendix I for a copy of the questionnaire. The completed forms are mailed to DeCA headquarters for analysis, and customer service scores are calculated for DeCA overall and for each region and store.

Despite the workforce reductions, store operations and customer service have been maintained at the same level, and in some cases improved. DeCA has used various measures to eliminate 2,602 full-time positions, or 85 percent of the planned reductions as of December 31, 2002; very few employees have been separated from the agency. While downsizing and reshaping were occurring, regional officials stated that they encouraged store directors to use part-time positions to maintain store operations. DeCA officials stated the use of part-time employees has enabled store directors to better manage workload fluctuations, expand hours of operation, and thereby improve customer service.
However, because DeCA’s strategic plan does not include specific goals for achieving a certain full-time/part-time workforce mix in stores, the planned percentage of part-time positions varies widely by individual store and region. Despite personnel reductions, scores for the customer satisfaction surveys completed since DeCA began the personnel reductions show the same or slightly increasing levels of customer satisfaction with the stores. Notwithstanding the improvements, managers of small stores report having difficulty balancing store operations and duties, as a result of the reductions in the number of management positions. DeCA is using workforce reductions as the primary means to achieve its goal of reducing operating costs by fiscal year 2004. As table 1 shows, DeCA plans to reduce its workforce by 3,047 full-time positions, a decrease of 17 percent from its fiscal year 2000 staff. Of these positions, the largest number (2,690) will come from reductions at the store level while 187 will come from headquarters and 170 from regional offices. As of December 31, 2002, DeCA had completed all of its workforce reductions at the regional offices and 62 percent of its planned headquarters’ reductions. It accomplished this by eliminating 137 vacant positions (114 in headquarters and 23 in the regional offices). It reduced its regional staff by another 147 positions through organizational changes and other efficiencies, including closing two area offices in one region. By the same date, DeCA had completed most of its planned workforce reductions at the store level, eliminating 2,316, or 86 percent of the 2,690 positions that it had targeted. As table 2 indicates, most of the planned store-level reductions (51 percent) are being achieved by implementing efficiency measures within stores. Efficiencies are being derived by implementing new staffing standards for each department within a store based on sales volume and other measures. The remaining reductions are accomplished by other methods, including eliminating vacant positions, closing stores, and contracting out some functions. A breakdown of completed and planned workforce reductions at the store level are as follows: 1,113 positions were eliminated by implementing the new store staffing standards based on sales volume. The remaining 261 efficiency reductions are planned in fiscal year 2003. 812 vacant positions were eliminated. A DeCA official stated that vacant positions existed because stores had historically been funded at only 90 percent of their required staffing. The elimination of these positions resulted in no personnel losses and produced no savings. 304 positions were eliminated as a result of 15 store closings. Closings can stem from Base Realignment and Closure recommendations or Under Secretary of Defense (Personnel and Readiness) approval of DeCA’s recommendations from internal assessments. An additional 49 positions will be eliminated at two stores scheduled to close in fiscal year 2003. 87 positions were eliminated by contracting out such store functions as receiving, handling, and stocking. An additional 30 positions at various stores will be eliminated in this way in fiscal year 2003. The remaining 26 planned reductions were canceled to provide positions for a new computer-aided ordering function. Although DeCA had eliminated most of the planned 2,690 positions from its stores by the end of 2002, only 122 store employees were separated from the agency by a reduction in force and an additional 341 employees retired. 
Other employees were reassigned or moved to lower-graded positions through the reduction process.

As part of the effort to reshape the workforce, all stores have begun to, or plan to, increase the use of part-time positions to manage workloads and meet the needs of customers. However, since DeCA's strategic plan does not include specific goals for achieving a certain full-time/part-time workforce mix in stores, the planned percentage of part-time positions varies widely by individual store and region. The available data shows that the number of part-time positions in stores has increased since the personnel reduction plan went into effect. For example, the number of part-time employees rose by 8 percent in stores in the Midwest region between April 2001 and October 2002.

Store directors told us that using part-time employees improved their ability to manage fluctuations in store workloads more effectively. For example, a store director said that part-time employees were used during weekends and holidays to save money. Another store director pointed out that part-time employees are available to work if there is work to do in a department or to cover a peak shopping period, but they can be sent home if the work is completed. In addition, some store directors told us that a greater use of part-time workers has allowed them to increase their store operating hours. We found that 30 stores have increased their hours of operation by relying more heavily on part-time employees. For example, one store with a part-time workforce of nearly 60 percent increased its operating hours by 6 hours a week.

In addition, current individual store plans call for a growth in the number of part-time positions in stores as of the end of fiscal year 2003. The Eastern and Midwest regions estimate that about 56 percent of their store positions will be part-time, and the Western Pacific region estimates 46 percent of its store positions will be part-time. Table 3 shows the range in the percentage of part-time positions that stores within each sales band plan to employ. As table 3 shows, the planned percentage of part-time positions varies widely by individual store and region. For example, one Eastern region store expects to convert all of its store positions to part time, while another store in the Western Pacific region plans to have only 7 percent of its workforce as part time. While some stores are close to the 75 to 80 percent industry average for part-time positions in commercial grocery stores, the overall regional average of part-time positions indicates that there are opportunities to achieve more efficiencies through greater use of part-time positions.

Store directors have the flexibility of changing the mix of full-time and part-time positions in their stores. Some store directors told us they used part-time positions primarily to meet their budget goals. One store director said that part-time positions were created to meet the store's budget and that there were no plans to increase part-time positions in the store once the needed reductions were made. However, nearly all of the store directors we interviewed said that they could operate their stores with more part-time positions and fewer full-time positions. As indicated earlier, they recognized that part-time positions provide flexibility to manage workload fluctuations more effectively. A regional director said that agencywide goals for part-time workers need to be incorporated into the strategic plan to optimize agency efforts to reshape the workforce.
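The part-time mix comparisons above are simple proportions. As a rough illustration only, the following Python sketch computes each store's part-time share and flags it against the 75 to 80 percent commercial-grocery industry average cited in this report; the store names and position counts are hypothetical.

# Hypothetical staffing figures; only the 75-80 percent industry
# average comes from this report.
INDUSTRY_AVERAGE_LOW = 0.75  # lower bound of the industry range

stores = {
    # store: (full-time positions, part-time positions)
    "Store A (Eastern)": (10, 40),
    "Store B (Western Pacific)": (28, 2),
    "Store C (Midwest)": (22, 28),
}

for name, (full_time, part_time) in stores.items():
    share = part_time / (full_time + part_time)
    status = ("near industry average" if share >= INDUSTRY_AVERAGE_LOW
              else "below industry average")
    print(f"{name}: {share:.0%} part-time ({status})")

# Store B's 7 percent share mirrors the low-end store noted above, and
# Store C's 56 percent mirrors the Eastern and Midwest regional plans.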
According to recent surveys, customer satisfaction with commissary stores has shown a modest, but steady, improvement between October 2001 and November 2002, the period when personnel reductions were being made. These improvements were registered in the overall score, which rose from 4.33 to 4.39, as well as in specific product and service categories. Table 4 includes results for 6 of the 14 questions, as well as the overall score. These scores reflect continuing satisfaction with products and services, including those likely to be most immediately affected by changes in personnel levels, such as checkout waiting time.

As part of the effort to reshape the workforce, many directors of small stores (bands 1 and 2) told us they had to eliminate one managerial position. Small stores that have less than $60,000 in average monthly sales were required to reduce the number of managers to one; small stores with $60,000 to $500,000 in average monthly sales had to reduce to two managers. The managers of 15 of the 28 band 1 stores told us that they are having difficulties balancing store operations with their own managerial and administrative duties, along with doing the work of absent employees. Some store directors said they typically have to work more than 40 hours a week to perform all these duties. Eighty-seven percent of these 15 band 1 stores are open more than 40 hours a week. Because DeCA policy requires that a manager be present in the store when it is open for customers, when the second manager or an employee is absent, the on-duty manager has to carry his/her own workload and administrative functions, as well as the load of the absent manager or employee, typically working over the usual 40-hour work week. DeCA headquarters officials have recognized the concerns raised by managers in small stores and the need to balance their overall workload but have not yet developed a plan for doing so.

Overall, DeCA's customer satisfaction survey methodology is a reasonable approach to obtain customer feedback. It adheres to standard questionnaire design principles and attempts to select shoppers in an unbiased fashion. However, some improvements in the analysis of survey data could be made to provide more precise and complete customer information. For example, DeCA could adjust survey results for actual sales volumes, or report and possibly adjust for shoppers who refuse to complete the survey questionnaire. Because these factors are not considered, overall survey results could be distorted to some degree. Furthermore, the current survey does not collect information on the number of service members who do not shop at a commissary and reasons why they do not.

DeCA's current methodology appropriately attempts to obtain more survey responses from stores with higher sales volumes than stores with smaller sales volumes. DeCA places commissaries into three groups according to sales volume. It does this so that survey responses of customers in higher-volume stores receive more emphasis than those in lower-volume stores. For example, stores in the largest sales volume group are required to collect 150 responses, stores in the next largest sales group collect 100 responses, and those in the lowest sales volume group collect 50 responses. However, sales volume can vary significantly among the stores in the same group as well as between groups. A more precise methodology would entail weighting survey responses by the relative sales volume of individual stores.
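A minimal Python sketch of that weighting approach, using hypothetical store figures, shows how a sales-weighted mean can diverge from the unweighted mean that fixed sample quotas tend to produce.

# Hypothetical data: (monthly sales in dollars, mean score on the
# survey's 5-point scale) for three stores in different sales groups.
stores = [
    (9_000_000, 4.4),
    (2_500_000, 4.1),
    (400_000, 4.7),
]

total_sales = sum(sales for sales, _ in stores)

# The unweighted mean treats every store equally, roughly as fixed
# per-store quotas do.
unweighted = sum(score for _, score in stores) / len(stores)

# The sales-weighted mean gives each store influence proportional to
# its share of total sales.
weighted = sum((sales / total_sales) * score for sales, score in stores)

print(f"Unweighted mean score:     {unweighted:.2f}")  # 4.40
print(f"Sales-weighted mean score: {weighted:.2f}")    # 4.35

In this hypothetical case, the high-volume store pulls the weighted score below the unweighted one, which is the kind of over- or understatement described here.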
This approach could help DeCA avoid potential over- or underreporting of survey results and evaluate changes in survey results that may be affected by changes in sales volume. DeCA does not document the number of customers who refuse to participate in the customer satisfaction survey. DeCA officials told us that most customers selected to participate in the survey willingly respond, but they acknowledged that documenting the number of non-respondents would enhance survey reporting. Survey literature indicates that even nominally low levels of non-response can influence the interpretation of survey results. There may be a particular subgroup of customers that does not respond to the questionnaire and whose views would therefore not be reflected in DeCA’s results. By not adjusting for non-response, DeCA is assuming that respondents have satisfaction scores similar to those of non-respondents. Also, by collecting data on non-respondents, the agency may be able to determine whether the results omit customer subgroups whose opinions may be important. For example, some dependents of service personnel may not feel comfortable participating in the survey because of language barriers. DeCA does not conduct systematic assessments of the number and types of personnel who do not shop at commissaries. The customer satisfaction survey is conducted in the stores and thus reflects the views of those who shop at the commissaries; it does not capture the views of those who do not shop there. Although DeCA’s strategic plan addresses the need to attract more military personnel to the commissary, DeCA officials do not know to what extent eligible customers are not shopping at a commissary or the reasons why not. Realignment of the workforce, through greater use of part-time employees, has enabled many stores to increase their operating hours and maintain or improve customer service. However, DeCA’s strategic plan does not include specific goals for the full-time/part-time workforce mix. As a result, the share of part-time employees varies among stores and is significantly below current industry practice. Opportunities may exist to achieve even more efficiencies through greater use of part-time positions. In addition, small store directors have concerns about balancing their workload and maintaining store operations. Although DeCA’s customer satisfaction survey questionnaire is reasonable, survey results could be somewhat under- or overstated because the current methodology does not explicitly weight stores’ results by sales volume and does not collect data on non-responding customers. Finally, DeCA does not know how many eligible service members do not shop at a commissary or the reasons they do not.
We recommend that the Under Secretary of Defense (Personnel and Readiness), in consultation with the Chairman, Commissary Operating Board, require the Director, Defense Commissary Agency, to update the strategic plan to include goals that identify the percentage of the store workforce that is expected to be full- and part-time, to achieve further efficiencies from reshaping the workforce; reassess the management reductions at small stores to ensure that managers can balance their workload and maintain store operations; adjust the customer survey results on the basis of sales volume; document the number of survey non-respondents and their reasons for not completing the questionnaire; and examine potential methods and analyses to periodically determine how many eligible personnel do not shop at commissaries and why, to identify ways to improve service and increase the number of potential customers using the commissary benefit. In commenting on a draft of this report, the Under Secretary of Defense (Personnel and Readiness) concurred with four of our five recommendations and outlined actions to address them. He disagreed with our recommendation that the Defense Commissary Agency update its strategic plan to include goals that identify the percentage of the store workforce expected to be full- and part-time, expressing the view that staff in Washington should not prescribe the full-time/part-time mix for stores. The intent of our recommendation was not for the Under Secretary to prescribe the workforce mix for stores but rather to have the Defense Commissary Agency include agencywide goals on the projected workforce mix in its strategic plan to help achieve the goal of reshaping the workforce. Rather than being arbitrary or prescriptive, such goals, if based on considered research or best practices, could provide an important term of reference to guide staffing decisions at the local level to optimize organizational performance and cost-effectiveness. We continue to believe the recommendation is an appropriate one for the Defense Commissary Agency to implement. The department’s comments are reprinted in appendix II. We performed our work at DeCA headquarters at Fort Lee, Virginia, and at DeCA’s three regional offices in the continental United States (the Eastern Regional Office in Virginia Beach, Virginia; the Midwest Regional Office in San Antonio, Texas; and the Western Pacific Regional Office in Sacramento, California). Because of travel costs and time constraints, we did not do any work at the European Regional Office in Germany; however, the total number of reductions shown for DeCA does include positions in the European Region. To determine the status of DeCA’s personnel reduction plan, we obtained data from DeCA headquarters and each regional office on the number of reductions planned, by region and by store, as well as those made as of December 31, 2002. We also analyzed reduction-in-force data to determine the actual or estimated impact on store employees. We reviewed DeCA’s strategic plan to document its plans for reducing unit operating costs and reshaping the workforce. We did not validate the cost savings reported by DeCA. To determine how store operations and customer service have been affected by the personnel reductions, we interviewed officials at DeCA headquarters and the three regional offices in the United States. We also interviewed store directors at eight stores near the Eastern and Midwest regional offices.
In addition, we conducted telephone interviews with the store directors or managers of 38 band 1 and 2 stores in the continental United States (defined as stores with average monthly sales of less than $1 million), thereby covering all 41 band 1 and 2 stores in the continental United States. We also determined the planned use of part-time positions by each store in the three regions we visited. Finally, we reviewed and discussed the Commissary Customer Service Survey results for the surveys conducted in October 2001 and May and November 2002 to identify changes in the satisfaction scores as the personnel reductions were being implemented. To determine whether DeCA’s customer satisfaction survey methodology is reasonable, we reviewed DeCA’s questionnaire and methodology and compared them with standard questionnaire design and statistical sampling procedures used in industry and government research. We also interviewed the DeCA officials responsible for administering the survey about their analysis of survey results, and we observed the survey being conducted at the Fort Myer store in Virginia in November 2002. We conducted our review from July 2002 through January 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Personnel and Readiness); the Chairman, Commissary Operating Board; the Director, Defense Commissary Agency; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on GAO’s Web site at www.gao.gov and to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Michael Kennedy, Leslie Gregor, Betsy Morris, Curtis Groves, and Nancy Benco.
In response to concerns about the impact of proposed cuts in the Defense Commissary Agency's workforce, the House Armed Services Committee placed in its report on the Bob Stump National Defense Authorization Act for Fiscal Year 2003 a requirement that we evaluate the effect of the personnel reductions. Specifically, we assessed (1) the status of personnel reductions and how they have affected store operations and customer service, and (2) whether the agency uses a reliable methodology to measure customer satisfaction with its commissaries. The Defense Commissary Agency's commissary operations and customer services have been maintained at the same level, and in some cases improved, despite the recent reductions in workforce. As of December 31, 2002, the agency had completed most of its 3,047 planned personnel reductions in full-time positions. It accomplished this primarily by achieving efficiencies or eliminating vacant positions in the stores. Only 122 employees have been separated and 341 retired as a result of the personnel cutbacks. A major focus of DeCA's personnel reductions, as outlined in its strategic plan, was to reshape the workforce and develop a more efficient organization. We found that commissaries are making greater use of part-time employees because of the reductions. This has allowed some stores to increase their operating hours to better meet customer needs. It has also given store managers more flexibility in meeting workload fluctuations. However, DeCA's strategic plan does not include specific goals for achieving a certain full-time/part-time workforce mix in stores. As a result, the planned percentage of part-time positions varies widely by store. A recent customer satisfaction survey showed that commissary patrons expressed high satisfaction with their overall shopping experience, as well as with such key indicators as time waiting in line and convenient hours. However, the managers of the smaller commissaries reported concerns over balancing workload and maintaining store operations. We found that the Commissary Customer Satisfaction Survey methodology is reasonable. However, some improvements in the analysis of survey data could ensure that the findings are more complete and consistent. Such changes could include adjusting survey results for the volume of sales at individual stores or for the number of shoppers who refuse to fill out the questionnaire. Furthermore, the current survey does not collect information on the number of, and reasons why, potential customers do not shop at their local commissaries.
Within the Department of Health and Human Services (HHS), CMS is responsible for overseeing Medicaid at the federal level, while states are responsible for the day-to-day operations of their Medicaid programs. Under section 1115 of the Social Security Act, the Secretary of HHS may waive certain Medicaid requirements to allow states to implement demonstrations through which states can test and evaluate new approaches for delivering Medicaid services that, in the Secretary’s judgment, are likely to assist in promoting Medicaid objectives. Prior to the enactment of PPACA, states that wanted to expand Medicaid coverage to childless adults could do so only under a demonstration. While states may now expand their programs to cover these individuals through a state plan amendment, some states have expanded coverage under demonstrations in order to tailor coverage for this group in a manner that differs from what federal law requires. For example, states are not permitted to exclude coverage of mandatory benefits, such as NEMT, under their state plans, but they may do so by obtaining a waiver of the requirement under a demonstration. Recently, the Secretary of HHS approved various states’ demonstrations to test alternative approaches, such as allowing states to use Medicaid funds to provide newly eligible enrollees with premium assistance to purchase private health plans in their respective state marketplaces, or to exclude from coverage certain mandatory Medicaid benefits, such as NEMT, for newly eligible enrollees. CMS has required those states that have obtained approval to exclude the NEMT benefit for one year under a demonstration to submit annual evaluations of the effect of this change on access to care, which will inform the agency’s decision to approve any extension requests. Evaluations by research organizations have identified the lack of transportation as a barrier to care that can affect costs and health outcomes. For example, a survey of adults in the National Health Interview Survey found that limited transportation disproportionately affected Medicaid enrollees’ access to primary care. A study by the Transportation Research Board found that individuals who miss medical appointments due to transportation issues could potentially exacerbate diseases, thus leading to costly subsequent care, such as emergency room visits and hospitalizations. We also previously reported that there are many federal programs, including Medicaid, that provide the NEMT benefit to the transportation-disadvantaged population. However, in this work we found that coordination of NEMT programs at the federal level is limited, and there is fragmentation, overlap, and the potential for duplication across NEMT programs. As a result, individuals who rely on these programs may encounter fragmented services that are narrowly focused and difficult to navigate, possibly resulting in NEMT service gaps. Among the 30 states that expanded Medicaid as of September 30, 2015, 25 reported that they did not undertake efforts to exclude the NEMT benefit for newly eligible Medicaid enrollees and were not considering doing so. Three states reported pursuing such efforts, and two states did not respond to our inquiry, although CMS indicated that neither of these states undertook efforts to exclude the NEMT benefit. (See fig. 1.) Three states (Indiana, Iowa, and Arizona) reported undertaking efforts to exclude the NEMT benefit under a demonstration as part of a broader health care initiative to expand Medicaid in their respective states.
However, only Indiana and Iowa had received approval from HHS for these waivers as of September 30, 2015, while Arizona was still seeking approval. Indiana: Indiana’s effort to exclude the NEMT benefit from coverage pre-dates PPACA and is not specific to newly eligible enrollees under the state’s expansion. Beginning in February 2015, Indiana expanded Medicaid under a demonstration that provides two levels of coverage for newly eligible enrollees, depending on their income level and payment of premiums. As part of this demonstration, the state received approval to exclude the NEMT benefit for newly eligible enrollees. Indiana’s efforts to implement its Medicaid expansion are based, in part, on another demonstration that the state has had in place since 2008. Under this older demonstration, which provided authority for the state to offer Medicaid coverage to certain uninsured adults, NEMT was not a covered benefit for this population. Iowa: Iowa expanded Medicaid in response to PPACA through two demonstrations beginning in January 2014. Under these demonstrations, the state offers two separate programs for newly eligible enrollees: one that offers Medicaid coverage administered by the state to enrollees with incomes up to 100 percent of the FPL, and a second that offers premium assistance to purchase private coverage through the state’s health insurance marketplace for those enrollees with incomes from 100 to 133 percent of the FPL. For both of these demonstrations, the state received approval to exclude the NEMT benefit. As in Indiana, Iowa’s effort to exclude the NEMT benefit for a portion of its Medicaid population pre-dates PPACA. In July 2005, Iowa expanded Medicaid to certain populations under a demonstration with limited benefits that did not include NEMT. Arizona: When Arizona expanded Medicaid in January 2014, it had not sought to exclude the NEMT benefit for newly eligible enrollees. However, when the state submitted a request on September 30, 2015, to extend its longstanding demonstration, it sought approval to exclude the NEMT benefit. Arizona’s proposed extension would require newly eligible adults, including those with incomes from 100 to 133 percent of the FPL, to enroll in a new Medicaid program that includes enrollee contributions into an account that can be used for non-covered services and an employment incentive program. The proposed extension, including the request to exclude the NEMT benefit, was under review as of November 2015. Officials from these three states cited several reasons for their efforts to exclude the NEMT benefit, including a desire to align Medicaid benefits with private insurance plans, which typically do not cover this benefit. Indiana officials indicated that when the state initially developed its demonstration in 2008, they designed benefits for a low-income population that tended to be employed. Thus, under that demonstration they offered benefits that resembled private insurance in an effort to familiarize enrollees with private coverage. This experience largely influenced the state’s decision under its current demonstration to exclude the NEMT benefit for newly eligible enrollees. Iowa officials reported that when the state expanded Medicaid, they wanted Medicaid benefits to look like a private insurance plan—with the hope of limiting disruptions in service as fluctuations in income could result in changes to enrollees’ coverage.
While Arizona officials cited the state’s intent to align Medicaid benefits with private health insurance, they also noted that excluding the NEMT benefit would be one way to contain costs. Of the remaining 25 Medicaid expansion states, 14 offered reasons why they did not exclude the NEMT benefit for newly eligible enrollees. Officials from 8 states reported they did not pursue such efforts because they considered the NEMT benefit critical to ensuring enrollees’ access to care. Officials from an additional 4 states reported that they wanted to align benefits for the newly eligible enrollees with those offered to enrollees covered under the traditional Medicaid state plan. Officials from 2 other states reported that the newly eligible Medicaid enrollees did not significantly increase their program enrollment, and therefore, there was no need to alter this benefit. The two states that excluded the NEMT benefit are in different stages of completing required evaluations of the effect of this exclusion on access to care. Research and advocacy groups indicated that excluding the NEMT benefit could affect enrollees’ access to services and costs of coverage, and could set a precedent for the Medicaid program moving forward. The two states that obtained approval to exclude the NEMT benefit for newly eligible Medicaid enrollees—Indiana and Iowa—are at different stages of evaluating the effect this will have on enrollees and have different time frames for reporting their results. Indiana officials indicated that the state is currently working with CMS on the design of its evaluation and must submit results to CMS by February 29, 2016. According to a draft of the evaluation design, the state plans to survey enrollees and providers to compare the experiences of Medicaid enrollees with and without the NEMT benefit with respect to missed appointments, preventive care, and overall health outcomes; the state also seeks to determine whether enrollees residing in certain parts of the state are more affected by a lack of this benefit. Similarly, Iowa, which excluded the NEMT benefit for all newly eligible enrollees beginning in January 2014, was required to submit a series of independent analyses to CMS and recently received approval to continue its exclusion of this benefit until March 2016. The state conducted an analysis to determine whether newly eligible enrollees’ access to services was affected and reported its results in April 2015. Developed in close consultation with CMS, the analysis focused on the comparability of experiences of enrollees covered under the Medicaid state plan (who have the NEMT benefit) with newly eligible Medicaid expansion enrollees (who do not have the NEMT benefit). With such a focus, the analysis sought to determine whether excluding the NEMT benefit presented more of a barrier to obtaining services than an enrollee would have otherwise experienced under the state’s Medicaid state plan. Using enrollee surveys, the analysis found little difference in the barriers to care experienced by the two groups of enrollees as a result of transportation-related issues. For example, the analysis noted that about 20 percent of enrollees in both groups reported usually or always needing help from others to get to a health care appointment.
Additionally, the analysis identified comparability between both groups of enrollees in terms of their reported unmet need for transportation to medical appointments (about 12 percent of both groups) and reported worry about the ability to pay for the associated costs (13 percent of both groups). However, looking within the group of newly eligible enrollees without the NEMT benefit, the Iowa evaluation found that those with lower incomes—under 100 percent of the FPL—tended to need more transportation assistance and have more unmet needs than those with higher incomes. For example, 25 percent of newly eligible enrollees with incomes under 100 percent of the FPL reported needing help with transportation, compared with 11 percent of higher-income newly eligible enrollees; 15 percent reported an unmet need for transportation, compared with 5 percent of higher-income newly eligible enrollees; and 14 percent reported that they worried a lot about paying for transportation, compared with 6 percent of higher-income newly eligible enrollees. HHS recently approved Iowa’s amendment to continue its waiver of the NEMT benefit, although it noted concern about the lower-income enrollees’ experience. In approving the state’s request, HHS cited the need for Iowa to continue evaluating the effects of the waiver in light of survey results on the type of transportation that newly eligible enrollees reported using to get to health care appointments. These results showed that newly eligible enrollees tended to rely on others, such as family and friends, to reach health care appointments more so than Medicaid state plan enrollees. Researchers who conducted the evaluation of Iowa’s program indicated that they plan to conduct additional analyses, which include some—but not all—of the suggestions we have offered. For example, our review of Iowa’s evaluation methodology suggests that linking survey responses from both groups of enrollees directly to their claims could improve the state’s understanding of enrollees’ patterns of utilization and the implications of transportation difficulties. The researchers indicated that they will link claims data with survey responses in the next evaluation and use regression modeling to determine which group of enrollees was more likely to have an unmet need due to transportation issues. Additionally, we noted that the small sample size could limit their ability to detect differences between the enrollee groups. The researchers indicated that their next evaluation will survey a larger sample of Medicaid enrollees covered under the state plan who have the NEMT benefit and newly eligible enrollees who do not, in an effort to increase their ability to detect these differences. We agree that increasing the sample size could strengthen confidence in the results. We also noted that the researchers did not consider whether survey respondents lived in a rural or urban area, which can be important because research shows that the need to travel longer distances and the lack of public transportation in rural areas can pose challenges for individuals seeking services.
The researchers indicated that they did not stratify the groups by rural or urban areas because of a concern about inadequate sample sizes in certain counties and because the need for transportation in Iowa is not unique to the pursuit of health care services, but also poses a challenge in other aspects of residents’ lives. While stratifying survey results by rural and urban areas could be relevant in evaluating enrollees’ access to care or unmet need, the researchers do not plan to include a rural and urban stratification in the next evaluation. CMS officials recognized the value of a rural and urban distinction but indicated that there is a need to balance further analysis with the ability to generate results expeditiously and facilitate decision making on the waiver. Officials from the 10 research and advocacy groups we interviewed—which represent Medicaid enrollees, underserved populations, and health care providers—noted potential concerns about excluding the NEMT benefit related to enrollee access to services and the costs of coverage. Access to Services: Officials from 9 of the 10 groups we interviewed indicated that excluding the NEMT benefit would impede newly eligible enrollees’ ability to access health care services, particularly individuals living in rural or underserved areas, as well as those with chronic health conditions. For example, officials from one national group that represents underserved populations indicated that the enrollees affected by the lack of the NEMT benefit will be those living in rural areas who must travel long distances for medical services. Another group that represents providers in Iowa also cited the difficulty faced by enrollees living in rural areas, noting that some of the patients they served have had to cancel medical appointments because they do not have a car, money to pay for gas, or access to public transportation. With respect to enrollees with chronic health conditions, one group that represents transportation providers (and others who support mobility efforts) noted that transportation can be a major barrier for individuals who are chronically ill and need recurring access to life-saving health services. Similarly, another group that represents community health centers specified that those with mental health conditions are particularly vulnerable due to a lack of transportation. Costs of Coverage: Officials from 5 of the 10 groups we interviewed also noted that efforts to exclude the NEMT benefit can have implications for the costs of care because patients without access to transportation may forgo preventive care or health services and end up needing more expensive care, such as ambulance services or emergency room visits. For example, a national group that represents providers of services for low-income populations noted that for people who are receiving regular substance abuse treatments, missing appointments can make them vulnerable to relapsing, which ultimately drives up the cost of their care. Another national group that represents underserved populations indicated that they have seen low-income individuals who do not have a car and cannot afford public transportation use higher-cost care from emergency rooms for their medical problems because they cannot otherwise access care. One other group that represents providers noted that a lack of transportation, by driving up the cost of care, will ultimately trickle down to lower reimbursement rates for providers.
Despite these potential implications, officials from 9 of the 10 groups we interviewed acknowledged various advantages of a state expanding Medicaid even with a more limited benefit. For example, officials from 4 groups remarked that some coverage is better than no coverage in light of the significant health care needs among low-income populations. These groups recognized the political challenges that have driven state decisions whether to expand Medicaid and the concessions that are needed for an expansion to occur. For example, officials from a group that represents providers in Iowa indicated that although introducing variations in Medicaid programs adds complexity for providers, patients, and the state, flexibility is important in helping a state find a coverage solution that works in its political climate. Similarly, an advocacy group from one state acknowledged that an expansion with full traditional Medicaid benefits was never going to be achieved in that state, given the political environment. As such, groups within that state’s provider community broadly supported the state’s effort to expand Medicaid—even without the NEMT benefit—because so much of the population was uninsured and needed this coverage. However, while acknowledging their preference for states to expand Medicaid, three groups we spoke with maintained their concerns about the effects of such efforts on enrollees’ access to care. Officials from two of these groups said that improvements in the number of people covered should not be achieved by eroding essential services, while an official from the other group questioned the value of having health coverage if an enrollee is unable to get to the location where services are provided. Further, officials from six of the groups we interviewed were concerned that HHS’s approvals of state efforts to exclude the NEMT benefit potentially provide other states with an incentive to pursue similar efforts. These six groups raised concerns that every time HHS approves such an effort, a new baseline is created for what states may request in an effort to exclude core Medicaid services. We provided a draft of this report to HHS for comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Carolyn L. Yocom at (202) 512-7114 or [email protected], or Mark L. Goldstein at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the individuals named above, other key contributors to this report were Susan Anthony, Assistant Director; Julie Flowers; Sandra George; Drew Long; Peter Mann-King; JoAnn Martinez-Shriver; Bonnie Pignatiello Leer; Laurie F. Thurber; and Eric Wedum.
Medicaid, a federal-state health financing program for certain low-income individuals, offers NEMT benefits to individuals who are unable to provide their own transportation to medical appointments. This benefit can be an important safety net for program enrollees as research has identified the lack of transportation as affecting Medicaid enrollees' access to services. Under PPACA, states can opt to expand eligibility for Medicaid to certain adults. However, some states have excluded the NEMT benefit for these newly eligible enrollees by obtaining a waiver of the requirement under the authority of a Medicaid demonstration project. GAO was asked to explore state efforts to exclude the NEMT benefit for newly eligible Medicaid enrollees, and the potential implications of such efforts. This report examines (1) the extent to which states have excluded this benefit for newly eligible enrollees, and (2) the potential implications of such efforts on enrollees' access to services. GAO contacted the 30 states that expanded Medicaid under PPACA as of September 30, 2015; reviewed relevant documents and interviewed officials in the 3 states that have taken efforts to exclude the NEMT benefit; reviewed prior research on transportation for disadvantaged populations; and interviewed officials from CMS, the federal agency that oversees Medicaid, and 10 research and advocacy groups based on referrals from subject-matter experts and knowledge of the NEMT benefit. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. States' efforts to exclude nonemergency medical transportation (NEMT) benefits from enrollees who are newly eligible for Medicaid under the Patient Protection and Affordable Care Act (PPACA) are not widespread. Of the 30 states that expanded Medicaid as of September 30, 2015, 25 reported that they did not undertake efforts to exclude the NEMT benefit for newly eligible enrollees, 3 states reported pursuing such efforts, and 2 states—New Jersey and Ohio—did not respond to GAO's inquiry. However, the Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), indicated that neither New Jersey nor Ohio undertook efforts to exclude the NEMT benefit. Two of the three states pursuing efforts to exclude the NEMT benefit—Indiana and Iowa—have received waivers from CMS to exclude the benefit, and are in different stages of evaluating the effect these waivers have on enrollees' access to care. Indiana's draft evaluation design describes plans to survey enrollee and provider experiences to assess any effect from excluding the NEMT benefit. Iowa's evaluation largely found comparable access between enrollees with and without the NEMT benefit; however, it also found that newly eligible enrollees beneath the federal poverty level tended to need more transportation assistance or have more unmet needs than those with higher incomes. Officials from the groups that GAO interviewed identified potential implications of excluding the NEMT benefit, such as a decrease in enrollee access to services and an increase in the costs of coverage. For example, nearly all of the groups indicated that excluding the NEMT benefit would impede access to services, particularly for those living in rural areas, as well as those with chronic health conditions.
As the government’s lender of last resort to the nation’s farmers, FSA lends federal funds directly to farmers who are unable to obtain financing elsewhere at reasonable rates and terms. It also guarantees loans made by other agricultural lenders. The emergency disaster loan program—in operation since 1949—is one of several FSA loan programs. Emergency disaster loans cover actual physical and production losses so that farmers can return to normal farming operations. The emergency loan program consists entirely of subsidized loans, offered at interest rates below those offered by other federal farm loan programs. As of January 1996, the emergency loan interest rate was 3.75 percent, while rates for other types of loans, such as operating and farm ownership loans, were 6.5 percent and 7.0 percent, respectively. Emergency loans are made available in specific counties when disasters are declared by the President, the Secretary of Agriculture, or the FSA Administrator. Farmers may use loan funds for a variety of purposes, including land restoration, farm operating costs, family living expenses, and debt refinancing. Under the Food Security Act of 1985, applicants are ineligible for emergency loans for crop losses if crop insurance was available for crops planted or harvested after December 31, 1986, and they did not purchase it. However, in most years, the Congress has waived this requirement. Currently, the maximum emergency loan assistance available to a farmer for any single disaster is the amount necessary to restore the farm to its condition preceding the disaster, not to exceed (1) the sum of 80 percent of the production losses plus 100 percent of the physical losses not reimbursed by other sources or (2) $500,000, whichever is less. (An illustrative calculation of this cap appears below.) Farmers can qualify for additional emergency loans following subsequent disasters. There is no limit on the total amount of emergency loan debt that a farmer may accrue. Over the years, changes have occurred in the emergency disaster loan program. The program was expanded in the 1970s to achieve objectives other than recovery from actual disaster losses, such as helping farmers survive during periods of financial stress or make major adjustments in their farming operations. However, during the 1980s, restrictions were placed on the availability of emergency loans, including the $500,000 limit for each disaster. As shown in table 1, the number and total value of emergency loans have varied greatly. Problems with FSA’s farm loan portfolio are not new. In 1990, we identified FSA’s farm loan programs as one of 17 high-risk areas especially vulnerable to waste, fraud, abuse, and mismanagement. In 1992, we reported that the taxpayers’ interests were not being protected in these programs, that the agency had evolved into a source of continuous farm credit for many borrowers, and that billions of dollars in debt were being written off. Large losses continue to plague the programs. We also issued two reports that focused specifically on emergency disaster loans. In November 1987, we reported that delinquencies in the emergency disaster loan program were increasing, and we suggested that the Congress consider whether credit, particularly less restrictive credit, was the proper vehicle for providing disaster relief and whether a proper balance of risk existed between the farmer and the government.
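As a worked illustration of the maximum-assistance rule described above, the short sketch below computes the cap for a single disaster: the lesser of (1) 80 percent of production losses plus 100 percent of unreimbursed physical losses or (2) $500,000. The sketch and its dollar inputs are ours and purely hypothetical, not FSA code; the actual loan also cannot exceed the amount necessary to restore the farm to its predisaster condition.

```python
# Illustrative sketch of the statutory cap on emergency loan assistance for a
# single disaster. The function and inputs are hypothetical examples, not FSA
# code; the loan itself is further limited to the amount needed to restore
# the farm to its predisaster condition.

STATUTORY_CEILING = 500_000.0

def max_emergency_assistance(production_loss: float,
                             unreimbursed_physical_loss: float) -> float:
    """Return the cap: the lesser of the loss-based amount or $500,000."""
    loss_based_amount = 0.80 * production_loss + 1.00 * unreimbursed_physical_loss
    return min(loss_based_amount, STATUTORY_CEILING)

# Hypothetical example: $400,000 in production losses plus $250,000 in
# unreimbursed physical losses gives a loss-based amount of $570,000, so the
# $500,000 ceiling applies for this disaster.
print(max_emergency_assistance(400_000, 250_000))  # 500000.0
```

Because the cap applies per disaster, a borrower can still accumulate more than $500,000 in total emergency loan principal across multiple disasters, as discussed later in this report.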
In September 1989, we reported on the federal government’s strategies for responding to natural disasters affecting agriculture—direct cash payments, subsidized loans, and subsidized insurance (crop insurance). We concluded that crop insurance was the most effective of the three disaster assistance strategies, in part because it could minimize the government’s costs. Although the amount of the outstanding emergency loan principal and the amount owed by delinquent borrowers have declined, FSA’s emergency loan program continues to exhibit the problems that we first identified in 1987. From 1989 through 1995, the program lost over $6.1 billion through debt forgiveness. Moreover, much of the remaining portfolio is at high risk of failure. As shown in table 2, from fiscal years 1989 through 1995, FSA forgave over $6.1 billion in principal and interest held by over 35,000 borrowers who did not meet their payment obligations. In commenting on this report, agency officials noted that many of the past losses, as well as much of the risk in the current portfolio, are primarily attributable to past lending policies that have since been changed (see further discussion in the agency comments section). They also noted that the federal government’s overall exposure to risk has decreased because the size of the program has declined. Although FSA has forgiven billions of dollars in emergency loans to borrowers who have had problems repaying their debt, the portfolio continues to present a high risk of substantial losses to the government. Much of the portfolio is held by borrowers who are delinquent or who have previously had difficulty repaying emergency or other types of FSA loans. Furthermore, borrowers have already had difficulty repaying recent loans, which reflect the most current changes to the agency’s emergency lending policies and practices. As of September 30, 1995, FSA had over $3 billion in outstanding emergency loan principal owed by 42,093 borrowers. Of that amount, about $1.8 billion, or 58.6 percent, was held by borrowers who were delinquent on emergency loans, a percentage that has been fairly consistent over the last several years, as shown in table 3. Payment status alone, however, does not provide a complete measure of the potential risk associated with the portfolio. This indicator excludes borrowers who may be current on emergency loan payments but who have previously had problems repaying emergency loans or other types of FSA farm loans. These problems required FSA to restructure payment schedules or forgive debt. As of September 30, 1995, such borrowers held approximately $665 million, or 21.8 percent, of the outstanding emergency loan principal. When this principal is combined with the principal owed by delinquent borrowers, about $2.5 billion, or 80.4 percent, of the outstanding emergency loan principal is held by borrowers who are at risk. This overall picture of potential risk in the emergency loan portfolio reflects both past and present lending policies and practices. To better assess the risk associated with current loans, we examined the status of loans made during fiscal years 1992 through 1994. FSA made 6,302 emergency loans totaling $279 million to 5,753 borrowers during this period. Although these loans are relatively new—from 1 to 4 years old—repayment problems have already surfaced. Through September 30, 1995, FSA had forgiven $1.2 million in uncollectible principal and interest on 41 of these loans.
In addition, on the basis of our sample of 600 loans, we estimate that payments were delinquent on approximately 25 percent of the loans as of January 10, 1995. The likelihood that farmers will repay emergency loan debt is diminished by the nature of the loans. Emergency loans are inherently riskier than other types of farm loans because they are made to help farmers recover from losses rather than to generate new income. This problem is compounded by weak lending policies and the agency’s failure to implement existing lending requirements. Three lending policies expose the government to potential losses by allowing borrowers who have poor credit histories or are in a very weak financial position to obtain loans. First, the provisions of the Consolidated Farm and Rural Development Act, as amended (P.L. 87-128, Aug. 8, 1961), do not prohibit borrowers who received prior FSA debt forgiveness from obtaining additional farm loans, including emergency loans. We identified 293 borrowers who obtained $11.6 million in emergency loans during fiscal years 1992 through 1994 after having had about $51 million in unpaid debt forgiven. As of September 30, 1995, 27 percent of these borrowers were already experiencing problems repaying the recent emergency loans. The following example illustrates this problem: A New York vineyard owner, an FSA farm loan borrower since 1978, received a 1993 emergency loan of $9,640 for crop losses resulting from a drought. FSA made this loan after having forgiven approximately $207,000 on five other farm loans in 1990. In January 1995, the borrower was delinquent on the 1993 emergency loan. FSA classified this borrower as being unlikely to repay the loan. Second, FSA’s method for determining an applicant’s ability to repay a loan does not provide for contingencies. The current "cash flow" lending criterion requires only that an applicant’s estimated income at least equal the estimated expenses to qualify for a loan. It does not provide a cushion for any unanticipated expense that may occur during the life of the loan. On the basis of our sample of loans made during fiscal years 1992 through 1994, we estimate that FSA made about 62 percent of the emergency loans to borrowers whose anticipated income exceeded their expenses by less than 10 percent. Furthermore, about 17 percent of these borrowers attained these minimal cash flows only because FSA rescheduled or forgave debt on which they had failed to honor repayment schedules. As of September 30, 1995, approximately 37 percent of the borrowers with cash flow margins of less than 10 percent were behind on their loan payments or had required debt restructuring or forgiveness after receiving their emergency loans. For borrowers with cash flow margins of 10 percent or more, this percentage dropped to about 28 percent. The following example shows the kinds of problems that can arise when cash flow margins are minimal: A Michigan fruit producer applied for a 1992 emergency loan because of freeze damage. FSA approved the application, even though the applicant—an FSA borrower since 1985—had a poor payment history and a projected income exceeding his projected expenses by only $2. This borrower’s cash flow margin was low even after FSA rescheduled existing loans on which he could not make payments.
In 1993, the 1992 emergency loan was rescheduled because the borrower had not made payments; in 1994, the debt was forgiven; and, as of July 1995, the borrower was again delinquent on other FSA loan payments, and the county supervisor expected additional losses to the government. Finally, there is no limit on the amount of emergency loan indebtedness an individual borrower may accumulate. Although assistance for a single disaster is limited to the amount necessary to restore a farm to its predisaster condition or $500,000, whichever is less, a farmer can obtain additional emergency loans for subsequent disasters. As of September 30, 1995, 696 borrowers had each accumulated emergency loan principal in excess of $500,000, with a cumulative total in excess of $800 million. These borrowers had a higher rate of delinquency than those with less outstanding emergency loan principal. Specifically, borrowers with more than $500,000 in emergency loan principal had a delinquency rate of 82 percent, while those with $500,000 or less had a delinquency rate of about 30 percent. The following example illustrates the large emergency loan indebtedness that a borrower can accumulate and the types of repayment problems that can result: A Maryland farmer who operated a dairy and produced multiple crops had seven emergency loans with outstanding principal balances totaling approximately $850,000 as of September 30, 1995. According to an FSA county official, the borrower has been unable to repay the emergency loans on schedule, and FSA has deferred payments for 5 years. We noted that FSA compounded this problem in 1992 by providing an emergency loan that exceeded the borrower’s eligibility level for the disaster by $101,000. FSA did not reduce the borrower’s eligible losses, as required, by the amount of reimbursements received from USDA’s Federal Crop Insurance Corporation (FCIC). The Congress is considering legislation that would strengthen two of the lending policies that expose the government to risk. In February 1996, the Senate passed a bill that generally would prohibit USDA from making farm program loans, including emergency loans, to borrowers whose debts have been forgiven—a proposal similar to one made by USDA. This bill would also restrict a borrower’s outstanding emergency disaster loan principal to a maximum of $500,000. Reviews by FSA and USDA’s Office of Inspector General (OIG) noted weaknesses in FSA’s emergency lending practices. Among other things, these reviews found that FSA field officials do not always receive accurate information when determining applicants’ loan eligibility. To determine whether its field offices are complying with its lending requirements, FSA conducts Coordinated Assessment Reviews (CAR). For fiscal years 1992 through 1995, FSA completed CARs of 369 emergency loans. As shown in table 4, for at least 14 percent of the loans reviewed, FSA field offices had not verified information on the level of an applicant’s disaster loss, debt, or income before approving the loan. While FSA considers noncompliance rates of more than 15 percent to be unacceptable, any noncompliance increases the government’s financial risk. FSA officials noted, however, that in times of certain natural disasters, such as the Midwest flooding in 1993, the need to assist people quickly sometimes takes priority over following every detailed lending requirement. The OIG also found problems with lending practices.
According to December 1994 and March 1995 reports, six of seven emergency loans reviewed in Wisconsin and Illinois were overstated because they were not based on the most current and accurate information available at the time of the loan closings. Specifically, six borrowers received loans totaling about $100,000 more than they were entitled to receive because FSA approved the loans on the basis of information about the borrowers’ eligibility that FSA believed to be accurate but later found to be in error. In commenting on a draft of this report, FSA officials told us that they verified the information when they approved the loans, but the information changed before they closed the loans. Furthermore, according to these officials, their current standards do not require them to reverify information that is provided by other USDA agencies, such as the Agricultural Stabilization and Conservation Service. We have previously reported that subsidized crop insurance, compared with other forms of federal assistance such as loans and direct payments, is an efficient and equitable method of providing disaster assistance. Although we did not perform a detailed analysis of why borrowers did not obtain insurance, FSA county officials reported that most borrowers who chose not to purchase insurance did so because they did not consider coverage to be cost-effective. Our sample of loans made during fiscal years 1992 through 1994, before recent crop insurance reform legislation provided coverage at minimal cost, indicates that very few borrowers obtained insurance to protect their crops against losses resulting from natural disasters, even though insurance was frequently available. More specifically, we estimate that about 96 percent of the emergency loans made during fiscal years 1992 through 1994 covered crop losses, 4 percent covered real property losses, and 8 percent covered losses of other property, including livestock. (A single loan may cover more than one type of loss.) In most cases, crop insurance coverage was available to the borrowers either through FCIC or other sources. However, as shown in table 5, the borrowers frequently did not purchase coverage, even when both options were available. The table also shows that a smaller percentage of the borrowers rejected hazard insurance. The following example illustrates a situation in which a borrower did not obtain crop insurance: An Iowa corn and soybean farmer, whose emergency loan application showed annual nonfarm income of $34,500 and about $8,000 in cash and certificates of deposit, did not obtain either FCIC or other crop insurance, even though both were available. The farmer lost $64,090 in crops as a result of flooding in 1993 and received approximately $21,000 in USDA disaster assistance, as well as an emergency loan for $34,480 in 1994. As of January 1995, this borrower was delinquent on the emergency loan payments. FCIC coverage would have cost $1,151 and would have paid the borrower about $15,200 for the crop losses. According to FSA county officials, most borrowers did not buy insurance because they did not consider coverage to be cost-effective. The Food Security Act of 1985 makes applicants ineligible for emergency crop loss assistance if federal crop insurance was available to them and they did not purchase it for crops planted or harvested after December 31, 1986.
This eligibility requirement has had little impact, however, because it has been waived in most years by subsequent disaster legislation enacted to minimize the economic hardships that some farmers might face in the absence of federal assistance. These waivers have not been targeted to grant relief to selected types of borrowers. Rather, the waivers have been made available to all who were interested in obtaining emergency loans in a particular year. The crop insurance reform legislation enacted in 1994 may increase the use of crop insurance among those seeking USDA benefits, such as emergency loans, and decrease the availability of ad hoc disaster assistance for crop losses. The Federal Crop Insurance Reform Act of 1994, which became effective in 1995, generally conditions the receipt of USDA benefits, including emergency loans and price support benefits, upon an applicant’s having obtained at least the minimum level of crop insurance available under the act, known as catastrophic risk protection, at a cost ranging from $50 to $600 for a borrower. The 1994 act also made the passage of agricultural disaster assistance legislation more difficult. Apart from crop loss insurance, the Congress is now considering legislation that may increase the use of hazard insurance by farmers. The agricultural credit legislation passed by the Senate in 1996 would prohibit USDA from making emergency loans to farmers or ranchers unless the applicants had hazard insurance that insured their property at the time of the loss. The level of insurance needed to satisfy this requirement would be determined by the Secretary of Agriculture. FSA’s emergency loan program has lost billions of dollars in debt that has not been repaid, and it stands to lose billions more, given the characteristics of the borrowers currently holding emergency loans. The Congress is considering legislative changes whose implementation would help reduce the program’s losses. However, these changes would not correct the weaknesses stemming from FSA’s cash flow lending policy. The 1994 insurance reform legislation strengthens the requirement that farmers have insurance in order to receive federal assistance, including emergency loans. In most years, the Congress has waived similar requirements for obtaining emergency loans, reflecting its desire to assist farmers suffering from the economic consequences of natural disasters. The Congress has, historically, granted waivers to all farmers who sought loans within a given year. Continued use of this blanket type of waiver may contribute to concerns about equity. For example, borrowers who, on a one-time basis, neglected to obtain insurance, would be treated in exactly the same way as borrowers who have repeatedly chosen not to obtain insurance and have relied, instead, on federal assistance to cover their losses. We continue to believe that our 1989 recommendation to USDA to strengthen its cash flow lending policy has merit. More specifically, we recommend that the Secretary of Agriculture direct the FSA Administrator to develop regulations that improve the cash flow analyses used in loan-making decisions by incorporating an allowance to cover contingencies and the costs of replacing equipment. We recognize that recent legislation creates added incentives for borrowers to purchase crop insurance and that the Congress may consider many factors when deciding whether to waive the existing requirement for crop insurance. 
However, if the Congress decides to waive this requirement, it may wish to consider options that would more selectively target the applicants who would be eligible for the waiver and limit the amount of the loan that they could receive. These options could include (1) prohibiting borrowers who have previously been granted insurance waivers from receiving additional waivers and/or (2) reducing the amount of an emergency loan to exclude the value of the proceeds that would have been available if the borrower had chosen to purchase the required insurance. We provided copies of a draft of this report to FSA for its review and comment. In a meeting to discuss FSA’s comments, the Deputy Director for Farm Credit Programs and Farm Credit Program staff generally agreed with the report’s conclusions and recommendations. However, they believed that several additional factors should be better recognized in discussions of the loan portfolio’s risk. FSA stated that its emergency loan obligations have decreased significantly in recent years; therefore, the government’s exposure to risk has also decreased. We agree and show the decline in emergency loan obligations in table 1 of our report. FSA also noted that the past losses and much of the risk associated with the current portfolio are due to policies that are no longer in effect. We agree that the current portfolio’s problems can be linked to past policies; however, as noted in our report, not all of the policies that have contributed to these problems have been corrected. Consequently, there are still significant risks associated with the emergency loan program that could be reduced by further congressional or agency actions. Finally, FSA stated that loan repayment statistics that we developed on recent loan making indicate acceptable performance, given the agency’s role as a lender of last resort. In our view, the Congress is ultimately responsible for determining what constitutes acceptable levels of performance for these loans. Our report provides information that can help the Congress make this determination, noting, among other things, that these relatively recent loans have already shown signs of repayment problems. We incorporated other technical corrections suggested by FSA officials as appropriate. We performed our work between November 1994 and March 1996 in accordance with generally accepted government auditing standards. Our objectives, scope, and methodology are discussed in appendix I. Our methodology for sampling and analyzing data is discussed in appendix II. The emergency disaster loan survey is presented in appendix III. We are sending copies of this report to the appropriate congressional committees; interested Members of Congress; the Secretary of Agriculture; the FSA Administrator; the Director, Office of Management and Budget; and other interested parties. We will also make copies available on request. Please call me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. This review is part of a special GAO effort to address federal programs that pose a high risk of waste, abuse, and mismanagement. To gain a complete understanding of the Farm Service Agency’s (FSA) emergency loan program, we reviewed FSA’s regulations, operating instructions, and other guidance to field offices. We also interviewed officials at the agency’s headquarters in Washington, D.C., and at state and county field offices. 
We analyzed computerized databases on the status of loans and loan obligations provided by FSA’s Finance Office in St. Louis, Missouri. Additionally, we reviewed and analyzed our prior reports addressing emergency loans, reports issued by the U.S. Department of Agriculture’s (USDA) Office of Inspector General in December 1994 and March 1995, and the results of FSA’s internal control reviews. To obtain information on the characteristics of FSA’s emergency loan borrowers and the planned use of loan funds, we mailed a survey to FSA county officials requesting information about a stratified random sample of the loans made from fiscal years 1992 through 1994. Appendix II discusses our survey’s methodology and contains our estimates and sampling errors. Appendix III contains a copy of the survey used. We started our field work in November 1994 and used September 30, 1995, as a cutoff date for the financial information about FSA’s farm loan portfolio. This date allowed us to have relatively recent and comparable data on the financial status, including the losses, of FSA’s emergency disaster loan portfolio. We present the loss information in nominal (versus constant) dollars. We performed our work in accordance with generally accepted government auditing standards. To obtain data on the emergency disaster loans that FSA made from fiscal years 1992 through 1994, we obtained computerized records from FSA of the 6,302 loans obligated during this period. The emergency loans obligated during this period totaled $279 million. FSA provided the automated data from its obligations database for each year. We divided this universe into two strata: one stratum consisted of loans whose obligation amount was less than $75,000, and the other consisted of loans whose obligation amount was $75,000 or more. We conducted a nationwide mail survey to obtain detailed data on the emergency disaster loans. The survey questionnaires were mailed on February 9, 1995, to the USDA county officials through whom the emergency loans were made. Of the 600 questionnaires mailed, 589 were returned with valid responses, a response rate of 100 percent of the adjusted universe; the remaining 11 loans were dropped from the sample because their files were not available or the loans were never closed. The initial and adjusted universe and the number of responses by stratum are shown in table II.1. Our questionnaire appears in appendix III. We used the responses to the survey to project estimates for the universe of 6,302 loans. In addition, respondents supplied documentation supporting some of the critical facts in their responses to the survey. We used this documentation to verify the consistency of certain responses. When inconsistencies occurred and data were not available to determine the correct answer, we telephoned the county officials to obtain the correct information. Since we used a sample of emergency disaster loans to develop our estimates, each estimate has a measurable precision, or sampling error, which may be expressed as a plus/minus figure. A sampling error indicates how closely we can reproduce, from a sample, the results that we would obtain if we used the same measurement methods to take a complete count of the universe. By adding the sampling error to and subtracting it from the estimate, we can develop upper and lower bounds for the estimate. This range is called a confidence interval. Sampling errors and confidence intervals are stated at a certain confidence level—in this case, 95 percent.
For example, a confidence interval of 95 percent means that in 95 out of 100 instances, the sampling procedures we used would produce a confidence interval containing the universe value that we are estimating. As table II.2 indicates, most of our estimates have relatively small sampling errors of fewer than 5 percentage points. [Table II.2, which reports our estimates and their sampling errors by category (the status of payments on the loans; the types of property loss covered; and whether FCIC or other insurance coverage was available and obtained), is not reproduced here.]
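To make the stratified estimation and confidence-interval construction described above concrete, here is a minimal sketch in Python. The two-stratum split of the 6,302-loan universe and the 589 total responses come from this appendix; the per-stratum universe counts, sample sizes, means, and standard deviations below are hypothetical placeholders, not our actual survey data:

```python
import math

# Universe of 6,302 loans split into two strata by obligation amount;
# the within-stratum figures below are hypothetical.
strata = [
    # (universe size N_h, sample size n_h, sample mean, sample std. dev.)
    (5_500, 450, 30_000.0, 12_000.0),   # loans under $75,000
    (802, 139, 120_000.0, 45_000.0),    # loans of $75,000 or more
]

N = sum(N_h for N_h, *_ in strata)

# Stratified estimate of the mean loan amount: weight each stratum mean
# by its share of the universe.
est = sum(N_h / N * mean for N_h, _, mean, _ in strata)

# Variance of the stratified mean, with finite-population correction.
var = sum((N_h / N) ** 2 * (1 - n_h / N_h) * sd ** 2 / n_h
          for N_h, n_h, _, sd in strata)

half_width = 1.96 * math.sqrt(var)  # sampling error at 95 percent confidence
print(f"estimate: {est:,.0f}  95% CI: {est - half_width:,.0f} "
      f"to {est + half_width:,.0f}")
```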
[Appendix III: the survey instrument, a questionnaire mailed to USDA county offices about emergency (EM) loans made from fiscal years 1992 through 1994 by the Farmers Home Administration (FmHA), whose farm loan programs were later assigned to the Consolidated Farm Services Agency. The questionnaire’s roughly 30 questions cover the borrower’s credit relationship with FmHA; the type, designation, and frequency of the disaster; the borrower’s farming operation and financial position; the EM loan’s application and closing dates, amount, purposes, security, and payment status; the insurance coverage that was available and whether the borrower obtained it; and any other disaster assistance or compensation received. Respondents were also asked to supply copies of the Farm and Home Plan, the Certification of Disaster Losses, and the Calculation of Actual Losses used in approving and funding each loan. The form itself is not reproduced here.]

Related GAO Products

Consolidated Farm Service Agency: Update on the Farm Loan Portfolio (GAO/RCED-95-223FS, July 14, 1995).
High-Risk Series: Farm Loan Programs (GAO/HR-95-9, Feb. 1995).
Farmers Home Administration: The Guaranteed Farm Loan Program Could Be Managed More Effectively (GAO/RCED-95-9, Nov. 16, 1994).
Debt Settlements: FmHA Can Do More to Collect on Loans and Avoid Losses (GAO/RCED-95-11, Oct. 18, 1994).
Farmers Home Administration: Farm Loans to Delinquent Borrowers (GAO/RCED-94-94FS, Feb. 8, 1994).
High-Risk Series: Farmers Home Administration’s Farm Loan Programs (GAO/HR-93-1, Dec. 1992).
Farmers Home Administration: Billions of Dollars in Farm Loans Are at Risk (GAO/RCED-92-86, Apr. 3, 1992).
Disaster Assistance: Crop Insurance Can Provide Assistance More Effectively Than Other Programs (GAO/RCED-89-211, Sept. 20, 1989).
Farmers Home Administration: Sounder Loans Would Require Revised Loan-Making Criteria (GAO/RCED-89-9, Feb. 14, 1989).
Farmers Home Administration: Problems and Issues Facing the Emergency Loan Program (GAO/RCED-88-4, Nov. 30, 1987).
Federal Disaster Assistance: What Should the Policy Be? (PAD-80-39, June 16, 1980).
GAO analyzed the financial condition of the Department of Agriculture's multi-billion dollar farm loan portfolio, focusing on: (1) the factors that contribute to the financial risk associated with these loans; and (2) the extent to which farmers attempted to minimize the need for farm loans by purchasing insurance to protect their farming operations. GAO found that: (1) since 1989, the Farm Service Agency (FSA) has forgiven over $6 billion in emergency disaster farm loans and interest; (2) additional losses should be expected because 80 percent of the $3 billion in loan debt is held by borrowers who are already delinquent or have had difficulty repaying their loans; (3) weak FSA lending policies and practices add to the inherent risk of emergency loans; (4) FSA does not consistently implement lending safeguards to protect federal financial interests; and (5) few borrowers purchase insurance to protect their property, preferring to rely on government assistance.
The Trust Fund’s uncommitted balance depends on the revenues flowing into the fund and the appropriations made available from the fund for various spending accounts. The amount of revenue flowing into the Trust Fund has fluctuated from year to year but has generally trended upward, as shown in figure 2. Some of the fluctuation has resulted from changes in economic conditions, but some has been due to other factors. For example, during 1981 and 1982, revenues (including interest) flowing into the fund averaged about $629 million—the lowest amount in the fund’s history—because of a lapse in the collection of aviation taxes. In 1999, revenue flowing into the fund totaled about $11.1 billion, the largest amount in the fund’s history. However, after revenues peaked in 1999, the amount of revenue flowing into the Trust Fund decreased in each of the next 4 years, reaching a level of about $9.3 billion in 2003. A number of factors contributed to this decrease. For example, within the airline industry, the growth of the Internet as a means to sell and distribute tickets, the growth of low-cost airlines, and fare reductions by legacy carriers all transformed the industry and led to lower average fares. These lower fares have resulted in lower ticket taxes and less revenue going into the Trust Fund. In addition, in the same time period, a series of largely unforeseen events, including the September 11, 2001, terrorist attacks, war in Iraq and associated security concerns, the Severe Acute Respiratory Syndrome (SARS), and global recessions seriously affected demand for air travel, resulting in a decrease in airline industry and Trust Fund revenue. Since the beginning of 2004, however, Trust Fund revenues have been increasing. In fact, revenues from tax sources in 2005 were nearly as high as in 1999, although total revenues were still below peak level because less interest was earned due to a lower Trust Fund balance. Similar to the revenue picture, the annual amount of expenditures from the Trust Fund also has generally increased since the fund’s inception, but with some fluctuation. One source of fluctuation has been that the share of FAA operations paid by the Trust Fund has varied over time. Figure 3 shows how expenditures from the fund have changed over time and how they have compared with revenues. In some years, they have exceeded revenues, but in other years they have been less than revenues. In the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21), the Congress created a link between Trust Fund revenues and appropriations from the fund to try to ensure that all fund receipts, including interest, were committed to spending for aviation purposes on an annual basis. According to a provision of AIR-21, which was continued in the Century of Aviation Reauthorization Act (Vision 100)—FAA’s current authorizing legislation—total appropriations made available from the fund in each fiscal year shall equal the level of receipts plus interest in that year, and these appropriations can be used only for aviation investment programs, which are defined as FAA’s capital accounts plus the Trust Fund’s share of FAA operations. Further, the level of receipts was specified to be the level of excise taxes plus interest credited to the fund for a fiscal year as set forth in the President’s budget baseline projection for that year. 
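The AIR-21/Vision 100 link just described ties appropriations to forecasted receipts plus interest, while the fund’s balance moves with actual receipts; the uncommitted balance therefore falls when actual revenue comes in below forecast. A minimal sketch of that accounting identity, with all dollar figures hypothetical:

```python
def year_end_uncommitted(opening_balance, actual_revenue, appropriation):
    """The uncommitted balance grows when actual revenue exceeds the
    amounts made available from the fund and shrinks when it falls short."""
    return opening_balance + actual_revenue - appropriation

# Hypothetical year: appropriations set equal to an $11.0 billion forecast,
# but only $10.0 billion actually comes in, so the balance falls by $1.0B.
forecast, actual = 11.0, 10.0  # billions of dollars
print(year_end_uncommitted(4.8, actual, forecast))  # -> 3.8
```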
As shown in figure 4, with the exception of its first four years, the Trust Fund has ended each year with an uncommitted balance; however, the amount of the uncommitted balance has fluctuated substantially over time, generally increasing when Trust Fund revenues exceed appropriations from the fund and decreasing when they are less than appropriations. As noted in the figure, the uncommitted balance has decreased substantially in recent years. The Trust Fund’s uncommitted balance peaked at over $7 billion in 1991, 1999, and 2001. In contrast, because of lapses in the taxes that accrue to the fund, at the end of 1982, the uncommitted balance was about $2.1 billion, and at the end of 1997, it was about $1.4 billion. Specifically, the Trust Fund’s uncommitted balance decreased from $7.3 billion at the end of 2001 to $4.8 billion at the end of 2002 and has continued to decrease since then, reaching about $1.9 billion at the end of 2005. However, the rate of decrease has slowed; in 2005, the uncommitted balance decreased by about $500 million, after falling by at least $900 million in each of the previous 3 years. The uncommitted balance has fallen in recent years because Trust Fund revenues have fallen short of forecasted levels by over $1 billion in 3 out of the last 4 fiscal years. For example, in 2001, actual revenue coming into the Trust Fund was $383 million less than forecasted. In 2002, the shortfall jumped to $2.3 billion due to the impact that unanticipated external events such as the September 11, 2001, terrorist attacks had on the aviation industry. Residual effects and other factors such as the war in Iraq and the SARS outbreak lasted through 2003 and 2004, with each year’s actual revenues coming in at least $1 billion below forecasted revenues. As mentioned above, under Vision 100 and its predecessor, AIR-21, appropriations made available from the Trust Fund are based on forecasted revenues. Thus, if actual revenues approximate forecasted revenues, there should be no substantial change in the uncommitted balance. However, as shown in figure 5, for each year beginning with 2001, actual revenues, including interest, have been less than forecasted, so that in each year since then, the uncommitted balance has fallen. Based on its revenue forecast and appropriations for 2006, FAA forecasts that the Trust Fund’s uncommitted balance will decrease by the end of 2006 to about $1.7 billion. FAA forecasts that if, for 2007, the Congress continues to follow the Vision 100 formula for linking budget resources made available from the fund to expected revenues, then there will be little change in the uncommitted balance—$1.7 billion—during that year. If, instead, the Congress adopts the President’s budget request for FAA for 2007, FAA forecasts that the fund’s uncommitted balance by the end of 2007 will rise to about $2.7 billion. This higher forecasted uncommitted balance occurs because the President’s budget calls for an appropriation from the Trust Fund that is about $1 billion lower than the Vision 100 formula. In addition, compared with Vision 100, the President’s budget calls for a reduction in the appropriation to FAA from the general fund of about $500 million. Thus, in total, compared with Vision 100, the President’s budget calls for a reduction of about $1.5 billion in FAA’s appropriation. Figure 6 shows the forecasted year-end uncommitted balance under both scenarios through 2007.
While the President’s budget calls for making a smaller appropriation available from the Trust Fund than under Vision 100, largely due to reductions in the AIP, it calls for greater reliance on the Trust Fund to fund FAA’s operations. Vision 100 uses the formula created in AIR-21 to determine how much funding for FAA operations should come from the Trust Fund, but the President’s budget proposal does not use this formula. Under Vision 100, the formula makes the amount of Trust Fund revenue that will be authorized for FAA operations and RE&D in a given year equal to projected Trust Fund revenues (as specified in the President’s budget) minus the authorizations for the capital accounts (AIP and F&E) in that year. Thus, under Vision 100, the Trust Fund is projected to support $4.6 billion of FAA’s operations, or 57 percent. In contrast, the President’s budget specifies a set amount of Trust Fund revenue to be used for FAA operations. Therefore, if Congress enacts the President’s budget request for FAA, the Trust Fund would provide $5.4 billion for FAA’s operations in 2007, or 65 percent of its total estimated cost for operations. Although the Trust Fund is projected to have a surplus at the end of 2007 under each of the expenditure proposals, this projection depends to a significant extent on achieving forecasted commercial passenger traffic levels and airfares, as they have the largest impact on the amount of revenues flowing into the Trust Fund. We recognize that it is difficult to anticipate future events that may significantly affect the demand for air travel, particularly since FAA makes a forecast that is contained in the President’s budget based on information available in the first quarter of the preceding fiscal year. However, our analysis shows that for each of the last 5 years, FAA’s projected revenue forecast for the President’s budget was higher than the actual amount of revenue received, as shown in figure 5. Given the differences in recent years between the forecasted revenue and actual amount of revenue received, we conducted sensitivity analyses to estimate what would happen to the Trust Fund’s uncommitted balance if Trust Fund revenues in 2006 and 2007 fall below the levels that FAA projected in March 2006. For example, table 2 shows the projected Trust Fund balances under Vision 100 and the President’s proposal and the impact if revenues, for whatever reason, are 5 percent or 10 percent less than currently projected. If revenues are 5 percent lower than projected, which they were in 2001, the Trust Fund would have a small but positive uncommitted balance under both expenditure proposals—Vision 100 and the President’s budget proposal. However, if the revenues were 10 percent lower than projected, as they were in 2004, the uncommitted balance would drop below half a billion dollars under the President’s proposal and would fall to zero by the end of 2007 under Vision 100. We believe these scenarios raise concerns because, in the past, the Trust Fund’s uncommitted balance was used to offset lower-than-expected Trust Fund revenues and decreased general fund contributions. FAA could help address these concerns by continuing to look for ways to improve efficiency and reduce costs. However, the zero-balance scenario would most likely have implications for Congress in funding FAA programs. 
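The sensitivity analysis summarized in table 2 can be sketched along the following lines. The opening balance is the approximate year-end 2005 uncommitted balance cited earlier; the forecast and appropriation figures are illustrative placeholders chosen to reproduce the general pattern described in the text, not the actual figures underlying table 2:

```python
# Two-year sensitivity sketch: apply a uniform revenue shortfall to the
# 2006 and 2007 forecasts and roll the uncommitted balance forward.
# Forecast and appropriation figures (in $ billions) are placeholders.
forecast = {2006: 11.2, 2007: 11.8}
appropriation = {2006: 11.4, 2007: 11.8}   # e.g., a Vision 100-style link

for shortfall in (0.0, 0.05, 0.10):
    balance = 1.9  # approximate year-end 2005 uncommitted balance
    for year in (2006, 2007):
        actual = forecast[year] * (1 - shortfall)
        # The uncommitted balance cannot go below zero.
        balance = max(balance + actual - appropriation[year], 0.0)
    print(f"{shortfall:>4.0%} shortfall -> year-end 2007 balance: "
          f"${balance:.1f} billion")
```

With these placeholder inputs, a 10 percent shortfall drives the balance to zero by the end of 2007, mirroring the pattern described in the text.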
To keep the Trust Fund from declining, the Congress could use an alternate basis for authorizing and appropriating money out of the Trust Fund that does not rely on the revenue forecast in the President’s budget. One alternative that would still maintain the link between revenues and spending would be for appropriations from the Trust Fund to be based on the actual Trust Fund revenues from the most recent year for which data are available. That would mean, for example, that the Congress would appropriate for 2007 the Trust Fund revenues received in 2005. Although that would make it less likely that the Trust Fund balance would decline further, it could also mean that a smaller appropriation would be made available for aviation. Whereas Trust Fund revenues in 2005 were about $10.8 billion, the President’s budget for 2007 forecasts Trust Fund revenues of about $11.8 billion. Future policy decisions concerning spending for aviation will affect the Trust Fund balances beyond 2007. If general fund appropriations for FAA’s operations are maintained at recent levels, future projected Trust Fund revenues under the current tax structure may be insufficient to pay for the expenditures that FAA says are needed to maintain and modernize the current system. According to FAA, its aviation infrastructure is aging, and replacing it will cost $32 billion. Even more Trust Fund revenues would be needed to pay for those expenses if general fund appropriations for operations are reduced. Insufficient Trust Fund revenues could result in critically needed capacity-enhancing air traffic control modernization investments being deferred or canceled at a time when commercial activity is returning to or exceeding pre-September 11 levels. In addition to the costs projected just to maintain FAA’s current system, additional capital expenses are on the horizon to modernize the system. Vision 100 directed the administration to develop a comprehensive plan for a Next Generation Air Transportation System (NGATS) that can accommodate the changing needs of the aviation industry and meet air traffic demands by 2025. The act chartered the Joint Planning and Development Office (JPDO) within FAA to coordinate federal and private-sector research related to air transportation. FAA leads the interagency effort, which leverages expertise and resources within the Departments of Transportation, Defense, Homeland Security, and Commerce as well as at the National Aeronautics and Space Administration and the White House Office of Science and Technology Policy. The Congress appropriated $5 million in seed money to FAA in 2005 and $18 million to FAA for JPDO in 2006, while additional funding and in-kind support come from the participating agencies. For 2007, the President’s budget requests $18 million for JPDO’s critical system engineering and planning efforts for NGATS, as well as funding for two NGATS systems at a combined cost of $104 million. JPDO published the Integrated Plan for the Next Generation Air Transportation System in December 2004, but the plan did not specify what new capabilities would be pursued or how much they would cost to implement and maintain. Vision 100 also directed that an annual progress report, including any changes to the Integrated Plan, be submitted at the time of the President’s budget request. In March 2006, JPDO published its 2005 Progress Report to the Next Generation Air Transportation System Integrated Plan and reported that it is working to identify the longer-term costs.
JPDO conducted a financial analysis of the air traffic management portions of NGATS, including examining the existing 2025 operational vision, to understand the hardware and software components that may be required to implement NGATS. However, because of the high level of uncertainty in some areas and a significant number of assumptions in others, JPDO reported more work is required before this analysis can be useful and credible. A clear understanding of proposed future capabilities for NGATS (and how they will be paid for) will be important as the Congress prepares to reauthorize FAA programs and explores financing mechanisms. While FAA has made great efforts in its cost-control program, cutting costs will remain a challenge for FAA well into the future. In 2005, FAA outsourced its flight service stations to a private contractor, resulting in total savings estimated at $2.2 billion. Also in 2005, FAA put in place a number of cost-control initiatives that affected smaller programs and that, if successful, will generate smaller levels of savings. We are reviewing options to fund FAA, at the request of this subcommittee, and we will address this issue in detail later this year. Although FAA has initiated several of these cost-control measures, these initiatives alone cannot reduce expenses enough to free up sufficient Trust Fund revenues to pay for the expenditures that FAA says are necessary to maintain and modernize the current airspace system, let alone finance future NGATS initiatives. Through the reauthorization process, the Congress will determine both the level of appropriations for aviation and the way in which that commitment will be funded. Congressional decisions pertaining to the link between annual Trust Fund revenues and appropriations made available for aviation programs, as well as the method for funding the Trust Fund, will continue to influence future Trust Fund balances. To assess the current financial status and projected financial viability of the Airport and Airway Trust Fund, we obtained financial data from FAA and interviewed FAA officials familiar with the information. To assess the comparisons of Vision 100 with the President’s budget, we analyzed the legislation and the administration’s 2007 budget proposal. We used a sensitivity analysis to project what would happen if Trust Fund revenues in fiscal years 2006 and 2007 were 5 percent and 10 percent lower than the levels projected by FAA in March 2006 under each of these proposals. Accordingly, our findings on the financial outlook of the Trust Fund are based on GAO projections, not FAA’s. We performed our work in February and March 2006 in accordance with generally accepted government auditing standards. Mr. Chairman, this concludes my prepared statement. At this time, I would be pleased to answer any questions that you or other Members of the Subcommittee may have. For further information on this testimony, please contact Dr. Gerald Dillingham at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Chris Bonham, Jay Cherlow, Tammy Conquest, Colin Fallon, David Hooper, Maureen Luna-Long, Maren McAvoy, Rich Swayze, and Matt Zisman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Airport and Airway Trust Fund was established by the Airport and Airway Revenue Act of 1970 (P.L. 91-258) to help fund the development of a nationwide airport and airway system and to fund investments in air traffic control facilities. It provides all of the funding for the Federal Aviation Administration's (FAA) capital accounts, including: (1) the Airport Improvement Program, which provides grants for construction and safety projects at airports; (2) the Facilities and Equipment account, which funds technological improvements to the air traffic control system; and (3) the Research, Engineering, and Development account, which funds continued research on aviation safety, mobility, and environment issues. In addition, at various times during its history, the Trust Fund has funded all or some portion of FAA's operations. To fund these accounts, the Trust Fund is credited with revenues from a variety of excise taxes related to passenger tickets, passenger flight segments, international arrivals/departures, cargo waybills, and aviation fuels. Including interest earned on its balances, the Trust Fund received $10.8 billion in fiscal year 2005. The various taxes that accrue to the Trust Fund are scheduled to expire at the end of fiscal year 2007. GAO was asked to provide information and analysis about the financial condition and future viability of the Trust Fund. The Trust Fund's uncommitted balance decreased from $7.3 billion at the end of fiscal year 2001 to about $1.9 billion at the end of fiscal year 2005. In 3 of the last 4 fiscal years, the Trust Fund's uncommitted balance has fallen by over $1 billion because revenues were lower than FAA forecasted due to the impact of unanticipated events such as the September 11, 2001, terrorist attacks. However, the rate of decrease has slowed; during fiscal year 2005, the uncommitted balance decreased by about $500 million. Under FAA's current authorization, appropriations from the Trust Fund are based on forecasted revenues. Thus, if actual revenues approximate forecasted revenues, there should be no substantial change in the uncommitted balance. However, for each fiscal year since 2001, because actual revenues have been less than forecasted, the uncommitted balance has fallen. Based on its revenue forecast and appropriation for fiscal year 2006, FAA forecasts that the Trust Fund's uncommitted balance will fall by the end of 2006 to about $1.7 billion. If the Congress continues to follow the formula from Vision 100--FAA's current authorizing legislation that links appropriations made available from the fund to revenue forecasts--then FAA expects there will be little change in the uncommitted balance for fiscal year 2007. If, instead, the Congress adopts the President's budget for FAA for fiscal year 2007, FAA forecasts that the fund's uncommitted balance by the end of 2007 will rise to about $2.7 billion. This higher forecasted uncommitted balance occurs because the President's budget calls for an appropriation to FAA from the Trust Fund that is about $1 billion lower than the Vision 100 formula. If revenues in fiscal years 2006 and 2007 are below forecasted levels, the Trust Fund's uncommitted balance will be less than forecasted and, in one scenario we analyzed, will reach zero by the end of 2007. This scenario raises concerns because, in the past, the Trust Fund's uncommitted balance was used to offset lower-than-expected Trust Fund revenues and decreased general fund contributions. 
FAA could help address these concerns by continuing to look for ways to improve efficiency and reduce costs. However, the zero-balance scenario would most likely have implications for the Congress in funding FAA programs.
With almost 700,000 civilian employees on its payroll, DOD is the second largest federal employer of civilians in the nation, after the Postal Service. Defense civilian personnel, among other things, develop policy, provide intelligence, manage finances, and acquire and maintain weapon systems. Given the current global war on terrorism, the role of DOD’s civilian workforce is expanding, such as participation in combat support functions that free military personnel to focus on warfighting duties for which they are uniquely qualified. Civilian personnel are also key to maintaining DOD’s institutional knowledge because of frequent rotations of military personnel. However, since the end of the Cold War, the civilian workforce has undergone substantial change, due primarily to downsizing, base realignments and closures, competitive sourcing initiatives, and DOD’s changing missions. For example, between fiscal years 1989 and 2002, DOD reduced its civilian workforce by about 38 percent, with an additional reduction of about 55,000 personnel proposed through fiscal year 2007. Some DOD officials have expressed concern about a possible shortfall of critical skills because downsizing has resulted in a significant imbalance in the shape, skills, and experience of its civilian workforce while more than 50 percent of the civilian workforce will become eligible to retire in the next 5 years. As a result, the orderly transfer of DOD’s institutional knowledge is at risk. These factors, coupled with the Secretary of Defense’s significant transformation initiatives, make it imperative for DOD to strategically manage its civilian workforce based on a total force perspective which includes civilian personnel as well as active duty and reserve military personnel and contractor personnel. This strategic management approach will enable DOD to accomplish its mission by putting the right people in the right place at the right time and at a reasonable cost. NSPS is intended to be a major component of DOD’s efforts to more strategically manage its workforce and respond to current and emerging challenges. This morning I will highlight several of the key provisions of NSPS that in our view are most in need of close scrutiny as Congress considers the DOD proposal. The DOD proposal would allow the Secretary of Defense to jointly prescribe regulations with the Director of the Office of Personnel Management (OPM) to establish a flexible and contemporary human resources management system for DOD—NSPS. The joint issuance of regulations is similar to that set forth in the Homeland Security Act of 2002 between the Secretary of Homeland Security and the Director of OPM for the development of the Department of Homeland Security (DHS) human resources management system. However, unlike the legislation creating DHS, the Defense Transformation for the 21st Century Act would allow the Secretary of Defense to waive the requirement for joint issuance of regulations if, in his or her judgment, it is “essential to the national security”—which is not defined in the act. While the act specifies a number of key provisions of Title 5 that shall not be altered or waived, including those concerning veterans’ preference, merit protections, and safeguards against discrimination and prohibited personnel practices, the act nonetheless would, in substance, provide the Secretary of Defense with significant independent authority to develop a separate and largely autonomous human capital system for DOD. 
The DOD proposal also has significant potential implications for governmentwide human capital policies and procedures and for OPM as the President’s agent and advisor for human capital matters and overseer of federal human capital management activities. In essence, the act would allow for the development of a personnel system for the second largest segment of the federal workforce that is not necessarily within the control or even direct influence of OPM. To strike a better balance between reasonable management flexibility and the need for a reasonable degree of consistency and adequate safeguards to prevent abuse throughout the government, Congress should consider making these provisions of the Defense Transformation for the 21st Century Act consistent with the Homeland Security Act of 2002, or at a minimum, providing some statutory guidance on what would constitute a situation “essential to the national security” that would warrant the Secretary of Defense acting independently of the Director of OPM. DOD states that it needs a human capital management system that provides new and increased flexibility in the way it assesses and compensates its employees, and toward that end, we understand that in implementing NSPS, DOD plans to strengthen its performance appraisal systems and implement pay banding approaches as core components of any new DOD human capital system. We have had long and successful experience in using pay banding with our analyst staff as a result of the GAO Personnel Act of 1980. Certain DOD components have had a number of years of experience with pay banding through OPM’s personnel demonstration projects, authorized by the Civil Service Reform Act of 1978 to test and introduce beneficial change in governmentwide human resources management systems. For example, in 1980, the Navy personnel demonstration project, commonly referred to as the China Lake demonstration project, implemented a number of reforms, including pay banding and a pay for performance system. More recently, the Civilian Acquisition Workforce personnel demonstration project (AcqDemo) was implemented in 1999 and created a pay banding system that covers part of DOD’s civilian acquisition, technology, and logistics workforce. The expected results of AcqDemo’s pay banding system include increased flexibility to assign employees as well as increased pay potential and satisfaction with advancement for employees. According to agency officials, an evaluation of AcqDemo’s progress is scheduled to be provided to OPM this June. Lastly, DOD’s science and technology reinvention laboratory demonstration projects all implemented some form of pay banding and pay for performance. OPM reports that these reinvention laboratory demonstration projects have been able to offer more competitive starting salaries. Additionally, at some labs, turnover was significantly lower among highly rated employees and higher among employees with lower ratings. DOD’s demonstration projects clearly provide helpful insights and valuable lessons learned in connection with broad banding and pay for performance efforts. At the same time, these projects and related DOD efforts involve less than 10 percent of DOD’s civilian workforce, and expanding these concepts to the entire department will require significant effort and will likely need to be implemented in phases over several years. As you know, there is growing agreement on the need to better link individual pay to performance.
Establishing such linkages is essential if we expect to maximize the performance and assure the accountability of the federal government for the benefit of the American people. As a result, from a conceptual standpoint, we strongly support the need to expand broad banding approaches and pay for performance-based systems in the federal government. However, moving too quickly or prematurely at DOD or elsewhere can significantly raise the risk of doing it wrong. This could also serve to severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay to performance across the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether or not such efforts are successful. In our view, one key need is to modernize performance management systems in executive agencies so that they are capable of adequately supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO’s past work, most existing federal performance appraisal systems, including a vast majority of DOD’s systems, are not designed to support a meaningful performance-based pay system. The bottom line is that in order to receive any additional performance-based pay flexibility for broad-based employee groups, agencies should have to demonstrate that they have modern, effective, credible, and, as appropriate, validated performance management systems in place with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure fairness and prevent politicization and abuse. At your request, Madam Chairwoman, and that of Senator Voinovich, we identified key practices that leading public sector organizations both here in the United States and abroad have used in their performance management systems to link organizational goals to individual performance and create a “line of sight” between an individual’s activities and organizational results. These practices can help agencies develop and implement performance management systems with the attributes necessary to effectively support pay for performance. More specifically, Congress should consider establishing statutory standards that an agency must have in place before it can implement broad banding or a more performance-based pay program. At the request of Congressman Danny Davis, we developed an initial list of possible safeguards to help ensure that any additional flexibility Congress may grant for expanding pay for performance management systems in the government is fair, effective, and credible. We provided an initial list to Congressman Davis late last week. This initial list of safeguards was developed based on our extensive body of work looking at the performance management practices used by leading public sector organizations both in the United States and in other countries, as well as our own experiences at GAO in implementing a modern performance management system for our own staff. We believe that the following could provide a starting point for developing a set of statutory safeguards in connection with any additional efforts to expand pay for performance systems:
- Assure that the agency’s performance management systems (1) link to the agency’s strategic plan, related goals, and desired outcomes and (2) result in meaningful distinctions in individual employee performance, including consideration of critical competencies and achievement of concrete results.
- Involve employees, their representatives, and other stakeholders in the design of the system, including having employees directly involved in validating any related competencies, as appropriate.
- Assure that certain predecisional internal safeguards exist to help achieve the consistency, equity, nondiscrimination, and nonpoliticization of the performance management process (e.g., independent reasonableness reviews by Human Capital Offices and/or Offices of Opportunity and Inclusiveness or their equivalent in connection with the establishment and implementation of a performance appraisal system, as well as reviews of performance rating decisions, pay determinations, and promotion actions before they are finalized to ensure that they are merit-based; internal grievance processes to address employee complaints; and pay panels whose membership is predominately made up of career officials who would consider the results of the performance appraisal process and other information in connection with final pay decisions).
- Assure reasonable transparency and appropriate accountability mechanisms in connection with the results of the performance management process (e.g., publish overall results of performance management and pay decisions while protecting individual confidentiality, and report periodically on internal assessments and employee survey results).

The above items should help serve as a starting point for Congress to consider in crafting possible statutory safeguards for executive agencies’ performance management systems. OPM would then issue guidance implementing the legislatively defined safeguards. The effort to develop such safeguards could be part of a broad-based expanded pay for performance authority under which whole agencies and/or employee groups could adopt broad banding and move to more pay for performance oriented systems if certain conditions are met. Specifically, the agency would have to demonstrate, and OPM would have to certify, that a modern, effective, credible, and, as appropriate, validated performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, is in place to support more performance-based pay and related personnel decisions before the agency could implement a new system. In this regard, OPM should consider adopting class exemption approaches, and OPM should be required to act on any individual certifications within prescribed time frames (e.g., 30-60 days). This approach would allow for a broader-based yet more conceptually consistent approach in this critical area. It would also facilitate a phased implementation approach throughout government. The list is not intended to cover all the attributes of a modern, results-oriented performance management system. Rather, the items on the list cover possible safeguards for performance management systems to help ensure those systems are fair, effective, and credible. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funds to modernize their performance management systems and ensure those systems have adequate safeguards to prevent abuse. This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding fragmentation within the executive branch in the critical human capital area.
The Senior Executive Service (SES) needs to lead the way in the federal government’s effort to better link pay to performance. We have reported that there are significant opportunities to strengthen efforts to hold senior executives accountable for results. In particular, more progress is needed in explicitly linking senior executive expectations for performance to results-oriented organizational goals and desired outcomes, fostering the necessary collaboration both within and across organizational boundaries to achieve results, and demonstrating a commitment to lead and facilitate change. These expectations for senior executives will be critical to keep agencies focused on transforming their cultures to be more results-oriented, less hierarchical, more integrated, and externally focused, and thereby be better positioned to respond to emerging internal and external challenges, improve their performance, and assure their accountability. Given the state of agencies’ performance management systems, Congress should consider starting federal results-oriented pay reform with the SES. In that regard, and similar to the Homeland Security Act, the proposed NSPS would increase the current total allowable annual compensation limit for senior executives up to the Vice President’s total annual compensation. However, the Homeland Security Act provides that OPM, with the concurrence of the Office of Management and Budget, certify that agencies have performance appraisal systems that, as designed and applied, make meaningful distinctions based on relative performance. NSPS does not include such a certification provision. Congress should consider requiring OPM to certify that the DOD SES performance management system makes meaningful distinctions in performance and employs the other practices used by leading organizations to develop effective performance management systems that I mentioned earlier, before DOD could increase the annual compensation limit for senior executives. The proposed Defense Transformation for the 21st Century Act includes provisions intended to ensure collaboration with employee representatives in the planning, development, and implementation of a human resources management system. For example, employee representatives are to be given the opportunity to review and make recommendations on the proposed NSPS. The Secretary of Defense and the Director of OPM are to provide employee representatives with a written description of the proposed system, give these representatives at least 30 calendar days to review and make recommendations on the proposal, and fully and fairly consider each recommendation. DOD may immediately implement the parts of the proposed system that did not receive recommendations or for which it chose to accept the employee representatives’ recommendations. While these provisions are designed to help assure that employees and their authorized representatives play a meaningful role in the design and implementation of any new human capital system, DOD does not have a good track record in reaching out to key stakeholders. In fact, it is my understanding that neither DOD employees nor their authorized representatives played a meaningful role in connection with the design of the legislative proposal that is the subject of this hearing.
For the recommendations from the employee representatives that the Secretary and the Director do not accept, the Secretary and the Director are to notify Congress and meet and confer with employee representatives in an attempt to reach agreement on how to proceed with these recommendations. If an agreement has not been reached after 30 days, and the Secretary determines that further consultation with employee representatives will not produce agreement, the Secretary may implement any or all parts of the proposal, including any modifications made in response to the recommendations. The Secretary is to notify Congress of the implementation of any part of the proposal, any changes made to the proposal as a result of recommendations from the employee representatives, and the reasons why implementation is appropriate. Although the procedures called for in the DOD proposal are similar to those enacted in the Homeland Security Act, the latter states explicitly the intent of Congress on the importance of allowing employees to participate in a meaningful way in the creation of any human resources management system affecting them. To underscore the importance that Congress places on employee involvement in the development and implementation of NSPS, Congress should consider including language similar to that found in the Homeland Security Act. More generally, and aside from the specific statutory provisions on consultation, the active involvement of employees will be critical to the success of NSPS. We have reported that the involvement of employees, both directly and indirectly, is crucial to the success of new initiatives, including implementing a pay for performance system. High-performing organizations have found that actively involving employees and stakeholders, such as unions or other employee associations, when developing results-oriented performance management systems helps improve employees’ confidence and belief in the fairness of the system and increases their understanding and ownership of organizational goals and objectives. This involvement must be early, active, and continuing if employees are to gain a sense of understanding and ownership of the changes that are being made. The legislation has a number of provisions designed to give DOD flexibility to help obtain key critical talent. Specifically, it allows DOD greater flexibility to (1) augment the use of temporary appointment authorities, (2) hire experts and consultants and pay them special rates, (3) define benefits for overseas employees, and (4) enter into personal services contracts for experts and consultants for national security missions, including for service outside of the United States. For example, the Secretary would have the authority to establish a program to attract highly qualified experts in needed occupations with the flexibility to establish the rate of pay, eligibility for additional payments, and terms of the appointment. These authorities give DOD considerable flexibility to obtain and compensate individuals and exempt them from several provisions of current law. While we have strongly endorsed providing agencies with additional tools and flexibilities to attract and retain needed talent, the broad exemption from some existing ethics and other personnel authorities without prescribed limits on their use raises some concern.
Accordingly, Congress should consider placing numerical or percentage limitations on the use of these provisions or otherwise specifically outline basic safeguards to ensure such provisions are used appropriately. The proposed Defense Transformation for the 21st Century Act would provide the Secretary with a number of broad authorities related to rightsizing and organizational alignment. These include authorizing the Secretary to restructure or reduce the workforce by establishing programs using voluntary early retirement eligibility and separation payments, or both. In addition, the Secretary would be allowed to appoint U.S. citizens who are at least 55 years of age to the excepted service for a period of 2 years, with a possible 2-year extension, subject only to certain provisions preventing displacement of current employees. The proposal also provides that annuitants who receive an annuity from the Civil Service Retirement and Disability Fund and become employed in a position within the Department of Defense shall continue to receive their unreduced annuity. This and selected other NSPS provisions will clearly have incremental budget implications for which we have not seen any related cost estimate. Furthermore, this and other selected NSPS provisions would create an unlevel playing field for experienced talent within the civilian workforce. Authorities such as voluntary early retirements have proven to be effective tools in strategically managing the shape of the workforce. I have exercised the authority that Congress granted me to offer voluntary early retirements in GAO in both fiscal years 2002 and 2003 as one element of our strategy to shape the GAO workforce. However, given DOD's past efforts in using existing rightsizing tools, there is reason to be concerned that DOD may struggle to effectively manage additional authorities that may be provided. While DOD has used existing authorities in the past to mitigate the adverse effects of force reductions, the approach to reductions was not oriented toward strategically shaping the makeup of the workforce. We have previously reported that the net effect of this lack of attention to workforce shaping is a civilian workforce that is not balanced by age or experience, which puts at risk the orderly transfer of institutional knowledge. DOD thus may be challenged in using new authorities in a cohesive, integrated way that supports achieving mission results, absent a comprehensive and integrated human capital strategy and workforce plan. In the past, OPM has managed its authority to reemploy an annuitant with no reduction in annuity on a case-by-case basis. The NSPS proposal, which broadly grants such treatment, raises basic questions about the intent and design of federal benefits and the total compensation of federal employees and removes the need for an effective DOD partnership with OPM in prescribing the use of this authority. As noted previously, providing such authority only to DOD would give it a competitive advantage in the marketplace and place other agencies at a disadvantage. It would also involve incremental costs that have yet to be estimated. Flexible approaches to shaping the workforce, such as 2-year excepted service appointments, may be helpful in avoiding long-term commitments for short-term requirements, addressing transition gaps, and smoothing outsourcing strategies.
At the same time, these authorities represent tools that are not effective on their own; rather, they are elements that need to be developed into an effective strategy and aligned with program goals and missions. The legislation could also allow DOD to revise Reduction-in-Force (RIF) rules to place greater emphasis on an employee's performance. DOD has indicated that it will be considering, for DOD-wide application, personnel practices that were identified in the April 2, 2003, Federal Register notice. This notice describes revised RIF procedures that change the order in which employees would be retained under a RIF order. Specifically, employees could be placed on a retention list in the following order: type of employment (i.e., permanent, temporary), level of performance, and veterans' preference eligibility (disabled veterans will be given additional priority), which we note would lower the priority that veterans' preference currently receives. While we conceptually support revised RIF procedures that involve much greater consideration of an employee's performance, as I pointed out above, agencies must have modern, effective, and credible performance management systems in place to properly implement such authorities. The proposed NSPS would allow the Secretary, after consultation with the Merit Systems Protection Board (MSPB), to prescribe regulations providing fair treatment in any appeals brought by DOD employees relating to their employment. The proposal states that the appeals procedures shall ensure due process protections and expeditious handling, to the maximum extent possible. In this regard, the proposal provides that presently applicable appeals procedures should only be modified insofar as such modifications are designed to further the fair, efficient, and expeditious resolution of matters involving DOD employees. This provision is substantially the same as a similar provision in the Homeland Security Act of 2002 allowing DHS to prescribe regulations for employee appeals related to their employment. As required of the Secretary of DHS, the Secretary of Defense would be required to consult with MSPB prior to issuing regulations. However, neither the Homeland Security Act nor the proposed legislation expressly requires that employee appeals be heard and decided by the MSPB. There is also no express provision for judicial review of employee appeals decisions. Given the transparency of the federal dispute resolution system and its attendant case law, the rights and obligations of the various parties involved are well developed. It is critical that any due process changes that are implemented after consultation with MSPB result in dispute resolution processes that are not only fair and efficient but, as importantly, minimize any possible perception of unfairness. The critical need for an institutional infrastructure to develop and support change has been a consistent theme raised throughout the observations I have been providing on some of the specific aspects of the proposed NSPS.
This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the department's human capital policies, strategies, and programs with DOD's mission, goals, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair and merit-based implementation and application of a new system. Quite simply, in the absence of the right institutional infrastructure, granting additional human capital authorities will provide little advantage and could actually end up doing damage if the new flexibilities are not implemented properly. Our work looking at DOD's strategic human capital planning efforts and our work looking across the federal government at the use of human capital flexibilities and related human capital efforts underscore the critical steps that DOD needs to take to properly develop and effectively implement any new personnel authorities. Our work here and abroad has consistently demonstrated that leading organizations align their human capital approaches, policies, strategies, and programs with their mission and programmatic goals. Human capital plans that are aligned with mission and program goals integrate the achievement of human capital objectives with the agency's strategic and program goals. Careful and thoughtful human capital planning efforts are critical to making intelligent competitive sourcing decisions. The Commercial Activities Panel, which I was privileged to chair, called for federal sourcing policy to be "consistent with human capital practices designed to attract, motivate, retain, and reward a high performing workforce" and highlighted a number of human capital approaches to help achieve that objective. In April 2002, DOD published a strategic plan for civilian personnel. However, as we reported in March 2003, top-level leadership at the department and component levels was not, until recently, extensively involved in strategic planning for civilian personnel, although civilian personnel issues appear to be a higher priority for top-level leaders today than in the past. Although DOD began downsizing its civilian workforce more than a decade ago, top-level leadership has not, until recently, developed and directed reforms to improve planning for civilian personnel. With the exception of the Army and the Air Force, neither the department nor the components in our March review had developed strategic plans to address challenges affecting the civilian workforce until 2001 or 2002, an indication that civilian personnel issues are an emerging priority. In addition, we reported that top-level leaders in the Air Force, the Marine Corps, the Defense Contract Management Agency, and the Defense Finance and Accounting Service have been or are working in partnership with their civilian human capital professionals to develop and implement civilian strategic plans; such partnership is increasing in the Army but is not as evident in the Navy. Moreover, DOD's issuance of its departmentwide civilian human capital plan begins to lay a foundation for strategically addressing civilian human capital issues; however, DOD has not provided guidance on aligning the component-level plans with the department-level plan to obtain a coordinated focus to carry out the Secretary of Defense's transformation initiatives effectively.
High-level leadership attention is critical to developing and directing reforms because, without the overarching perspective of such leaders as Chief Operating Officers and Chief Human Capital Officers, reforms may not be sufficiently focused on mission accomplishment, and without their support, reforms may not receive the resources needed for successful implementation. We have previously reported that the concept of a Chief Operating Officer (COO) could offer the leadership to help elevate attention to key management issues and transformational change, integrate these various efforts, and institutionalize accountability for addressing management issues and leading transformational change both within and between administrations. In our view, DOD is a prime candidate to adopt this COO concept. In addition, if Congress provides DOD with many of the flexibilities it is seeking under the NSPS, the basis for adding a COO position at DOD would be even stronger. Despite the progress that has been made recently, the DOD human capital strategic plans we reviewed, for the most part, were not fully aligned with the overall mission of the department or respective components, results-oriented, or based on data about the future civilian workforce. For example, the goals and objectives contained in strategic plans for civilian personnel were not explicitly aligned with the overarching missions of the respective organizations. Consequently, it is difficult to determine whether DOD's and the components' strategic goals are properly focused on mission achievement. In addition, none of the plans contained results-oriented performance measures that could provide meaningful data critical to measuring the results of their civilian human capital initiatives (i.e., programs, policies, and processes). Thus, DOD and the components cannot gauge the extent to which their human capital initiatives contribute to achieving their organizations' mission. Also, for the most part, the civilian human capital plans in our review did not contain detailed information on the skills and competencies needed to successfully accomplish future missions. Without information about what is needed in the future workforce, it is unclear if DOD and its components are designing and funding initiatives that are efficient and effective in accomplishing the mission, and ultimately contributing to force readiness. Lastly, the DOD civilian strategic plans we reviewed did not address how the civilian workforce will be integrated with their military counterparts or with sourcing initiatives. At the department level, the strategic plan for civilian personnel was prepared separately from corresponding plans for military personnel rather than integrated to form a seamless and comprehensive strategy, and it did not address how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. For the most part, at the component level, the plans set goals to integrate planning for the total workforce, to include civilian, military, and contractor personnel. The Air Force and the Army, in particular, have begun to integrate their strategic planning efforts for civilian and military personnel, also taking contractor responsibilities into consideration. Without integrated planning, goals for shaping and deploying civilian, military, and contractor personnel may not be consistent with or support each other.
Consequently, DOD and its components may not have the workforce with the skills and competencies needed to accomplish tasks critical to assuring readiness and achieving mission success. In our March report, we recommended, among other things, that DOD improve future revisions and updates to the departmentwide strategic human capital plan by more explicitly aligning its elements with DOD's overarching mission, including performance measures, and focusing on future workforce needs. DOD only partially concurred with our recommendation and, by way of explanation, stated that the recommendation did not recognize the involvement and impact of DOD's Quadrennial Defense Review in the development of the departmentwide plan. We also recommended that DOD develop a departmentwide human capital strategic plan that integrates both military and civilian workforces and takes into account contractor roles and sourcing initiatives. DOD did not concur with this recommendation, stating that it has both a military and a civilian plan and that the use of contractors is just another tool to accomplish the mission, not a separate workforce with separate needs to manage. The intent of our recommendation is not to say that DOD has a direct responsibility to manage contractor employees, but rather to recognize that strategic planning for the civilian workforce should be undertaken in the context of the total force—civilian, military, and contractors—since the three workforces need to perform their responsibilities in a seamless manner to accomplish DOD's mission. In commenting on our recommendations, the Under Secretary of Defense for Personnel and Readiness stated that DOD is in the early stages of its strategic planning efforts. We recognize this and believe that our recommendations represent opportunities to strengthen its developing planning efforts. Our work has identified a set of key practices that appear to be central to the effective use of human capital authorities. These practices, which are shown in figure 1, center on effective planning and targeted investments, involvement and training, and accountability and cultural change. Congress should consider the extent to which an agency is capable of employing these practices before additional human capital flexibilities are implemented. In the context of NSPS, Congress should consider whether and to what extent DOD is using those practices. I have discussed throughout my statement today the importance of moving to a new human capital system that provides reasonable management flexibility along with adequate safeguards, reasonable transparency, and appropriate accountability mechanisms to prevent abuse of employees. In addition to the suggestions made above, Congress should consider requiring DOD to fully track and periodically report on its performance. This requirement would be fully consistent with the requirements contained in our calendar year 2000 human capital legislation, which required us to comprehensively assess our use of the authorities granted to us under the act. More generally, Congress should consider requiring DOD to undertake evaluations that are broadly modeled on the evaluation requirements of OPM's personnel demonstration program.
Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, costs and benefits, impacts on veterans and other EEO groups, adherence to merit principles, and the extent to which the lessons from the project can be applied elsewhere, including governmentwide. This evaluation and reporting requirement would facilitate congressional oversight of NSPS, allow for any mid-course corrections in its implementation, and serve as a tool for documenting best practices and sharing lessons learned with employees, stakeholders, other federal agencies, and the public. DOD has stated that it would continue its evaluation of the science and technology reinvention laboratory demonstration projects when they are integrated under a single human capital framework. In summary, DOD's civilian human capital proposals raise several critical questions. Should DOD and/or other federal agencies be granted broad-based exemptions from existing law, and if so, on what basis? Does DOD have the institutional infrastructure in place to make effective use of the new authorities? This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency's human capital policies, strategies, and programs with its program goals and mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and credible implementation and application of a new system. Many of the basic principles underlying DOD's civilian human capital proposals have merit and deserve the serious consideration they are receiving here today and will no doubt receive from others in the coming weeks and months. However, these same critical questions should be posed to the DOD proposal. In particular, Congress and DOD should carefully assess whether broad-based exemptions from existing law are warranted and the degree to which DOD has the institutional infrastructure in place to make effective use of the new authorities it is seeking. Our work has shown that while progress has been and is being made, additional efforts are needed by DOD to integrate its human capital planning process with the department's program goals and mission. The practices that have been shown to be critical to the effective use of flexibilities provide a validated roadmap for DOD and Congress to consider. Finally, as I have pointed out in several key areas, Congress should consider, if the authorities are granted, establishing additional safeguards to ensure the fair, merit-based, transparent, and accountable implementation and application of NSPS. In our view, Congress should consider providing governmentwide broad banding and pay for performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, which can be certified to by a qualified and independent party, such as OPM. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funds to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse.
This would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding further fragmentation within the executive branch in the critical human capital area. This morning, I have offered some preliminary observations on aspects of the proposal. However, these preliminary observations have not included some serious concerns I have with other sections of the proposed legislation that go beyond the civilian personnel proposal. My observations have included suggestions for how Congress can help DOD effectively address its human capital challenges and ensure that NSPS is designed and implemented in an effective, efficient, and fair manner that meets the current and future needs of DOD, its employees, and the American people. Human capital reform at DOD obviously has important implications for national security. Given the massive size of DOD and the nature and scope of the changes being considered, such reform also has important precedent-setting implications for federal human capital management generally and should be considered in that context. We look forward to continuing to support Congress and to work with DOD in addressing the vital transformation challenges it faces. Madam Chairwoman and Mr. Davis, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information on human capital issues at DOD, please contact Derek Stewart, Director, Defense Capabilities and Management, on (202) 512-5559 or at [email protected]. For further information on governmentwide human capital issues, please contact J. Christopher Mihm, Director, Strategic Issues, on (202) 512-6806 or at [email protected]. Individuals making key contributions to this testimony included William Doherty, Clifton G. Douglas, Jr., Christine Fossett, Bruce Goddard, Judith Kordahl, Janice Lichty, Bob Lilly, Lisa Shames, Ellen Rubin, Edward H. Stephenson, Jr., Tiffany Tanner, Marti Tracy, and Michael Volpe.
DOD is in the midst of a major transformation effort, including a number of initiatives to transform its forces and improve its business operations. DOD's legislative initiative would provide for major changes in civilian and military human capital management, make major adjustments in the DOD acquisition process, affect DOD's organizational structure, and change DOD's reporting requirements to Congress, among other things. DOD's proposed National Security Personnel System (NSPS) would provide for wide-ranging changes in DOD's civilian personnel pay and performance management, collective bargaining, rightsizing, and a variety of other human capital areas. The NSPS would enable DOD to develop and implement a consistent DOD-wide civilian personnel system. This testimony provides GAO's preliminary observations on aspects of DOD's legislative proposal to make changes to its civilian personnel system and poses critical questions that need to be considered. Many of the basic principles underlying DOD's civilian human capital proposals have merit and deserve serious consideration. The federal personnel system is clearly broken in critical respects—designed for a time and workforce of an earlier era and not able to meet the needs and challenges of our current rapidly changing and knowledge-based environment. DOD's proposal recognizes that, as GAO has stated and as the experiences of leading public sector organizations here and abroad have shown, strategic human capital management must be the centerpiece of any serious government transformation effort. More generally, from a conceptual standpoint, GAO strongly supports the need to expand broad banding and pay for performance-based systems in the federal government. However, moving too quickly or prematurely, at DOD or elsewhere, can significantly raise the risk of doing it wrong. This could also serve to severely set back the legitimate need to move to a more performance- and results-based system for the federal government as a whole. Thus, while it is imperative that we take steps to better link employee pay and other personnel decisions to performance across the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether or not we are successful. In our view, one key need is to modernize performance management systems in executive agencies so that they are capable of supporting more performance-based pay and other personnel decisions. Unfortunately, based on GAO's past work, most existing federal performance appraisal systems, including the vast majority of DOD's systems, are not currently designed to support a meaningful performance-based pay system. The critical questions to consider are whether DOD and/or other agencies should be granted broad-based exemptions from existing law, and if so, on what basis, and whether they have the institutional infrastructure in place to make effective use of the new authorities. This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency's human capital policies, strategies, and programs with its program goals and mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and credible implementation of a new system.
In our view, Congress should consider providing governmentwide broad banding and pay for performance authorities that DOD and other federal agencies can use provided they can demonstrate that they have a performance management system in place that meets certain statutory standards, which can be certified to by a qualified and independent party, such as OPM, within prescribed timeframes. Congress should also consider establishing a governmentwide fund whereby agencies, based on a sound business case, could apply for funding to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. This approach would serve as a positive step to promote high-performing organizations throughout the federal government while avoiding fragmentation within the executive branch in the critical human capital area.
PPACA provided for additional health care coverage options for millions of lower-income individuals through the expansion of eligibility for the Medicaid program and the creation of health insurance exchanges where eligible individuals can qualify for federal subsidies to purchase private health insurance coverage. Medicaid. Medicaid is a joint federal-state program that provides health care coverage for certain low-income individuals. Under federal law, states historically have been required to cover certain categories of individuals under Medicaid (mandatory populations) with the flexibility to extend coverage to other defined groups (optional populations). PPACA provides for states to expand Medicaid coverage to most nonpregnant, nonelderly individuals with income that does not exceed 133 percent of the federal poverty level (FPL) beginning no later than January 1, 2014. The federal government will pay the full cost of covering newly eligible enrollees until 2017, at which point the federal share will begin to gradually decline to 90 percent by 2020. American Health Benefit Exchanges. These are marketplaces to be established by January 2014 to facilitate the purchase of private insurance coverage. Individuals obtaining insurance through the exchanges may qualify for federal subsidies (hereafter referred to as exchange subsidies) in the form of premium tax credits and cost-sharing reductions. In order to qualify for the premium tax credit, individuals or families must meet certain criteria, including having income between 100 and 400 percent of the FPL, not qualifying for other health care coverage, such as Medicare or Medicaid, and not having access to affordable insurance of minimum value from an employer. The premium tax credit is refundable, meaning that it can provide benefits to lower-income tax filers with little or no tax liability. The portion of the tax credit that reduces an individual's tax liability is categorized as a reduction in federal revenues, and the portion of credits that exceeds an individual's tax liability is categorized as an increase in outlays. Individuals who enroll in an exchange plan may also be eligible for additional cost-sharing subsidies that further reduce the out-of-pocket amount they would otherwise have to pay when accessing covered health services. While these changes, along with other PPACA provisions, are expected to increase federal health care spending, PPACA also included a number of provisions that aim to reduce the level of federal health care spending. For example, PPACA reduced the payments that both Medicare and Medicaid make to hospitals that serve a disproportionate share of low-income patients. This change reflects the expectation that PPACA's major coverage expansions will result in significantly fewer uninsured hospital patients. Also, PPACA reduced estimated Medicare spending through changes to rates paid to Medicare Advantage organizations—Medicare's private plan alternative to the original Medicare fee-for-service program—to align Medicare Advantage payment rates more closely with spending on Medicare's fee-for-service program. In addition, PPACA created a number of cost containment mechanisms designed to slow future growth of health care spending, such as: Productivity adjustments. PPACA seeks to restrain health spending growth by reducing the payment updates for many Medicare services to account for productivity gains.
This is intended to provide a strong financial incentive for health providers to enhance productivity, improve efficiency, or otherwise reduce their costs per service. Independent Payment Advisory Board (IPAB). PPACA called for the creation of a 15-member board to make recommendations, with certain restrictions, for reducing the costs of Medicare when per capita Medicare growth exceeds specified targets beginning in 2015. These recommendations are automatically implemented unless overridden by lawmakers. PPACA also incorporated certain tax provisions designed to generate revenue. Beginning in January 2013, PPACA imposed an additional Medicare Hospital Insurance tax on wages, compensation, and self-employment income in excess of threshold amounts, defined as $200,000 for individuals, $250,000 for spouses filing jointly, and $125,000 for spouses filing separate returns. PPACA also imposes an excise tax on high-cost employer-sponsored health plans beginning in 2018. Employer-sponsored plans with a benefit value exceeding specified thresholds will generally be subject to a 40 percent excise tax. These threshold levels are generally $10,200 for individuals and $27,500 for families in 2018, adjusted in 2019 by growth in the Consumer Price Index plus 1 percentage point and by growth in the Consumer Price Index thereafter. The tax is levied on insurers but is expected eventually to be passed on to their customers. CBO, OACT, and other observers expect that the excise tax will create an incentive for employers to reduce the scope of their health benefits and, therefore, the demand for health care services. The Trustees, CBO, and OACT each issued reports reflecting the effects of PPACA in 2010. In these reports, the Trustees, CBO, and OACT all expressed concerns about whether certain cost-containment mechanisms included in PPACA can be sustained over the long term. CBO and OACT both produced alternative projections that assume certain cost-containment mechanisms are not fully maintained over the long term. We first incorporated the Trustees', CBO's, and OACT's projections of the effects of PPACA on federal health care spending in our Fall 2010 update of the long-term fiscal outlook. The effects of PPACA are primarily seen through changes to assumptions for the following health care programs: Medicare. Medicare spending in our Baseline Extended simulation follows CBO's baseline projections for the first 10 years, which follow current law and assume that reductions in Medicare physician payment rates unrelated to PPACA occur as scheduled; thereafter, Medicare spending is based on the Trustees' intermediate projections. Beginning in our Fall 2010 update, we assumed that certain cost-containment mechanisms enacted in PPACA intended to slow the growth of health care costs were sustained over the long term. In the Alternative simulation, Medicare spending is based on OACT's alternative scenario, which assumes that reductions in Medicare physician rates do not occur as scheduled under current law and, starting in our Fall 2010 update, that certain cost-containment mechanisms enacted in PPACA intended to slow the growth of health care costs begin to phase out after 2019. Our Fall 2010 simulations also reflect changes in the level of Medicare spending resulting from other provisions of PPACA, such as reductions to payment rates for Medicare Advantage organizations. Medicaid, Children's Health Insurance Program (CHIP), and exchange subsidies.
In both the January 2010 and Fall 2010 Baseline Extended simulations, spending for Medicaid and exchange subsidies follows CBO's baseline projections for the first 10 years and is then based on growth in spending for these programs consistent with CBO's long-term assumptions for the number and age composition of enrollees and the Medicare Trustees' intermediate assumptions for excess cost growth. Prior to Fall 2010, federal spending for CHIP was included in our simulations under other mandatory spending. Starting in Fall 2010, consistent with CBO, we include federal spending for CHIP along with subsidies for the newly created health insurance exchanges in a single category with Medicaid. Our Fall 2010 Alternative simulation assumes that provisions in current law designed to limit the growth in spending on exchange subsidies are not maintained over the long term. Several provisions of PPACA affected federal revenue, including an excise tax on high-cost employer-sponsored health plans and an increase in the Medicare Hospital Insurance tax for higher-income individuals and families. While the effects of these provisions are incorporated into our simulations in the first 10 years, we do not make assumptions about the composition of revenue over the long term in our simulations. The Baseline Extended simulation follows CBO's baseline projections, which generally reflect current law, for the first 10 years and then holds revenue constant as a share of GDP. As a result, over the long term, revenue as a share of GDP is higher in the Baseline Extended simulation than historical averages. In the Alternative simulation, expiring tax provisions are generally extended and the alternative minimum tax (AMT) amount is indexed to inflation. Revenue in the Alternative simulation is then held at the 40-year historical average. This assumption implies that, consistent with past experience, legislation will be enacted to offset some of the increases in revenue scheduled in current law. Both simulations follow CBO's projections for Social Security for the first 10 years and the Trustees' intermediate projections thereafter. See appendix I for more information on the assumptions used in the simulations and a description of technical changes that were made for this report. The effect of PPACA on the long-term fiscal outlook depends largely on whether elements designed to control cost growth are sustained. Overall, there was notable improvement in the longer-term outlook after the enactment of PPACA under our Fall 2010 Baseline Extended simulation, which, consistent with federal law at the time the simulation was run, assumed the full implementation and effectiveness of the cost-containment provisions over the entire 75-year simulation period. In contrast, the long-term outlook in the Fall 2010 Alternative simulation worsened slightly compared to our January 2010 simulation. This is largely due to the fact that cost-containment mechanisms specified in PPACA are assumed to phase out over time while the additional costs associated with expanding federal health care coverage remain. Figure 1 shows that while the steps taken in PPACA to restrain spending on the federal health programs were significant, they were not sufficient to prevent an unsustainable increase in debt held by the public even under the more optimistic assumptions in our Baseline Extended simulation.
The net effect of changes to spending and revenue on the federal budget was relatively small in the first few decades in both simulations, and the improvements in the Baseline Extended simulations from January 2010 to Fall 2010 do not significantly slow the growth in debt held by the public until the outyears. Debt as a share of GDP still reached the historical high of 109 percent by 2036 in the Fall 2010 Baseline Extended simulation—just 1 year later than it did in the January 2010 Baseline Extended simulation. There was no change in the date when debt held by the public reached the historic high from the January 2010 Alternative simulation to the Fall 2010 Alternative simulation. The effect of PPACA on the long-term fiscal outlook is seen largely through changes in federal spending on major federal health care programs. Figure 2 shows that federal spending on Medicaid, CHIP, and exchange subsidies increased in the Baseline Extended simulation, reflecting expanded eligibility and coverage. By 2035, spending on Medicaid, CHIP, and exchange subsidies in the Fall 2010 Baseline Extended simulation equaled 3.3 percent of GDP—0.7 percentage points higher than Medicaid spending in the January 2010 Baseline Extended simulation—and continued to grow thereafter. Spending on Medicare declined substantially in our Fall 2010 Baseline Extended simulation, reflecting the assumption of full implementation and effectiveness of the cost-containment mechanisms in PPACA. Spending on Medicare, for example, decreased 1.5 percentage points, from 6.2 percent of GDP in 2035 in the simulations run before PPACA was enacted to 4.7 percent in the simulations run immediately after enactment. The difference between our January 2010 and Fall 2010 simulations widens in subsequent decades as the effects of slower growth in Medicare spending compound over time. Given the large role of Medicare spending in the federal budget, slowing growth of spending on the program would reduce, though not eliminate, the pressure federal health care spending is expected to put on the rest of the federal budget in coming decades. The Trustees, CBO, and OACT have questioned whether the cost-containment mechanisms enacted in PPACA can be sustained over the long term, due in part to the challenges in sustaining increases in health care productivity. Prior to PPACA, payment updates for many Medicare services were based on the prices of goods and services, such as medical equipment and labor, needed to serve patients. PPACA required that these payment updates be reduced by a productivity adjustment, defined as a 10-year average of changes in annual economy-wide private productivity. This is expected to provide a strong financial incentive for health providers to enhance productivity, improve efficiency, or otherwise reduce their costs per service. The lower payment rate updates to most categories of Medicare providers specified under PPACA have only begun to be implemented. It remains unclear what actions providers will take to improve their productivity and reduce unnecessary expenditures in response to these lower payment rate updates. According to OACT, however, health care productivity gains have historically been small due to such factors as the labor-intensive nature of the industry and the individual customization of treatments in many cases. Consequently, OACT said this makes it unlikely that actual health provider productivity will equal that of the economy as a whole over sustained periods.
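The mechanics of the productivity adjustment just described can be sketched in a few lines. The sketch below is illustrative only: the input payment update and the productivity series are hypothetical values, not actual Medicare data, and the calculation simply applies the 10-year-average rule stated above.

```python
# Hedged sketch of the productivity adjustment described above: the payment
# update is reduced by a 10-year average of changes in annual economy-wide
# private productivity. The update and the productivity series below are
# hypothetical, for illustration only.

def adjusted_update(payment_update, productivity_history):
    """Adjusted update = payment update - 10-year average productivity growth."""
    window = productivity_history[-10:]  # most recent 10 years
    return payment_update - sum(window) / len(window)

# Illustrative inputs: a 3.0 percent update and productivity averaging ~1 percent.
productivity = [0.012, 0.008, 0.011, 0.009, 0.010,
                0.013, 0.007, 0.010, 0.011, 0.009]
print(f"Adjusted payment update: {adjusted_update(0.030, productivity):.2%}")
```

With these assumed inputs, a 3.0 percent update net of roughly 1 percent average productivity growth yields a 2.0 percent adjusted update, which is the sense in which the adjustment restrains per-service payment growth.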
PPACA created a number of research and development initiatives—such as bundling Medicare payments for services that patients receive across a single episode of care and establishing the Medicare Shared Savings Program, through which accountable care organizations can better manage and coordinate care across different settings—that have the potential to transform the health care payment and delivery system in ways that reduce federal health care spending consistent with the productivity adjustments. However, these initiatives are only just beginning to be tested. Accordingly, it is too early to know which will result in lasting changes and what effect they will have on future federal spending. The role of IPAB in controlling cost growth is assumed to be limited under current law projections given that the productivity adjustments and other provisions contained in PPACA are estimated by the Trustees to keep Medicare spending below the targeted growth rate in all but 1 year. Absent the full and effective implementation of productivity adjustments, IPAB's task would be more daunting. It is not possible to predict at this time, however, what changes IPAB will propose to keep Medicare spending within the specified target and what the disposition of the recommendations will be. In our Fall 2010 Alternative simulation, based on CMS OACT's alternative scenario, physician payment rates grew with inflation (using the Medicare Economic Index), as opposed to the 0 percent physician fee schedule update assumed in January 2010, which resulted in higher spending. This offset some of the reductions in spending resulting from the cost-containment mechanisms enacted in PPACA. In the Fall 2010 Alternative simulation, spending on Medicaid, CHIP, and exchange subsidies equaled 3.5 percent of GDP in 2035—or roughly 0.9 percentage points higher than in the January 2010 Alternative simulation. As a result, total federal health care spending was higher in the Fall 2010 Alternative simulation than in the January 2010 simulation. Our simulations provided two scenarios based on broad sets of assumptions about health care spending and other components of federal spending and revenue. Long-term projections, however, are inherently uncertain, and future health care costs in particular are difficult to estimate. This uncertainty, which predates the enactment of PPACA, increases the further the model looks out into the future. While some of this uncertainty is related to the implementation and effectiveness of provisions of PPACA, there is also broader uncertainty about the future underlying rate of health care cost growth before cost-containment mechanisms are applied. The projected rate of growth largely depends on the assumptions used. To examine these assumptions, we divided spending growth into two types of drivers: (1) enrollment in the major federal health care programs and (2) growth in health care spending per capita. While both have contributed to the growth in federal health care spending over the past several decades, their relative roles in explaining rising future federal health care spending differ over time. Spending on both Medicare and Medicaid has increased in the past several decades due in part to a steady increase in the number of enrollees. In calendar year 1970, approximately 9 percent of the U.S. population was enrolled in Medicare. As the U.S. population has aged and more people have enrolled in the program, this increased to approximately 15 percent in calendar year 2011.
Medicaid enrollment, while more volatile than Medicare enrollment, has also generally increased as states have decided to expand eligibility and economic recessions have increased the number of people eligible. For example, in fiscal year 1970, approximately 7 percent of the U.S. population was enrolled in Medicaid. This increased to approximately 17 percent in fiscal year 2010 (the most recent year for which historical data are available). However, there have been periods when enrollment did not grow. For example, in the 1990s, strong economic growth and the move from Aid to Families With Dependent Children to the Temporary Assistance for Needy Families block grant, which was designed to help needy families reduce their dependence on federal assistance, helped keep enrollment steady at approximately 12 percent of the population between fiscal years 1995 and 2000. Enrollment in the major federal health care programs is expected to continue to increase in the near term due both to the aging of the U.S. population and to expanded eligibility. Consequently, increasing enrollment is expected to be the most important driver of federal health care spending over the next couple of decades. Future enrollment trends for Medicare, particularly in the near term, are reasonably clear. The Trustees expect a large increase in enrollment in Medicare between 2010 and 2030 as members of the baby boom generation reach age 65 and become eligible to receive benefits. As figure 6 shows, the number of baby boomers turning 65 is projected to grow in coming years from an average of about 7,600 per day in 2011 to more than 11,000 per day in 2029. Future enrollment patterns for Medicaid and the exchange subsidies are less clear due both to the uncertainty about future policy changes and to other factors, such as income growth, that affect individuals' eligibility. Medicaid. In its March 2012 projections, which assumed states will expand Medicaid coverage to all eligible individuals as provided in PPACA, CBO estimated that enrollment in Medicaid would increase from roughly 54 million people in fiscal year 2011 (or roughly 17 percent of the population) to 75 million by fiscal year 2022 (or roughly 22 percent of the population). This includes roughly 17 million nonelderly people projected to be enrolled in the program in 2022 as a result of expanded coverage provided by PPACA. The people who will be newly eligible for Medicaid under PPACA consist primarily of nonelderly adults with low income, along with a smaller number of children from low-income households. According to OACT, both groups are expected to be less costly to cover on a per enrollee basis than current enrollees. In March 2012, CBO estimated that expanding Medicaid coverage and CHIP coverage as provided for in PPACA would increase federal spending by $136 billion in 2022. CBO has since updated its estimates to reflect the June 2012 U.S. Supreme Court decision on PPACA. PPACA, as enacted, required states to extend Medicaid to most nonpregnant, nonelderly individuals up to 133 percent of the FPL and provided states with an enhanced federal match for this newly eligible population. States that fail to cover mandatory Medicaid populations are at risk of losing the federal match for their entire Medicaid program.
The Supreme Court subsequently ruled that states that choose not to expand Medicaid eligibility to these newly eligible individuals will only be subject to a penalty of forgoing the enhanced federal matching funds associated with covering this population rather than forgoing federal matching funds for their entire program. States therefore have the option of deciding whether to expand Medicaid coverage to newly eligible populations as provided by PPACA. CBO notes that what states will decide to do regarding the Medicaid expansion under PPACA is highly uncertain. States face both financial incentives and disincentives to participate in the Medicaid expansion. On the one hand, the federal government will cover a large share of the costs of the expansion. On the other hand, states would ultimately have to bear some costs during a period when their budgets are already under pressure, in part from the rising costs of the existing Medicaid program. Exchange subsidies. In projections prepared prior to the Supreme Court ruling, CBO estimated the exchanges would subsidize health insurance coverage for 22 million nonelderly people by fiscal year 2022 and increase federal spending by $127 billion in that year. Following the Supreme Court ruling, CBO revised this estimate, anticipating that a portion of the people will not be eligible for Medicaid as a result of states choosing not to expand their Medicaid programs and will instead be eligible for federal subsidies for coverage offered through the exchanges. As a result, CBO increased its estimates of the cost of exchange subsidies. However, as noted earlier, it remains uncertain how the states will respond to the Supreme Court's ruling. Further, CBO notes some people will find the exchange subsidies less attractive than Medicaid because of the higher out-of-pocket costs they will face in the exchanges. There is also uncertainty about the extent to which private employers might choose to drop health insurance coverage and shift workers to the exchanges. Spending on major federal health programs is affected not just by the number of enrollees but also by the age composition and health status of the enrollees. Elderly individuals, for example, typically have higher health care costs than younger individuals, and very elderly individuals, those 85 or older, typically have the highest costs. For Medicare enrollees 85 or older, spending in 2008 was more than $13,000 per enrollee, compared to about $7,600 for enrollees ages 65 to 74. Similarly, Medicaid's spending varies considerably among different types of enrollees. Children and adults under the age of 65 account for almost 75 percent of Medicaid's enrollees but have much lower per capita costs than the aged (those 65 or older) or disabled. For example, in fiscal year 2010, Medicaid spent approximately $3,000 per child and $4,000 per adult under age 65, compared to approximately $15,000 for each aged beneficiary and $17,000 for each disabled beneficiary. Medicaid already has a large role in funding long-term care, such as nursing homes, for aged persons. The increase in the number of people 85 or older in the next 10 years is expected to have a major effect on long-term care spending for Medicaid. As such, a key driver of federal spending for both Medicare and Medicaid is the aging of the population. Enrollment from this population did not change as a result of PPACA.
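The composition effect just described can be made concrete with a short calculation. The sketch below computes average Medicare spending per enrollee under two hypothetical age mixes; the youngest and oldest per-enrollee figures come from the 2008 numbers cited above, while the middle bracket's cost and all of the mix shares are assumptions chosen purely for illustration.

```python
# Stylized sketch: how a shift in enrollee mix toward older beneficiaries
# raises average spending per enrollee. The 65-74 and 85+ figures follow the
# 2008 numbers cited in the text; the 75-84 figure and both mixes are
# hypothetical.

costs = {"65-74": 7_600, "75-84": 10_000, "85+": 13_000}  # dollars per enrollee

def avg_cost(mix):
    """Weighted average spending per enrollee for a given enrollment mix."""
    return sum(costs[group] * share for group, share in mix.items())

current_mix = {"65-74": 0.55, "75-84": 0.30, "85+": 0.15}
older_mix = {"65-74": 0.45, "75-84": 0.33, "85+": 0.22}
print(f"Current mix: ${avg_cost(current_mix):,.0f} per enrollee")
print(f"Older mix:   ${avg_cost(older_mix):,.0f} per enrollee")
```

Even with total enrollment held fixed, shifting the assumed mix toward the 85-and-older group raises average spending per enrollee, which is the sense in which aging drives program spending independent of enrollment counts.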
The share of the federal budget devoted to Medicare and Medicaid has increased over the past several decades due not only to increases in enrollment but also to increases in health care spending per enrollee. The extent to which the annual growth rate of health care spending per capita exceeds the annual growth rate of potential GDP per capita, adjusted for demographic characteristics, is referred to as excess cost growth. Over the last 35 years, excess cost growth averaged around 2 percent but has fluctuated during this time period. Excess cost growth slowed for Medicare, for example, after the introduction of a prospective payment system in fiscal year 1984, in which Medicare pays a predetermined rate for each hospital admission—rather than simply reimbursing providers for costs, which provides little incentive for efficiency. Excess cost growth also slowed in the 1990s as enrollment increased in managed care plans. However, it is not clear to what extent these slowdowns represent one-time downward shifts in health care costs or more permanent changes in the underlying growth rate. Overall excess cost growth in the United States is thought to have returned closer to the historical average in the 2000s. Excess cost growth leads to an ever-growing share of the nation's income being spent on health care, crowding out spending on all other goods and services. Going forward, CBO and the Trustees both assume that excess cost growth will decrease over time because of the financial pressure health care spending is putting on the federal government, states, businesses, and households. How and when this transition takes place, however, is highly uncertain. Figure 7 shows that varying the excess cost growth assumption in our simulations dramatically alters the share of national income needed to fund federal health care spending. Under the standard set of assumptions for health care spending in the Baseline Extended simulation, excess cost growth averages 0.2 percentage points for Medicare and 0.8 percentage points for Medicaid, CHIP, and exchange subsidies over the long term. Under these assumptions, spending on these programs would rise from less than 5 percent of GDP in 2012 to more than 9 percent in 2050. If excess cost growth averaged 2 percent per year after 2022—the average rate between 1975 and 2010—federal health spending in our Baseline Extended simulation would rise quickly and would account for more than 13 percent of the entire U.S. economy by 2050. Even with lower assumptions about excess cost growth, a growing share of national income would be needed to fund federal health programs. Under the 0-percent excess cost growth scenario, spending on Medicare, Medicaid, CHIP, and exchange subsidies would continue to grow as a share of GDP due to the aging of the population and other enrollment and demographic trends described earlier. In 2050, spending on the major federal health care programs would be 8 percent of GDP and would gradually increase thereafter. At the end of the 75-year simulation period, spending on Medicare, Medicaid, CHIP, and exchange subsidies in the 0-percent excess cost growth scenario would be higher than at the beginning of the scenario in fiscal year 2022, but still below the levels shown in our standard Baseline Extended and Alternative simulations. Figure 8 shows that slowing the rate of excess cost growth could slow the buildup of debt held by the public considerably and help put the budget on a more sustainable path.
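The compounding behind figure 7 can be approximated with a few lines of arithmetic: if per capita health spending grows faster than per capita GDP by the excess cost growth rate, the spending-to-GDP ratio grows by roughly that rate each year. The sketch below is a stylized approximation, not GAO's simulation model; the starting share, the demographic growth factor, and the 38-year horizon to 2050 are illustrative assumptions.

```python
# Stylized sketch of how excess cost growth (ECG) compounds federal health
# spending as a share of GDP. Not GAO's model: the demographic factor and
# starting share below are illustrative assumptions.

def project_health_share(start_share, ecg, years, demo_growth=0.01):
    """Per capita spending grows at (GDP per capita growth + ecg), so the
    spending-to-GDP ratio grows by roughly (1 + ecg) per year, scaled here
    by a stylized demographic/enrollment factor (demo_growth)."""
    share = start_share
    for _ in range(years):
        share *= (1 + ecg) * (1 + demo_growth)
    return share

start = 0.05  # roughly 5 percent of GDP in 2012, per the report
for ecg in (0.00, 0.01, 0.02):
    share_2050 = project_health_share(start, ecg, years=38)
    print(f"ECG of {ecg:.0%}: about {share_2050:.1%} of GDP by 2050")
```

Under these toy assumptions, the 2 percent ECG case lands in the same neighborhood as the more-than-13-percent-of-GDP figure cited above, while the 0 percent case shows the assumed demographic factor alone still pushing the share upward.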
Assuming revenue and nonhealth spending follow the assumptions in the Baseline Extended simulation and excess cost growth for health care averages 2 percentage points each year, debt held by the public would be more than 170 percent of GDP in 2050. Assuming 0 percent excess cost growth after 2022—an outcome that has not been sustained for any extended length of time over the past several decades—debt held by the public would be roughly 91 percent of GDP in 2050 in the Baseline Extended simulation. Debt held by the public would continue to slowly increase thereafter, largely because of the interest costs of financing the federal government's accumulated debt and increasing enrollment in federal health programs. Figure 8 also shows that slowing health care cost growth is insufficient to close the imbalance between spending and revenue in the Alternative simulation in the next few decades. In this simulation, revenue and spending follow historic trends and past policy preferences. Even assuming 0 percent excess cost growth after 2022, debt held by the public rises steeply in the Alternative simulation, reaching more than 100 percent of GDP (or the size of the total economy) by 2025 and continuing to grow at a rapid rate thereafter. This demonstrates that significant policy changes beyond those designed to control health care cost growth would need to be made in the near term to put federal debt on a more sustainable path. Simulations based on broad assumptions about future excess cost growth such as these are helpful for illustrating how different rates of growth in spending per capita would affect future federal spending on health care. However, the simulations do not provide insight into the underlying factors driving growth in health care cost per capita. The major federal health care programs are highly integrated with the rest of the health care system and influenced not only by policies and laws, but also by future demographic and economic trends; the development and deployment of medical technology; the cost and availability of insurance; and the responses of health care providers, consumers, and policymakers to these trends. As policymakers consider how to put the federal government on a more sustainable path, it will be important to understand what the specific factors driving cost growth are, how they are interrelated, and how changes in these factors could affect federal health care spending. A growing U.S. population directly increases overall health care spending; however, the causes of rising health care cost per capita are more difficult to identify. Per capita health care spending grew at an average of 4.9 percent per year between 1965 and 2005, while per capita GDP grew at an average of 2.1 percent per year. There is general agreement among researchers about the factors that drive health care cost growth and their relative influence, although each factor affects health care costs through a unique mechanism and therefore has a different relative influence on cost growth (see fig. 9). Technological change (36 to 65 percent): Technological change affecting health care cost growth may take many forms. CBO defines technological advances as changes in clinical practice that enhance the ability of providers to diagnose, treat, or prevent health problems. Examples of technological advances include new drugs, devices, procedures, and therapies, as well as new applications of existing technologies.
While not all new technologies increase health care costs, technological change as a whole has been the dominant cause of increases in health care spending. The effect of technological change on health care costs may depend, in part, on the type of treatment to which the new technology is applied. Cutler describes the following classes of treatment and their relative costs: Nontreatment applies to diseases that cannot be treated, such as end-stage cancers, and thus involves a relatively low cost of medical care. Disease management refers to halfway technologies that can improve quality of life when cure or prevention is not possible; disease management, such as dialysis for end-stage renal disease, is often very expensive. Prevention and cures for disease may have low marginal costs when they are available; however, preventative therapies are often provided to an entire population, and to the extent that new cures are more effective and cheaper than older treatments, demand for new cures may increase significantly. Thus, even when the unit price of new preventative therapies and cures is low, the large quantities provided may increase overall spending for these treatments. In general, a technological change that enables providers to treat a previously untreatable disease will increase health care spending, while expanding disease management or shifting disease management to prevention or cure can lead to either increased or decreased health care spending. However, the introduction of new treatments and technologies may increase health care spending because complications may arise from a new treatment, or because patients who survive one disease may eventually be diagnosed with another, incurring additional treatment costs. A complete assessment of health care spending for new technologies should also consider the value those technologies produce, often measured by improved health functioning, increased life expectancy, or improved economic productivity. For example, Cutler and McClellan found that increases in health care costs due to technological changes in the treatments for heart attacks, low-birthweight infants, depression, and cataracts were more than offset by the increased life expectancy and improved productivity made possible by improved health. They also concluded that the value of increased longevity per person between 1950 and 1990 was larger than the increase in per capita health care spending over the same period. Chandra and Skinner assess technological change by categorizing innovations based on their health care productivity, that is, the improvement in health outcomes, such as longevity or health functioning, per dollar increase in cost. The first category includes highly productive treatments, which may be inexpensive, such as aspirin and beta-blockers, or expensive, such as anti-retroviral drugs for treating people with HIV/AIDS. The second category includes treatments with substantial benefits for some patients but diminished benefits for others. For example, heart attack patients treated within 12 hours of a heart attack receive large benefits from angioplasty and placement of a stent; however, the benefits for patients with stable angina (chest pain or discomfort) are less clear. The final category includes treatments with little benefit or scientific evidence. Treatments in this category are more likely to focus on chronic conditions.
Chandra and Skinner find that much of the improvement in health is generated by treatments in the first category, while much of the cost is generated by treatments in the third category. They therefore conclude that health insurance interacts with technological change to drive health care cost growth, because insurance provides access to new technologies for patients who may experience little health benefit from them. Increases in income (5 to 36 percent): As personal income increases, people demand more and better goods and services, including health care. Holding other factors constant, higher personal income increases the quantity and quality of care demanded, and overall health care spending rises as well. GDP is a good indicator of the effect of increasing income on health care spending. When GDP is growing, many Americans experience increases in income and will demand more health care services. When the rate of GDP growth declines, as during the recent recession, growth in health care spending may slow; however, the slowdown may be smaller than the drop in GDP growth would suggest, because rising income also spurs the production of new technologies, whose effects on spending persist. The income elasticity of demand for health care services, that is, the magnitude of the association between income and demand for health care services, may also vary across households and over time. While there are a variety of assessments of the effect of income on health care expenditures, historical data from the United States suggest that for a 10 percent increase in income, health expenditures rise between 2 and 4 percent. Incorporating data from other countries to estimate the relationship at the national level, and including the effect of other factors affecting health spending that are correlated with real per capita GDP, raises the estimated increase in health care expenditures to about 14 percent.
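The income estimates just cited can be restated as an income elasticity of demand for health care, the percentage change in health expenditures per percentage change in income. The arithmetic below simply restates the figures above; it is not an additional estimate:

\[
\varepsilon = \frac{\%\Delta E}{\%\Delta Y}, \qquad
\varepsilon_{\text{U.S. historical}} \approx \frac{2 \text{ to } 4\%}{10\%} = 0.2 \text{ to } 0.4, \qquad
\varepsilon_{\text{cross-country}} \approx \frac{14\%}{10\%} = 1.4,
\]

where \(E\) denotes health expenditures and \(Y\) denotes income. An elasticity below 1 implies that health care's share of income falls as income rises; the cross-country estimate above 1 implies the opposite, consistent with health care absorbing a growing share of national income.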
Health insurance expansions (10 to 13 percent): The expansion of health insurance increases health care cost per capita because people demand more health care when they are better insured. Health insurance has expanded in two ways: (1) by covering an increasing share of the population and (2) by covering each person more completely. Both of these pathways decrease the out-of-pocket expenses that beneficiaries pay through deductibles and cost-sharing, which have declined as a share of overall health care spending. These two pathways help explain how health care costs may be affected when considering different types of health insurance. A recent study found that having Medicaid insurance in Oregon increased the likelihood of any hospitalization by 30 percent compared to having no insurance. Older research from the RAND health insurance experiment suggests that total per capita expenditures increased by about 30 percent for beneficiaries receiving free care compared to those in a plan similar to current high-deductible insurance plans. Furthermore, the relative comprehensiveness of coverage and out-of-pocket expenses differs by insurer category, such as a private insurer, Medicare, or Medicaid. Therefore, health care spending may increase when people switch to a more comprehensive type of health insurance coverage, such as switching from Medicaid to private health insurance. Health care price inflation (10 to 19 percent): Health care price inflation contributes to health care cost growth; however, its precise impact on overall health care cost growth is not known. Unlike prices in many other markets, prices in the health care market are difficult for consumers to discern and are therefore infrequently used to determine which provider to see or which service to undergo when options are available. While there may not be a strong or direct influence from competition on price inflation, there are indirect mechanisms in both the public and private health insurance markets. In the private health insurance market, some consumers comparison shop for health insurance plans (or employers comparison shop on their behalf), and insurance plans use contracts with providers to restrict the prices charged for services provided. The extent to which this mechanism limits cost growth varies by insurance plan type and the incentives each plan type imposes to limit health care costs. For example, fee-for-service plans generally pay providers a set amount to provide a specific service and therefore provide little or no incentive to limit costs; however, plans offered by health maintenance organizations may limit health care costs by encouraging the efficient provision of health care services through mechanisms such as capitated payments and utilization review, as well as through contracts with low-cost providers or those offering discounted rates. In the market for public health insurance, federal and state governments have used various strategies to restrain inflation over time, including state certificate of need requirements and prospective payment systems for Medicare services.
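A toy comparison can make the incentive difference between fee-for-service and capitated payment described above concrete. All dollar figures below are invented for illustration; they are not actual reimbursement rates.

# Toy comparison of provider revenue under fee-for-service vs. capitation.
# Dollar figures are invented for illustration; they are not actual rates.

FFS_RATE_PER_SERVICE = 120.0         # assumed payment per service delivered
CAPITATED_RATE_PER_ENROLLEE = 900.0  # assumed fixed annual payment per enrollee

def ffs_revenue(enrollees, services_per_enrollee):
    # Revenue rises with every additional service, so volume is rewarded.
    return enrollees * services_per_enrollee * FFS_RATE_PER_SERVICE

def capitated_revenue(enrollees, services_per_enrollee):
    # Revenue is fixed per enrollee; additional services add cost, not revenue.
    return enrollees * CAPITATED_RATE_PER_ENROLLEE

for volume in (5, 8):
    print(f"{volume} services per enrollee: "
          f"FFS ${ffs_revenue(1000, volume):,.0f} vs. "
          f"capitated ${capitated_revenue(1000, volume):,.0f}")

Raising volume from 5 to 8 services per enrollee raises fee-for-service revenue by 60 percent while capitated revenue is unchanged, which is why capitation, combined with utilization review, can restrain cost growth.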
Increases in administrative expenses (7 to 13 percent): The cost of administering health care arises from several sources and has proven difficult to isolate. Despite the difficulty in estimating administrative expenses, economists generally agree that they contribute to health care cost growth. Increases in administrative expenses may be due to the increasingly complex and changing structure of the insurer and provider relationship. As a result, the increased effort and new technology needed to deal with coding and filing claims, billing, and maintaining medical records may have increased administrative expenses. Aging (2 to 7 percent): The relative aging of the U.S. population contributes to increasing health care costs. As the share of the population that is older increases, average health care costs per capita rise because of the additional medical care older Americans generally require. While an aging U.S. population has increased health care costs overall, the contribution of aging has been relatively small. Changes in the amount of defensive medicine and supplier-induced demand: Because the clinical value of a medical service may vary by patient, and because categorizing a service as defensive or supplier-induced depends on intent, these services are difficult to identify. Defensive medicine and supplier-induced demand were either not included or found to be zero in the macro-studies determining the relative influence of various factors on health care cost growth presented in figure 9; however, several studies on specific procedures show that they do contribute to increasing health care costs. CBO reported that enacting certain tort reform proposals designed to limit defensive medicine would have reduced national health care spending by $11 billion in 2009, or 0.5 percent of health care expenditures, through decreased medical liability insurance premiums and lower utilization of health care services. While several studies show evidence of supplier-induced demand for particular services, including imaging services and procedures provided at physician-owned specialty hospitals, no study characterizes the overall impact of supplier-induced demand on health care cost growth. Many of the factors listed previously may not affect health care cost growth independently; instead, they have combined effects through interactions in the health care market. For example, researchers believe that the influence of technological change on health care spending has been facilitated by historically high levels of fee-for-service insurance, which incorporates less utilization review than managed care, and by periods of increasing per capita income, which is associated with increased demand for new technologies. Although there is some consensus among researchers about which factors drive health care cost growth, there is considerable uncertainty about the magnitude of each factor's impact on future health care cost growth. Population growth is relatively predictable and, barring a pandemic or similar catastrophic event, is not likely to contribute much uncertainty to health care cost projections. More uncertainty is likely to be associated with factors influencing health care spending per capita, particularly technological change, given its varied pathways of influence on health care cost growth. The following is our analysis of the relative uncertainty associated with factors influencing health care spending per capita. Technological change: While analysis of the number and types of medical technologies expected to be introduced in the next few years may yield some information about the range of technology's possible impact on health care spending per capita, the large number of sources of technological change makes this cost driver the most uncertain for estimating future health care costs. Much of this uncertainty is due to the unknown costs and effectiveness of changes in clinical practice, such as the introduction of new pharmaceutical drugs, medical devices, diagnostic tests, and procedures to treat disease, while the development and incorporation of nonclinical technologies, such as health information technology, also contribute to the uncertainty of future health care costs. Moreover, the development of new medical technology is influenced by future health insurance expansions and increases in income, further reducing the predictability of the impact of technological change on future health care costs. Increases in income: Based on expectations of future GDP growth and changes in the distribution of income among Americans, the influence of increases in income on health care cost growth is somewhat uncertain in the near future and likely to become more uncertain over the long run. Because increasing personal income generally increases demand for health care services, the extent to which future increases in personal income will affect health care cost growth can be approximated using expectations of future growth in aggregate income, as measured by GDP, and changes in how that income is distributed.
If a given increase in GDP is associated with an increase in income for a larger proportion of Americans, then the increase in GDP will generate a larger increase in health care cost growth. Together, the expected volatility in future GDP growth and possible changes in the distribution of income among Americans lead us to believe that there is some uncertainty surrounding the size of the impact that increases in income may have on future health care cost growth, and that the uncertainty is larger for the more distant future. Health insurance expansions: The expansion of health insurance may have some associated uncertainty for future health care costs per capita. This uncertainty is likely to have a lower bound of maintaining current insurance levels and an upper bound based on increasing both the number of insured Americans and the depth of coverage for each person. While future scenarios with decreasing insurance levels are possible, current policy debate is focused on increasing insurance levels. Possible increases in health insurance include the expansion of private insurance and Medicaid to cover the approximately 49 million Americans uninsured as of 2011, which would remove barriers to health care and increase health care spending. Health care price inflation: Changes in the structure of payment models and health insurance plan types will help limit health care price inflation, likely resulting in relatively little uncertainty in the amount of future health care cost growth caused by health care price inflation. Health care prices have been increasing steadily between 3 and 5 percent annually in recent decades. While some factors, such as consolidation of providers and integration of provider types, can reinforce price inflation, insurance plan types and insurance models have also been evolving to incorporate new methods of restraining health care cost growth. The trend of shifting toward capitated payment, rather than fee-for-service, may continue as new capitated payment models are introduced and more beneficiaries switch to managed care health plans. In addition, the organization of providers into accountable care organizations may also restrain health care cost growth through payment models designed to promote quality and care coordination.
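For context on the 3 to 5 percent range just cited, steady compounding at those rates doubles the price level in roughly 14 to 24 years, by the standard rule-of-72 approximation (applied here to the range in the text, not taken from our simulations):

\[
t_{\text{double}} \approx \frac{72}{100\,r}, \qquad \frac{72}{3} = 24 \text{ years}, \qquad \frac{72}{5} \approx 14 \text{ years}.
\]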
Increases in administrative expenses: While it is difficult to assess the uncertainty associated with administrative expenses, it is clear that administrative expenses represent a relatively small portion of overall health care spending and thus are likely to impose a relatively small amount of uncertainty on health care cost projections. The largest near-term change in administrative expenses is likely to come from increased use of electronic medical records, which may have large initial implementation costs but may also decrease administrative expenses over the long term. Aging: Similar to population growth, the aging of the U.S. population is relatively predictable; therefore, aging is not likely to produce much uncertainty for long-term health care projections. Changes in the age profile of the United States may affect health costs per capita through relatively unlikely events, such as a pandemic or other catastrophic event, or a major technological breakthrough that significantly affects life expectancy. Changes in the amount of defensive medicine: Because it is difficult to separate health care services provided as defensive medicine from those that would have been provided otherwise, it is also difficult to change the influence of defensive medicine on health care spending. There are legislative proposals to limit the ability of individuals to bring medical malpractice actions, which are designed to limit defensive medicine spending; however, the uncertainty these factors contribute to health care spending projections is likely to be small. Changes in the amount of supplier-induced demand: It is also difficult to identify services provided due to supplier-induced demand, and therefore difficult to estimate the uncertainty associated with supplier-induced demand. However, some changes in supplier-induced demand may affect future health care cost growth. For example, PPACA's limits on the expansion of physician-owned hospitals and the increased use of prospective and bundled payments may improve productivity and limit supplier-induced demand, because the payment structure gives providers an incentive to find ways to treat patients more efficiently. Increases in direct-to-consumer advertising and consumer information: The uncertainty associated with increases in direct-to-consumer advertising and consumer information is difficult to gauge because the phenomenon is relatively new, and its impact on future health care cost growth depends on many factors, including advertising regulations and physician attitudes toward patient requests. Comparing the results of our simulations before and after the enactment of PPACA helps to illustrate the important role that efforts to slow the growth in health care spending have in improving the long-term fiscal outlook. These efforts will require a sustained commitment and an understanding of the key factors affecting health care costs and how they interrelate. Reducing health care cost growth alone, however, is not sufficient to put the federal budget on a sustainable path. Even assuming health care cost growth can be constrained for an extended period, our simulations show debt held by the public rising as a share of GDP over time, particularly assuming historical trends and policy preferences for revenue and other spending continue. Therefore, more needs to be done to change the fiscal path. This will likely require difficult decisions about both federal spending and revenue. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 19 days from the report date. At that time, we will send copies of this report to interested congressional committees and other interested parties. We will also make copies available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan J. Irving at (202) 512-6806 or [email protected], or James C. Cosgrove at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We run two simulations showing federal deficits and debt under different sets of policy assumptions.
Our Baseline Extended simulation illustrates the long-term outlook assuming current law is generally continued, while the Alternative simulation illustrates the long-term outlook assuming historical trends and policy preferences are continued. In the Baseline Extended simulation, we closely follow the Congressional Budget Office's (CBO) 10-year baseline budget projections, which incorporate the assumption that current law remains in effect. Revenue and spending other than interest on the debt and large mandatory programs such as Social Security and Medicare are then held constant as a share of gross domestic product (GDP). Under current law, revenue as a share of GDP would increase over time because of several factors, including the expiration of tax provisions; “real bracket creep,” wherein the growth of real income causes a greater proportion of taxpayers' income to be taxed in higher brackets and be subject to the alternative minimum tax (AMT); and increased retirement income subject to taxation upon withdrawal (i.e., deferred taxes). However, history suggests that legislation will be enacted to offset such increases in revenue. In the Alternative simulation, expiring tax provisions are extended and the AMT exemption amount is indexed to inflation in the near term. Discretionary spending in the Alternative simulation grows with the economy in the first 10 years unless specific limits are specified in law. Over the long term, discretionary spending and revenue are held constant at or near their 40-year historical average share of GDP. Long-term spending on Social Security and Medicare in the Baseline Extended simulation is based on the intermediate projections of the Social Security and Medicare Trustees (Trustees), which follow current law. Spending on Medicare in the Alternative simulation for all years is based on the illustrative alternative projections of the Centers for Medicare & Medicaid Services' Office of the Actuary (OACT), which deviate from current law. In these projections, for example, Medicare physician payment rate updates are adjusted to reflect the fact that, in most years, Congress has acted to override reductions that would occur under current law. In both simulations, we assume that Social Security and Medicare benefits will continue to be paid even after the Federal Old-Age and Survivors Insurance and Federal Disability Insurance trust funds and the Federal Hospital Insurance and Federal Supplementary Medical Insurance trust funds are exhausted. Outlays for Medicaid, the Children's Health Insurance Program (CHIP), and federal exchange subsidies are based on CBO's most recent 10-year baseline in both of the simulations. Thereafter, growth in spending in our Baseline Extended simulations is consistent with CBO's most recent long-term assumptions for the number and age composition of enrollees and the Medicare Trustees' intermediate assumptions for excess cost growth. The excess cost growth assumption in our January 2010 Alternative simulation is also consistent with the Trustees' intermediate assumptions. In our Fall 2010 Alternative simulation, excess cost growth is consistent with OACT's alternative scenario.
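The structure of such simulations can be sketched with the standard debt-accumulation identity, under which debt held by the public as a share of GDP is carried forward at the ratio of the interest rate to GDP growth and augmented by the primary deficit. The sketch below is a minimal illustration of that structure; the parameter values are assumptions chosen for contrast, not the inputs used in our simulations, which additionally let health spending grow with excess cost growth rather than holding the primary deficit fixed.

# Minimal sketch of a long-term debt simulation using the standard
# debt-accumulation identity. All parameter values are illustrative
# assumptions, not GAO's published inputs.

def simulate_debt(debt0, years, r, g, revenue, noninterest_spending):
    """Project debt held by the public as a share of GDP."""
    debt = debt0
    for _ in range(years):
        primary_deficit = noninterest_spending - revenue
        # Interest compounds the existing debt faster than GDP grows when r > g.
        debt = debt * (1 + r) / (1 + g) + primary_deficit
    return debt

# Two assumption sets loosely mirroring the structure (not the values) of
# the Baseline Extended and Alternative simulations described above:
scenarios = {
    "Baseline Extended-like": dict(revenue=0.200, noninterest_spending=0.205),
    "Alternative-like":       dict(revenue=0.180, noninterest_spending=0.230),
}

for name, params in scenarios.items():
    d = simulate_debt(debt0=0.70, years=38, r=0.05, g=0.045, **params)
    print(f"{name}: debt of roughly {d:.0%} of GDP after 38 years")

Under these invented parameters, the first scenario drifts upward slowly while the second climbs steeply, mirroring the qualitative contrast between the two simulations.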
We regularly update these simulations as new data from CBO, the Trustees, and OACT become available. In recent years, we have updated our simulations twice a year: in the spring and in the fall. With each update, we also revisit the assumptions used in our model and update them to reflect legislative or technical changes, as needed. For example, after the enactment of the Patient Protection and Affordable Care Act (PPACA), consistent with CBO, we included federal spending for CHIP and federal exchange subsidies in the same category with Medicaid. To facilitate comparisons between different sets of simulations over time, we made technical changes to our January 2010 and Fall 2010 simulations. The key changes are described below.
1. GDP in our January 2010 and Fall 2010 simulations was originally determined by growth in the labor force, capital stock, and total factor productivity after the first 10 years, and projections of Social Security spending were adjusted accordingly. Beginning with the Fall 2011 update, our GDP growth assumption was changed to match the Trustees' intermediate assumptions over the long term. This GDP growth assumption is more consistent with the growth in labor force, wages, and other factors underlying the Trustees' Social Security and Medicare projections used in our simulations. We revised the GDP assumption in our January 2010 and Fall 2010 simulations to be consistent with this approach. Specifically, in this report, real GDP growth in the January 2010 and Fall 2010 simulations is based on the Trustees' 2009 and 2010 reports, respectively, and averages 2.1 percent over the long term in both sets of simulations. This is up from the average of 1.9 percent in our original January 2010 simulations and the same as the average real GDP growth in the original Fall 2010 simulations.
2. Prior to our Fall 2011 update, we adjusted the Trustees' intermediate projections for Social Security spending in our simulations to reflect the wage growth implied in our simulations. After we made the change to our GDP assumption described above, it was no longer necessary to adjust the Trustees' Social Security projections. Therefore, in this report, Social Security spending in the January 2010 and Fall 2010 simulations is based on the Trustees' 2009 and 2010 intermediate projections, respectively, without any additional adjustments.
3. In prior updates, our excess cost growth assumption, while based on growth for the U.S. health sector as a whole, was affected by productivity adjustments and other cost-containment mechanisms for Medicare. Beginning with our Fall 2012 update, we removed the effects of productivity adjustments and other cost-containment mechanisms for Medicare from our estimates of excess cost growth for Medicaid, CHIP, and exchange subsidies. We made similar changes to excess cost growth in our January 2010 and Fall 2010 simulations. In the revised January 2010 and Fall 2010 simulations used in this report, excess cost growth for Medicaid, CHIP, and exchange subsidies averages 0.8 percentage points per year over the long term in both the Baseline Extended and Alternative simulations. This is roughly the same as the assumption used in the original January 2010 Baseline Extended and Alternative simulations and the original Fall 2010 Alternative simulation, and a small increase from the assumption used in the original Fall 2010 Baseline Extended simulation, which averaged 0.7 percentage points per year.
Together, these changes reduced the deficit in 2050 by 0.9 percentage points of GDP in our January 2010 Baseline Extended simulation and by 1.3 percentage points in our January 2010 Alternative simulation. They also reduced the deficit in 2050 by 0.2 percentage points of GDP in our Fall 2010 Baseline Extended simulation.
These changes did not affect the size of the deficit in 2050 in our Fall 2010 Alternative simulation. Because of these changes, the assumptions and results of the simulations in this report differ slightly from those originally published in 2010. Tables 1 and 2 list the key budget assumptions underlying the January 2010 and Fall 2010 Baseline Extended and Alternative simulations used in this report. Through 2020, GDP grows at the rates underlying CBO's most recent baseline estimates at the time the simulations were run. Thereafter, we follow the intermediate estimates from the most recent Trustees' report at the time the simulations were run. These estimates are consistent with the growth in labor force, wages, and other factors underlying the estimates for Social Security and Medicare spending in our simulations. GDP is held constant across simulations and does not respond to changes in fiscal policy. The interest rate on federal debt is also held constant, even when deficits climb and the national saving rate plummets. Under such conditions, interest rates could rise and federal interest payments could increase more rapidly than our simulations display. Sensitivity analyses reveal that variations in these assumptions generally would not affect the relative outcomes of alternative policies. The key economic assumptions in the simulations in this report are shown in table 3. Overall, the federal government's long-term fiscal outlook has improved since 2010, based in part on laws enacted after PPACA, including the Budget Control Act of 2011. (See fig. 10.) The provisions of the Budget Control Act primarily affected discretionary spending, and under both of our simulations, discretionary spending as a share of the economy would be lower in 2022 than at any point in the last 50 years. The Budget Control Act's automatic enforcement procedures would reduce Medicare spending by up to 2 percent under current law. Many other mandatory programs, including Medicaid, are exempt from the spending reductions. Our Fall 2012 simulations show that health care spending remains a key driver of the federal government's long-term fiscal imbalance. Under the Fall 2012 Alternative simulation, spending for Medicare, Medicaid, CHIP, and federal exchange subsidies almost doubles as a share of GDP by 2035. The results of our more recent Fall 2012 simulations for spending on Medicaid, CHIP, and exchange subsidies do not differ significantly from the results of the Fall 2010 simulations that were run not long after the enactment of PPACA. Our most recent simulations, published in Fall 2012, incorporate CBO's and the Joint Committee on Taxation's revised estimates through 2022 for the coverage provisions following the Supreme Court's ruling. Spending on Medicaid, CHIP, and federal exchange subsidies is not significantly different from that in our Fall 2010 simulations, in part because the reduction in federal matching funds associated with covering fewer individuals in state Medicaid programs is partially offset by the increased costs of federal exchange subsidies as larger numbers of low-income individuals enroll in exchange plans. Medicare spending is slightly higher in our most recent Baseline Extended simulation due to technical refinements the Medicare Trustees made in response to recommendations by the 2010-2011 Technical Review Panel on the Medicare Trustees Report; these refinements were not directly related to PPACA.
In our most recent Alternative simulation, Medicare spending is slightly lower than in our Fall 2010 Alternative simulation due to a change in OACT's assumption for physician payment updates in its alternative projections; this change was also unrelated to PPACA. The key budget assumptions in our Fall 2012 simulations are shown in table 4. The key economic assumptions in our Fall 2012 simulations are shown in table 5. In addition to the contacts named above, Melissa Wolf (Assistant Director), Andrew Johnson, Richard Krashevski, Thomas McCabe, Thomas McCool, Michael O'Neill, Albert Sim, and Phyllis Thorburn made key contributions to this report. Robert Robinson assisted with the graphics.
GAO regularly prepares long-term federal budget simulations under different assumptions about broad fiscal policy decisions. GAO's Baseline Extended simulation illustrates the long-term outlook assuming current law at the time the simulation was run is generally continued, while the Alternative simulation illustrates the long-term outlook assuming historical trends and past policy preferences continue. Under either set of assumptions, these simulations show that the federal budget is on an unsustainable fiscal path, driven on the spending side by rising health care costs and the aging of the population. PPACA provides for expanded eligibility for Medicaid and federal subsidies to help individuals obtain private health insurance, and it includes provisions designed to slow the growth of federal health care spending. GAO was asked to describe the long-term effects of PPACA on the federal fiscal outlook under both its Baseline Extended and Alternative simulations; how changes in assumptions for federal health care cost growth might affect the outlook; and the key drivers of health care cost growth and how the uncertainty associated with each may influence future health care spending. To do this, GAO compared the results of its long-term fiscal simulations from before and after the enactment of PPACA and examined the key factors that contributed to changes in revenue and spending components; reviewed trends in health care cost growth and performed a sensitivity analysis varying rates of excess cost growth; and reviewed literature describing key drivers of health care cost growth and areas of uncertainty related to projections of federal health care costs. The effect of the Patient Protection and Affordable Care Act (PPACA), enacted in March 2010, on the long-term fiscal outlook depends largely on whether elements in PPACA designed to control cost growth are sustained. There was notable improvement in the longer-term outlook after the enactment of PPACA under GAO's Fall 2010 Baseline Extended simulation, which assumes both the expansion of health care coverage and the full implementation and effectiveness of the cost-containment provisions over the entire 75-year simulation period. However, the federal budget remains on an unsustainable path. Further, questions about the implementation and sustainability of these provisions have been raised by the Centers for Medicare & Medicaid Services' Office of the Actuary and others, due in part to challenges in sustaining increased health care productivity. The Fall 2010 Alternative simulation assumed cost-containment mechanisms specified in PPACA were phased out over time while the additional costs associated with expanding federal health care coverage remained. Under these assumptions, the long-term outlook worsened slightly compared to the pre-PPACA January 2010 simulation. Federal health care spending is expected to continue growing faster than the economy. In the near term, this is driven by increasing enrollment in federal health care programs due to the aging of the population and expanded eligibility. Over the longer term, excess cost growth (the extent to which growth of health care spending per capita exceeds growth of income per capita) is a key driver. Slowing the rate of health care cost growth would help put the budget on a more sustainable path.
There is general agreement that technological advancement has been the key factor in health care cost growth in the past, along with the effects of expanding health insurance coverage and increasing income, but there is considerable uncertainty about the magnitude of the impact that the different factors will have on future health care cost growth.
VA's mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. VA is the second largest federal department and, in addition to its central office located in Washington, D.C., has field offices throughout the United States, as well as the U.S. territories and the Philippines. The department's three major components, the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA), are primarily responsible for carrying out its mission. More specifically, VHA provides health care services, including primary care and specialized care, and it performs research and development to address veterans' needs. VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. Further, NCA provides burial and memorial benefits to veterans and their families. Collectively, the three components rely on approximately 340,000 employees to provide services and benefits. These employees work in VA's Washington, D.C., headquarters, as well as in 167 medical centers, approximately 800 community-based outpatient clinics, 300 veterans centers, 56 regional offices, and 131 national and 90 state or tribal cemeteries situated throughout the nation. The use of IT is critically important to VA's efforts to provide benefits and services to veterans. As such, the department operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers, veteran-facing systems, benefits delivery systems, memorial services, and all other systems supporting the department's mission. The infrastructure is to provide for the data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. Toward this end, the department operates approximately 240 information systems, manages approximately 314,000 desktop computers and 30,000 laptops, and administers nearly 460,000 network user accounts for employees and contractors to facilitate providing benefits and health care to veterans. These systems are used for the determination of benefits, benefits claims processing, patient admission to hospitals and clinics, and access to health records, among other services. VHA's systems provide capabilities to establish and maintain electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings. The department's health information system, the Veterans Health Information Systems and Technology Architecture (VistA), serves an essential role in helping the department fulfill its health care delivery mission. Specifically, VistA is an integrated medical information system that was developed in-house by the department's clinicians and IT personnel and has been in operation since the early 1980s. The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications.
Within VistA, an application called the Computerized Patient Record System enables the department to create and manage an individual electronic health record for each VA patient. VBA relies on VBMS to collect and store information such as military service records, medical examinations, and treatment records from VA, DOD, and private medical service providers. In 2014, VA issued its 6-year strategic plan, which emphasizes the department's goals of increasing veterans' access to benefits and services, eliminating the disability claims backlog, and ending veteran homelessness. According to the plan, the department intends to improve access to benefits and services through the use of enhanced technology that provides veterans with access to more effective care management. The plan also calls for VA to eliminate the disability claims backlog by fully implementing an electronic claims process that is intended to reduce processing time and increase accuracy. Further, the department has an initiative under way that provides services, such as health care, housing assistance, and job training, to end veteran homelessness. Toward this end, VA is working with other agencies, such as the Department of Health and Human Services, to implement more coordinated data entry systems to streamline and facilitate access to appropriate housing and services. VA reported spending about $3.9 billion to improve and maintain its IT resources in fiscal year 2015: approximately $548 million on new systems development efforts, approximately $2.3 billion on maintaining existing systems, and approximately $1 billion on payroll and administration. For fiscal year 2016, the department received appropriations of about $4.1 billion for IT: about $505 million for new systems development, about $2.5 billion for maintaining existing systems, and about $1.1 billion for payroll and administration. For fiscal year 2017, the department's budget request included nearly $4.3 billion for IT: approximately $471 million for new systems development efforts, approximately $2.5 billion for maintaining existing systems, and approximately $1.3 billion for payroll and administration. In addition, in its 2017 budget submission, the department requested appropriations to make improvements in a number of areas, including:
veterans' access to health care, to include enhancing health care-related systems, standardizing immunization data, and expanding telehealth services ($186.7 million);
veterans' access to benefits by modernizing systems supporting benefits delivery, such as VBMS and the Veterans Services Network ($236.3 million);
veterans' experiences with VA by focusing on integrated service delivery and streamlined identification processes ($171.3 million);
VA employees' experiences by enhancing internal IT systems ($13
information security, including implementing strong authentication, ensuring repeatable processes and procedures, adopting modern technology, and enhancing the detection of cyber vulnerabilities and protection from cyber threats ($370.1 million).
Electronic health records are particularly crucial for optimizing the health care provided to veterans, many of whom may have health records residing at multiple medical facilities within and outside the United States. Taking steps toward interoperability (that is, collecting, storing, retrieving, and transferring veterans' health records electronically) is important to improving the quality and efficiency of care.
One of the goals of interoperability is to ensure that patients' electronic health information is available from provider to provider, regardless of where it originated or resides. Since 1998, VA has undertaken a patchwork of initiatives with DOD to allow the departments' health information systems to exchange information and increase interoperability. Among others, these have included initiatives to share viewable data in the two departments' existing (legacy) systems, link and share computable data between the departments' updated health data repositories, and jointly develop a single integrated system that would be used by both departments. Table 1 summarizes a number of these key initiatives. In addition to the initiatives mentioned in table 1, VA has worked in conjunction with DOD to respond to provisions in the National Defense Authorization Act for Fiscal Year 2008. This act required the departments to jointly develop and implement fully interoperable electronic health record systems or capabilities by 2009. Yet, even as the departments undertook numerous interoperability and modernization initiatives, they faced significant challenges and made slow progress. We have reported, for example, that the two departments' success in identifying and implementing joint IT solutions has been hindered by an inability to articulate explicit plans, goals, and time frames for meeting their common health IT needs. In March 2011, the secretaries of VA and DOD announced that they would develop a new, joint integrated electronic health record system (referred to as iEHR). This was intended to replace the departments' separate systems with a single common system, thus sidestepping many of the challenges they had previously encountered in trying to achieve interoperability. However, in February 2013, about 2 years after initiating iEHR, the secretaries announced that the departments were abandoning plans to develop a joint system, due to concerns about the program's cost, schedule, and ability to meet deadlines. The Interagency Program Office (IPO), put in place to be accountable for VA's and DOD's efforts to achieve interoperability, reported spending about $564 million on iEHR between October 2011 and June 2013. Following the termination of the iEHR initiative, VA and DOD moved forward with plans to separately modernize their respective electronic health record systems. In light of VA and DOD not having implemented a solution that allowed for the seamless electronic sharing of health care data, the National Defense Authorization Act for Fiscal Year 2014 included requirements pertaining to the implementation, design, and planning for interoperability between the departments' electronic health record systems. Among other actions, provisions in the act directed each department to (1) ensure that all health care data contained in their systems (VA's VistA and DOD's Armed Forces Health Longitudinal Technology Application, referred to as AHLTA) complied with national standards and were computable in real time by October 1, 2014; and (2) deploy modernized electronic health record software to support clinicians while ensuring full standards-based interoperability by December 31, 2016. In August 2015, we reported that VA, in conjunction with DOD, had engaged in several near-term efforts focused on expanding interoperability between their existing electronic health record systems.
For example, the departments had analyzed data related to 25 “domains” identified by the Interagency Clinical Informatics Board and mapped health data in their existing systems to standards identified by the IPO. The departments also had expanded the functionality of their Joint Legacy Viewer, a tool that allows clinicians to view certain health care data from both departments. More recently, in April 2016, VA and DOD certified that all health care data in their systems complied with national standards and were computable in real time. However, VA acknowledged that it did not expect to complete a number of key activities related to its electronic health record system until sometime after the December 31, 2016, statutory deadline for deploying modernized electronic health record software with interoperability. Specifically, the department stated that deployment of a modernized VistA system at all locations and for all users is not planned until 2018. VA's recently departed Chief Information Officer (CIO) initiated an effort to transform the focus and functions of the Office of Information and Technology (OI&T), which is responsible for providing IT services across VA and managing the department's IT assets and resources. The CIO's transformation strategy, initiated in January 2016, called for OI&T to focus on stabilizing and streamlining processes, mitigating weaknesses highlighted in GAO assessments, and improving outcomes by institutionalizing a new set of IT management capabilities. As part of this transformation, the CIO began transitioning the oversight of and accountability for IT projects to a new project management process, called the Veteran-focused Integration Process, in January 2016, in an effort to streamline systems development and the delivery of new IT capabilities. The CIO established five new functions within OI&T:
The enterprise program management office is to serve as OI&T's portfolio management and project tracking organization.
The account management function is to be responsible for managing the IT needs of VA's major components.
The quality and compliance function is to be responsible for establishing policy governance and standards and ensuring adherence to them.
The data management organization is expected to improve both service delivery and the veteran experience by engaging with data stewards to ensure the accuracy and security of the information collected by VA.
The strategic sourcing function is to be responsible for establishing an approach to fulfilling the department's requirements with vendors that provide solutions for those requirements, managing vendor selection, tracking vendor performance and contract deliverables, and sharing insights on new technologies and capabilities to improve the workforce knowledge base.
According to the former CIO, the transformation strategy was completed in the first quarter of fiscal year 2017. Recognizing the importance of reforming the government-wide management of IT, Congress enacted the Federal Information Technology Acquisition Reform provisions (commonly referred to as FITARA) in December 2014 as part of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015. The law was intended to improve covered agencies' acquisitions of IT and further enable Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas, including data center consolidation.
Under FITARA, VA and other covered agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. FITARA also requires OMB to develop a goal for how much is to be saved through this initiative and to provide annual reports on the cost savings achieved. In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance includes requirements for covered agencies such as VA to:
maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency;
develop cost savings targets due to consolidation and optimization for fiscal years 2016 through 2018 and report any actual realized cost savings; and
measure progress toward meeting optimization metrics on a quarterly basis.
The guidance also directs each covered agency to develop a data center consolidation and optimization strategic plan that defines the agency's data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO indicating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard displaying consolidation-related cost savings and optimization performance information for the agencies.
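The sketch below illustrates one way an agency might structure the inventory and savings reporting that the OMB guidance describes. The record fields, facility identifiers, and dollar figures are hypothetical illustrations, not OMB's actual reporting schema.

# Hypothetical sketch of a data center inventory record and a savings
# roll-up of the kind the OMB guidance calls for. Field names, facility
# identifiers, and all figures are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataCenterRecord:
    facility_id: str                     # hypothetical identifier
    operated_for_agency: bool            # owned/operated by or on behalf of the agency
    planned_closure_year: Optional[int]  # None if no consolidation is planned
    annual_cost_millions: float

def projected_savings(inventory, through_year):
    """Sum annual costs of facilities slated to close by the given year,
    as a rough proxy for consolidation savings to report."""
    return sum(r.annual_cost_millions for r in inventory
               if r.planned_closure_year is not None
               and r.planned_closure_year <= through_year)

inventory = [
    DataCenterRecord("DC-001", True, 2017, 3.2),
    DataCenterRecord("DC-002", True, None, 5.0),
]
print(f"Projected annual savings through 2018: "
      f"${projected_savings(inventory, 2018):.1f} million")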
Although VA has proceeded with its program to modernize VistA (known as VistA Evolution), the department's long-term plan for meeting its electronic health record system needs beyond fiscal year 2018 is uncertain. The department's current VistA modernization approach is reflected in an interoperability plan and a roadmap describing functional capabilities to be deployed through fiscal year 2018. Specifically, these documents describe the department's approach for modernizing its existing electronic health record system through the VistA Evolution program, while helping to facilitate interoperability with DOD's system and the private sector. For example, the VA Interoperability Plan, issued in June 2014, describes activities intended to improve VistA's technical interoperability, such as standardizing the VistA software across the department to simplify data sharing. In addition, the VistA 4 Roadmap, which further describes VA's plan for modernizing the system, identifies four sets of functional capabilities that are expected to be incrementally deployed during fiscal years 2014 through 2018 to modernize the VistA system and enhance interoperability. According to the roadmap, the first set of capabilities was delivered by the end of September 2014 and included access to the Joint Legacy Viewer and a foundation for future functionality, such as an enhanced graphical user interface. Another interoperable capability that is expected to be incrementally delivered over the course of the VistA modernization program is the enterprise health management platform. The department has stated that this platform is expected to provide clinicians with a customizable view of a health record that can integrate data from VA, DOD, and third-party providers. Also, when fully deployed, VA expects the enterprise health management platform to replace the Joint Legacy Viewer. However, an independent assessment of health IT at VA questioned whether the VistA Evolution program to modernize the electronic health record system can overcome the variety of risks and technical issues that have plagued prior VA initiatives of similar size and complexity. For example, the study raised questions regarding the lack of clear advances made during the past decade and the increasing amount of time needed for VA to release new health IT capabilities. Given the concerns identified, the study recommended that VA assess the costs and benefits of various alternatives for delivering the modernized capabilities, such as commercially available off-the-shelf electronic health record systems, open source systems, and continued development of VistA. In speaking about this matter, VA's former Under Secretary for Health asserted that the department will follow through on its plans to complete the VistA Evolution program in fiscal year 2018. However, the former CIO also indicated that the department would reconsider how best to meet its electronic health record system needs beyond fiscal year 2018. As such, VA's approach to addressing its electronic health record system needs remains uncertain. Beyond modernizing VistA, VA has undertaken numerous initiatives with DOD intended to advance electronic health record interoperability between the two departments. Yet, a significant concern is that the departments have not identified outcome-oriented goals and metrics to clearly define what they aim to achieve from their interoperability efforts, and the value and benefits these efforts are expected to yield. As we have stressed in our prior work and guidance, assessing the performance of a program should include measuring its outcomes in terms of the results of its products or services. In this case, such outcomes could include improvements in the quality of health care or clinician satisfaction. Establishing outcome-oriented goals and metrics is essential to determining whether a program is delivering value. The IPO is responsible for monitoring and reporting on VA's and DOD's progress in achieving interoperability and for coordinating with the departments to ensure that these efforts enhance health care services. Toward this end, the office issued guidance that identified a variety of process-oriented metrics to be tracked, such as the percentage of health data domains that have been mapped to national standards. The guidance also identified metrics to be reported that relate to tracking the amounts of certain types of data being exchanged between the departments using existing capabilities. This would include, for example, laboratory reports transferred from DOD to VA via the Federal Health Information Exchange and patient queries submitted by providers through the Bidirectional Health Information Exchange. Nevertheless, in our August 2015 report, we noted that the IPO had not specified outcome-oriented metrics and goals that could be used to gauge the impact of the interoperable health record capabilities on the departments' health care services. At that time, the acting director of the IPO stated that the office was working to identify metrics that would be more meaningful, such as metrics on the quality of a user's experience or on improvements in health outcomes. However, the office had not established a time frame for completing the outcome-oriented metrics and incorporating them into the office's guidance.
In the report, we stressed that using an effective outcome-based approach could provide the two departments with a more accurate picture of their progress toward achieving interoperability, and the value and benefits generated. Accordingly, we recommended that the departments, working with the IPO, establish a time frame for identifying outcome-oriented metrics; define related goals as a basis for determining the extent to which the departments’ modernized electronic health record systems are achieving interoperability; and update IPO guidance accordingly. Both departments concurred with our recommendations. Further, since that time, VA has established a performance architecture program that has begun to define an approach for identifying outcome-oriented metrics focused on health outcomes in selected clinical areas, and it also has begun to establish baseline measurements. We intend to continue monitoring the departments’ efforts to determine how these metrics define and measure the results achieved by interoperability between the departments.

VA has moved forward with modernizing VistA despite concerns that doing so is potentially duplicative with DOD’s acquisition of a commercially available electronic health record system. Specifically, VA took this course of action even though it has many health care business needs in common with DOD. For example, in May 2010, both departments issued a report on medical IT to congressional committees that identified 10 areas—inpatient documentation, outpatient documentation, pharmacy, laboratory, order entry and management, scheduling, imaging and radiology, third-party billing, registration, and data sharing—in which the departments have common business needs. Further, the results of a 2008 consultant’s study pointed out that over 97 percent of inpatient requirements for electronic health record systems are common to both departments. We also issued several prior reports regarding the plans for separate systems, in which we noted that the two departments did not substantiate their claims that VA’s VistA modernization, together with DOD’s acquisition of a new system, would be achieved faster and at less cost than developing a single, joint electronic health record system. Moreover, we noted that the departments’ plans to modernize their two separate systems were duplicative and stressed that their decisions to do so should be justified by comparing the costs and schedules of alternate approaches. We recommended that VA and DOD develop cost and schedule estimates that would include all elements of their approach (i.e., to modernize both departments’ health information systems and establish interoperability between them) and compare them with estimates of the cost and schedule for developing a single, integrated system. If the planned approach for separate systems was projected to cost more or take longer, we recommended that the departments provide a rationale for pursuing such an approach. VA, as well as DOD, agreed with our recommendations and stated that an initial comparison had indicated that the approach involving separate systems would be more cost effective. However, as of January 2017, the departments had not provided us with a comparison of the estimated costs of their current and previous approaches.
Further, with respect to their assertions that separate systems could be achieved faster, both departments had developed schedules indicating that their separate modernization efforts are not expected to be completed until after the 2017 planned completion date for the previous single-system approach.

In February 2015, we designated VA health care as a high-risk area. Among the five broad areas contributing to our determination was the department’s IT challenges. Of particular concern was the failed modernization of a system to support the department’s outpatient appointment scheduling. We have previously reported on the department’s outpatient appointment scheduling system, which is about 30 years old. Among the problems that VA employees responsible for scheduling appointments have cited are that the system’s commands require the use of many keystrokes and that it does not allow them to view multiple screens at once. Thus, schedulers must open and close multiple screens to check a provider’s or a clinic’s full availability when setting up a medical appointment, which is time-consuming and can lead to errors.

In May 2010, we reported that, after spending an estimated $127 million over 9 years on its outpatient scheduling system modernization project, VA had not implemented any of the planned system’s capabilities and was essentially starting over by beginning a new initiative to build or purchase another scheduling system. We also noted that VA had not developed a project plan or schedule for the new initiative, stating that it intended to do so after determining whether to build or purchase the new system. We recommended that the department take six actions to improve key systems development and acquisition processes essential to the second outpatient scheduling system effort. The department generally concurred with our recommendations but, as of May 2016, had not addressed four of the six recommendations. Addressing our recommendations should better position VA to effectively modernize its outpatient scheduling system and, ultimately, improve the quality of care that veterans receive.

In September 2015, we reported that VBA had made progress in developing and implementing VBMS, its system that is to be used for processing disability benefit claims. Specifically, it had deployed the initial version of the system to all of its regional offices as of June 2013. Further, after initial deployment, VBA continued developing and implementing additional system functionality and enhancements to support the electronic processing of disability compensation claims. As a result, 95 percent of records related to veterans’ disability claims were electronic and resided in the system. Nevertheless, we found that VBMS was not able to fully support disability and pension claims, as well as appeals processing. While the Under Secretary for Benefits stated in March 2013 that the development of the system was expected to be completed in 2015, implementation of functionality to fully support electronic claims processing was delayed beyond 2015. In addition, VBA had not produced a plan that identified when the system would be completed. Accordingly, holding VBA management accountable for meeting a time frame and demonstrating progress was difficult. Our report further noted that, even as VBA continued its efforts to complete the development and implementation of VBMS, three areas were in need of increased management attention.
Cost estimating: The program office did not have a reliable estimate of the cost for completing the system. Without such an estimate, VBA management and the department’s stakeholders had a limited view of the system’s future resource needs, and the program risked not having sufficient funding to complete development and implementation of the system.

System availability: Although VBA had improved its performance regarding system availability to users, it had not established system response time goals. Without such goals, users did not have an expectation of the system response times they could anticipate, and management did not have an indication of how well the system performed relative to performance goals.

System defects: While the program had actively managed system defects, a recent system release had included unresolved defects that impacted system performance and users’ experiences. Continuing to deploy releases with large numbers of defects that reduced system functionality could have adversely affected users’ ability to process disability claims in an efficient manner.

We also noted in the report that VBA had not conducted a customer satisfaction survey that would allow the department to compile data on how users viewed the system’s performance, and ultimately, to develop goals for improving the system. Our survey of VBMS users in 2014 found that a majority of them were satisfied with the system, but that decision review officers were considerably less satisfied. However, while the results of our survey provided VBA with data about users’ satisfaction with the system, the absence of user satisfaction goals limited the utility of the survey results. Specifically, without having established goals to define user satisfaction, VBA did not have a basis for gauging the success of its efforts to promote satisfaction with the system, or for identifying areas where its efforts to complete development and implementation of the system might need attention. We recommended, among other actions, that the department develop a plan with a time frame and a reliable cost estimate for completing VBMS, establish goals for system response time, assess user satisfaction, and establish satisfaction goals to promote improvement. While all of our recommendations currently remain open, the department indicated that it has begun taking steps to address them. For example, the department informed us of its plans to distribute its own survey to measure users’ satisfaction with VBMS and to have the results of this survey analyzed by May 2017. In addition, the department has developed draft metrics for measuring the performance of the most commonly executed transactions within VBMS. Continued attention to these important areas can improve VA’s efforts to effectively complete the development and implementation of VBMS and, in turn, more effectively support the department’s processing of disability benefit claims.

We previously reported that VA was among the agencies that had collectively made progress on their data center closure efforts; nevertheless, it had fallen short of OMB’s goal for agencies to close 40 percent of all non-core centers by the end of fiscal year 2015. VA’s progress toward closing data centers, and realizing the associated cost savings, lagged behind that of most other covered agencies.
Specifically, we reported that VA’s closure of 20 out of its total of 356 data centers gave the department a 6 percent closure rate through fiscal year 2015—ranking its closure rate 19th lowest out of the 24 agencies we studied. Further, when we took into account the data centers that the department planned to close through fiscal year 2019, VA’s 8 percent closure rate ranked 21st lowest out of 24. With regard to cost savings and avoidance resulting from data center consolidation, our analysis of the department’s data identified a total of $19.1 million in reported cost savings or avoidances from fiscal year 2011 through fiscal year 2015. This equated to only about 0.7 percent of the total of approximately $2.8 billion that all 24 agencies reported saving or avoiding during the same time period. Also, when we reported on this matter in March 2016, the department had not yet estimated any planned cost savings or avoidances from further data center consolidation during fiscal years 2017 through 2019.

VA also lagged behind other agencies in making progress toward addressing data center optimization metrics established by OMB in 2014. These metrics, which applied only to core data centers, addressed several data center optimization areas, including cost per operating system, energy, facility, labor, storage, and virtualization. Further, OMB established a target value for nine metrics that agencies were expected to achieve by the end of fiscal year 2015. As we previously reported, 20 of 22 agencies with core data centers met at least one of OMB’s optimization targets. VA was the only agency that reported meeting none of the nine targets. Accordingly, we recommended that VA take action to improve its progress in the data center optimization areas that we reported as not meeting OMB’s established targets. The department agreed with our recommendation and has since stated that approximately 70 data centers have been tentatively identified for potential consolidation by the end of fiscal year 2019. VA is anticipating that, upon completion, these consolidations will improve its performance on OMB’s optimization metrics.

The federal government spent more than 75 percent of the total amount budgeted for IT for fiscal year 2015 on operations and maintenance, including for the use of legacy IT systems that are becoming increasingly obsolete. VA is among a handful of departments with one or more archaic legacy systems. Specifically, our recent report on legacy systems used by federal agencies identified 2 of the department’s systems as being over 50 years old, and among the 10 oldest investments and/or systems that were reported by 12 selected agencies.

Personnel and Accounting Integrated Data (PAID)—This 53-year-old system automates time and attendance for employees, timekeepers, payroll, and supervisors. It is written in Common Business Oriented Language (COBOL), a programming language developed in the late 1950s and early 1960s, and runs on IBM mainframes. VA plans to replace this system with the Human Resources Information System Shared Service Center in 2017.

Benefits Delivery Network (BDN)—This 51-year-old system tracks claims filed by veterans for benefits, eligibility, and dates of death. It is a suite of COBOL mainframe applications. VA has general plans to roll the capabilities of BDN into another system but has not established a firm date for doing so.
Ongoing use of antiquated systems such as PAID and BDN contributes to agencies spending a large, and increasing, proportion of their IT budgets on operations and maintenance of systems that have outlived their effectiveness and are consuming resources that outweigh their benefits. Accordingly, we recommended that VA identify and plan to modernize or replace its legacy systems. VA concurred with our recommendation and stated that it plans to retire PAID in 2017 and to retire BDN in 2018.

In conclusion, effective IT management is critical to the performance of VA’s mission. However, the department faces challenges in several key areas, including its approach to pursuing electronic health record interoperability with DOD. Specifically, VA’s reconsideration of its approach to modernizing VistA raises uncertainty about how it intends to accomplish this important endeavor. VA has not yet defined the extent of interoperability it needs to provide the highest possible quality of care to its patients, as well as how and when the department intends to achieve this extent of interoperability with DOD. Further, VA has not justified the development and operation of an electronic health record system that is separate from DOD’s system, even though the departments have common system needs. The department also faces challenges in modernizing its approximately 30-year-old outpatient appointment scheduling system and improving its development and implementation of VBMS. Further, the department has not yet demonstrated expected progress toward consolidating and optimizing the performance of its data centers. In addition, VA’s continued operation of two of the oldest legacy IT systems in the federal government raises concern about the extent to which the department continues to spend funds on IT systems that are no longer effective or cost beneficial. While we recognize that VA has initiated steps to mitigate the IT management weaknesses we have identified, sustained management attention and organizational commitment will be essential to ensuring that the transformation is successful and that the weaknesses are fully addressed.

Chairman Roe, Ranking Member Walz, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

If you or your staffs have any questions about this testimony, please contact David A. Powner at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Mark Bird (Assistant Director), Eric Trout (Analyst in Charge), Rebecca Eyler, Scott Pettis, Priscilla Smith, and Christy Tyson.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The use of IT is crucial to helping VA effectively serve the nation's veterans, and each year, the department spends over $4 billion on IT. However, over many years, VA has had difficulty managing its information systems, raising questions about the effectiveness of its operations and its ability to deliver intended outcomes needed to help advance the department's mission. GAO has previously reported on a number of the department's IT initiatives. This statement summarizes results from key GAO reports related to increasing electronic health record interoperability between VA and DOD; system challenges that have contributed to GAO's designation of VA health care as a high-risk area; and VA's development of its system for processing disability benefits, data center consolidation, and legacy systems.

GAO noted in July 2016 that the Department of Veterans Affairs (VA) had moved forward with an effort to modernize its health information system—the Veterans Health Information Systems and Technology Architecture (VistA)—but that the department is uncertain of its long-term plan for addressing its electronic health record system needs beyond fiscal year 2018. Beyond modernizing VistA, GAO reported in August 2015 that VA and the Department of Defense (DOD) had not identified outcome-oriented goals and metrics to clearly define what they aim to achieve from their efforts to increase electronic health record interoperability (i.e., the electronic exchange and use of health records) between the two departments. Moreover, VA has begun to modernize VistA separate from DOD's planned acquisition of a commercially available electronic health record system, even though both departments have many health care business needs in common. In 2014, GAO noted that the departments' decision to abandon the development of a single system in favor of modernizing their two separate systems was not justified and was identified as an example of duplication among government activities. The departments have not yet provided a comparison of the estimated costs and schedules of their current and previous approaches, as GAO recommended.

In February 2015, GAO designated VA health care as a high-risk area, with IT challenges being one contributing factor. Specifically, GAO noted that the outpatient appointment scheduling system, which is currently about 30 years old, is time-consuming to use and error prone. However, the project to modernize that system failed after VA spent an estimated $127 million over 9 years. VA has begun a new initiative to build or purchase another scheduling system.

GAO reported in September 2015 that VA had made progress implementing the Veterans Benefits Management System (VBMS) for processing disability benefits. However, the department had not developed a time frame and a reliable cost estimate for completing VBMS. VA also had not established goals for system response time, and had not minimized incidences of high and medium severity system defects for future VBMS releases. Further, the department had not assessed user satisfaction, or established user satisfaction goals.

In addition, VA's consolidation and closure of data centers has lagged behind other agencies, as GAO reported in March 2016. For example, VA's closure of 20 out of a total of 356 data centers gave the department a 6 percent closure rate through fiscal year 2015 that ranked 19th of the 24 agencies in GAO's study.
Also, VA's reported $19.1 million in cost savings or avoidances from fiscal year 2011 through fiscal year 2015 was only about 0.7 percent of the total of about $2.8 billion that all 24 agencies reported saving. In addition, the department had not met any of the data center optimization targets established by the Office of Management and Budget.

VA was identified in a May 2016 GAO report as using antiquated and expensive-to-maintain legacy IT systems. At that time, the 53-year-old Personnel and Accounting Integrated Data (PAID) system was slated to be replaced in 2017. Further, VA plans to retire in 2018 the 51-year-old Benefits Delivery Network, which tracks veterans' claims for benefits, eligibility, and dates of death.

GAO has made numerous recommendations to VA to improve the modernization of its IT systems. For example, GAO has recommended that VA develop goals and metrics for determining the extent to which its modernized electronic health record system is achieving interoperability with DOD's; address challenges associated with modernizing its scheduling system; address shortcomings with VBMS planning and implementation; take actions to improve progress in data center optimization; and modernize or replace obsolete legacy IT systems. VA agreed with these recommendations and said it has begun taking actions to implement them.
Beginning on April 27, 2005, DOD made TRICARE coverage available for purchase through TRS for certain reservists when they were not on active duty or eligible for pre- or postactivation TRICARE coverage. Enrollees in TRS can obtain care from MTFs or from TRICARE-authorized civilian providers or hospitals. TRS enrollees can obtain prescription drugs through TRICARE’s pharmacy system, which includes MTF pharmacies, network retail pharmacies, nonnetwork retail pharmacies, and the TRICARE Mail Order Pharmacy. Since 2005, Congress has made this benefit available to a growing number of members of the Selected Reserves.

The NDAA for Fiscal Year 2005 authorized the TRS program. As originally authorized, TRS made TRICARE coverage available to certain members of the Selected Reserves—that is, reservists mobilized since September 11, 2001, who had continuous qualifying service on active duty for 90 days or more in support of a contingency operation. To qualify for TRS, reservists had to enter into an agreement with their respective reserve components to continue to serve in the Selected Reserves in exchange for TRS coverage. For each 90-day period of qualifying service in a contingency operation, reservists could purchase 1 year of TRS coverage. Electing to enroll in this TRS program was a onetime opportunity, and as originally authorized, the program required reservists to sign the new service agreement and register for TRS before leaving active duty service. Reservists who qualified could also obtain coverage for their dependents by paying the appropriate premium.

The NDAA for Fiscal Year 2006 expanded the number of reservists and dependents who qualify to participate in the TRS program. Under the expanded program, which became effective on October 1, 2006, almost all reservists and dependents—regardless of the reservists’ prior active duty service—had the option of purchasing TRICARE coverage. Similar to the TRS program as it was originally authorized, members of the Selected Reserves and their dependents choosing to enroll in the expanded TRS program had to pay a monthly premium to receive TRICARE coverage. The portion of the premium paid by reservists in the Selected Reserves and their dependents for TRS coverage varied based on certain qualifying conditions that had to be met, such as whether the reservist also had access to an employer-sponsored health plan. The NDAA for Fiscal Year 2006 established three levels—which DOD calls tiers—of qualification for TRS, with enrollees paying different portions of the premium based on the tier for which they qualified. Those who would have qualified under the original TRS program, because they had qualifying service in support of a contingency operation, paid the lowest premium. In another change to the program, those reservists with qualifying service in support of a contingency operation now had up to 90 days after leaving active duty to sign the new service agreement required to qualify for this lowest premium tier.

The NDAA for Fiscal Year 2007 significantly restructured the TRS program by eliminating the three-tiered premium structure. The act also changed TRS qualification criteria for members of the Selected Reserves, generally allowing these reservists to purchase TRICARE coverage for themselves and their dependents at the lowest premium—formerly paid by enrollees in tier 1—regardless of whether they have served on active duty in support of a contingency operation.
In addition, the act removed the requirement that reservists sign service agreements to be qualified for TRS. Instead, the act established that reservists in the Selected Reserves qualify for TRS for the duration of their service in the Selected Reserves. DOD implemented these changes on October 1, 2007. See table 1 for an overview of TRS qualification criteria and the monthly portion of the TRS premiums paid by reservists.

Currently, reservists who qualify for TRS may purchase TRS individual or family coverage at any time. Once enrolled in TRS, reservists and their dependents are able to obtain health care through MTFs, if appointments are available, or through TRICARE-authorized civilian providers or hospitals. Enrollees who choose to use civilian providers are subject to an annual deductible, co-payments, and coinsurance. When these enrollees use providers outside TRICARE’s civilian network, they pay higher cost shares and are considered to be using TRICARE Standard, the TRICARE option that is similar to a fee-for-service plan. When they use providers who are part of the TRICARE network, they pay discounted cost shares and are considered to be using TRICARE Extra, the TRICARE option that is similar to a preferred provider plan.

DOD is required by law to set premiums for TRS at a level that it determines to be reasonable using an appropriate actuarial basis. DOD officials told us that the department interpreted this to mean that TRS premiums should be set equal to the expected average costs per plan of providing the benefit. Beginning in 2005, DOD based TRS premiums on the premiums for the BCBS Standard plan offered through FEHBP because, at the time DOD was developing TRS, actual data on the costs of delivering TRS benefits for the TRS population did not exist. To set the premiums, DOD compared characteristics of the beneficiary populations in each group and subsequently adjusted the BCBS premiums for differences in age, gender, and family size between the TRS and BCBS populations. The population that qualifies for TRS is younger, has a higher percentage of males, and has a larger number of dependents per sponsor than the BCBS population. DOD concluded that, taken together, these factors caused expected health care costs for the TRS population to be lower than expected health care costs for the BCBS population. To account for these differences, DOD set the TRS premium for individual coverage 32 percent lower than the corresponding BCBS premium and set the TRS premium for family coverage 8 percent lower than the corresponding BCBS premium. According to DOD officials, the department based TRS premiums on BCBS premiums, rather than another health insurance plan’s premiums, because BCBS offers coverage that is similar to the coverage offered under TRICARE Standard. (For a comparison of cost-sharing provisions under TRS and BCBS Standard, see table 2.) In addition, like TRS, BCBS charges a separate premium for individual coverage and for family coverage, and each of these premiums is uniform nationally and updated annually. Furthermore, according to DOD officials, basing TRS premiums on BCBS premiums allowed the department to account for the effect of adverse selection on the department’s costs, because adverse selection is already accounted for in BCBS premiums.

In order to compensate for rising health care costs, DOD originally designed TRS premiums so that they would be adjusted each year based on annual adjustments in the total BCBS Standard premiums.
Under this method, if BCBS premiums increased by 8.5 percent from 2005 to 2006, TRS premiums would be increased by the same percentage, with new premiums effective at the start of each calendar year. DOD planned to continue using this method to adjust premiums in the immediate future but allowed for the possibility that it might change the methodology at some point. TRS premiums were increased by 8.5 percent for 2006 and scheduled to be increased by 1 percent for 2007, but a provision in the NDAA for Fiscal Year 2007 prevented this increase from being implemented for 2007. According to DOD officials, another reason DOD decided to use BCBS as the basis for annual TRS premium adjustments was that BCBS premiums are updated annually, and the new premiums are made public each October. DOD officials told us they did not want to use DOD data to adjust premiums because they believe that doing so would be less transparent; that is, they wanted to avoid any appearance that the data might have been manipulated to DOD’s own financial advantage.

In 2006, the premiums for both individual and family coverage under TRS exceeded the reported costs of providing TRICARE benefits through the program. The total premium for individual coverage under tier 1 was 72 percent higher than the average cost per plan of providing benefits through the program. Similarly, the total premium for family coverage under tier 1 was 45 percent higher than the average cost per plan of providing benefits. There are several reasons that basing TRS premiums on BCBS premiums did not successfully align TRS premiums with benefit costs. These include certain differences between the TRS and BCBS populations and certain differences between the two programs that DOD did not take into account. Experts indicated that data on the costs of delivering TRS benefits would provide DOD with an improved basis for adjusting premiums in future years.

In 2006, the premium for both individual and family coverage under TRS exceeded the reported costs per plan of providing TRICARE benefits through the program. For tier 1, the annual premium for individual plans of $3,471—including the share paid by enrollees and the share covered by DOD—was 72 percent higher than the average cost of providing benefits through TRS of $2,020 per plan. Similarly, the annual premium for family plans of $10,843 was 45 percent higher than the average cost of providing benefits through TRS of $7,496 per plan. (See fig. 1.) The average costs per TRS plan do not include certain administrative costs that DOD was not able to allocate specifically to TRS, such as advertising costs and program education. However, DOD officials told us that including these costs would not be sufficient to close the gap between TRS premiums and the average costs per plan. DOD also incurred start-up costs associated with establishing the TRS program, which are not included in the average costs per TRS plan because DOD did not intend for them to be covered by TRS premiums.

The discrepancy between TRS premiums and reported TRS costs has implications for DOD’s cost sharing with TRS enrollees. By statute, the portion of the TRS premium paid by enrollees in tier 1—and all enrollees as of October 1, 2007—is to cover 28 percent of the full premium. In 2006, TRS enrollees in tier 1 paid $972 for individual coverage and $3,036 for family coverage. This covered 48 percent of the average cost per individual plan and 41 percent of the average cost per family plan.
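These percentages follow directly from the dollar figures just cited. The short Python sketch below simply reproduces them; it assumes nothing beyond the amounts reported in this section and the statutory 28 percent tier 1 enrollee share.

```python
# Reproduces the 2006 tier 1 premium-versus-cost figures cited above.
# All dollar amounts are the reported values from this section.

ENROLLEE_SHARE = 0.28  # statutory enrollee portion of the full premium

# coverage type -> (total annual premium, average reported cost per plan)
plans = {
    "individual": (3_471, 2_020),
    "family": (10_843, 7_496),
}

for coverage, (premium, avg_cost) in plans.items():
    excess = premium / avg_cost - 1           # premium relative to cost
    enrollee_paid = ENROLLEE_SHARE * premium  # dollars paid by enrollees
    cost_share = enrollee_paid / avg_cost     # share of per-plan cost covered
    print(f"{coverage}: premium {excess:.0%} above average cost; "
          f"enrollees paid ${enrollee_paid:,.0f} "
          f"({cost_share:.0%} of average cost per plan)")
```

Run as written, this prints the 72 and 45 percent differences and the $972 and $3,036 enrollee payments (48 and 41 percent of the average per-plan costs) discussed above.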
Had DOD been successful in establishing TRS premiums that were equal to the average reported cost per TRS plan in 2006, enrollees’ share of the premium would have been $566 for individual coverage and $2,099 for family coverage in that year.

Basing TRS premiums on BCBS premiums is unlikely to align TRS premiums with benefit costs because of several differences between the TRS and BCBS populations and programs that DOD did not take into account. DOD based TRS premiums on BCBS premiums because, at the time DOD was developing TRS, actual data on the costs of delivering TRS benefits to the TRS population did not exist. However, experts we interviewed suggested that because of demographic differences between the TRS and BCBS populations, BCBS-based premiums are unlikely to reflect TRS costs. In setting TRS premiums, DOD adjusted BCBS premiums to account for differences in age, gender, and family size between the TRS and BCBS populations. However, DOD did not take into account other demographic differences that affect health care costs—such as enrollees’ geographic distribution and health status—because accounting for these differences is very difficult. The geographic distribution of a population is an important factor in predicting health care costs and corresponding health insurance premiums, in large part because physician payment rates vary across geographic locations. Furthermore, according to experts we interviewed, the most important predictors of health care costs are measures related to enrollees’ health status, which were not fully available to DOD when it first established TRS premiums.

Another factor that may have contributed to the disparity between TRS premiums and the program’s costs is the dissimilarity in the structures of the TRS and BCBS programs. While TRS premiums are designed to cover enrollees’ health care costs and certain administrative costs, BCBS premiums are designed to cover these costs and also may include contributions to or withdrawals from plan reserves and profits. As a result, changes in BCBS premiums are generally not equal to changes in BCBS program costs.

Experts indicated that data on the costs of delivering TRS benefits will provide DOD with an improved basis for adjusting premiums in future years. They informed us that there are several methods of setting health insurance premiums. The methods that are most successful in aligning premiums with the actual costs of providing benefits involve using program cost data when setting premiums. Although TRS cost data did not exist when the program was implemented, leading DOD to base TRS premiums on BCBS premiums, TRS cost data from 2005 and 2006 are now available. In DOD’s description of its methodology for establishing and adjusting TRS premiums in the Federal Register on March 16, 2005, DOD allowed for the possibility of using other means to adjust premiums in the future. It stated that it could base future changes in TRS premiums on actual cost data. However, DOD officials told us that the department has not used these data to adjust TRS premiums due to the limitations associated with using prior year costs to predict future costs. According to DOD officials, prior year claims data may not be indicative of future year claims costs due to the newness of the TRS program, recent changes to TRS, and the low number of enrollees.
However, TRS cost data reflect actual experience with the program, and any limitations associated with TRS cost data should decrease over time as DOD gains more experience with the program and more reservists enroll in it. Nonetheless, due to the uncertainty associated with predicting future health care costs, premiums are unlikely to exactly match program costs, even when they are based on cost data from prior years.

To help adjust for discrepancies between premiums and program costs, some health insurance programs have established reserve accounts, which may be used to defray future premium increases or cover unexpected shortfalls from higher-than-anticipated costs. For example, as noted earlier, the Office of Personnel Management administers a reserve account for each FEHBP plan, including BCBS. These reserve accounts are funded by a surcharge of up to 3 percent of a plan’s premium. Once funds in the reserve accounts exceed certain minimum balances, they can be used to offset future year premium increases. Similarly, some health insurance programs make adjustments to premiums for subsequent years that account for any significant discrepancy between prior year premiums and program costs. The law governing TRS contains no provision for the establishment of a reserve account or for methods of increasing or decreasing premiums, after they are set, to address differences between premiums and costs in prior years.

DOD’s estimated costs of providing TRS benefits were about 11 times higher than its reported costs. DOD’s cost projections were too high largely because it overestimated the number of reservists who would enroll in TRS as well as the associated cost per plan of providing benefits through the program. DOD officials told us that they considered TRS cost and enrollment data when developing future year projections of program costs and enrollment levels, but they chose not to use these data as part of their projections because they are uncertain that prior year enrollment and cost data are indicative of future year costs and enrollment levels.

DOD significantly overestimated the costs of providing benefits through TRS. Prior to TRS’s implementation, DOD estimated that total costs of providing benefits through the program would amount to about $70 million in fiscal year 2005 and about $442 million in fiscal year 2006. In contrast, reported costs in those years only amounted to about $5 million and about $40 million, respectively. DOD estimated the program’s likely costs by multiplying the number of TRS plans that it projected would be purchased by DOD’s estimated cost per plan for individual and family plans. DOD estimated that its cost per plan would be equal to the total TRS premium minus the portion of the premium paid by enrollees.

The number of reservists who purchased TRS coverage has been significantly lower than DOD projected, and as a result TRS program costs have also been lower than expected. DOD projected that about 114,000 reservists would purchase individual or family plans by 2007; however, as of June 2007 only about 11,500—or about 10 percent—of that number had purchased TRS plans. Over 90 percent of TRS enrollment had been for coverage under tier 1, which offered the lowest enrollee premium contributions of the three tiers in existence at the time covered by our analysis. Very few reservists signed up for coverage under tier 3, which had the highest enrollee premium contributions of the three tiers. (See table 3.)
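As described above, DOD's pre-implementation estimate reduces to a simple model: the projected number of plans purchased multiplied by an assumed per-plan cost, where the per-plan cost is the total premium minus the enrollee-paid portion. The sketch below illustrates only that structure. The plan counts are hypothetical placeholders, since DOD's category-level projection inputs are not reproduced in this report, and the 28 percent share applies to tier 1 (tiers 2 and 3 carried higher enrollee portions).

```python
# Minimal sketch of DOD's pre-implementation cost-projection model
# described above: projected plans x estimated DOD cost per plan,
# where DOD's cost per plan = total premium - enrollee-paid portion.
# The plan counts below are hypothetical, for illustration only.

ENROLLEE_SHARE = 0.28  # tier 1 enrollee portion of the full premium

def dod_cost_per_plan(total_premium: float) -> float:
    """DOD's assumed net cost: the premium portion not paid by enrollees."""
    return total_premium * (1 - ENROLLEE_SHARE)

# coverage type -> (hypothetical projected plans, total annual premium)
projections = {
    "individual": (40_000, 3_471),
    "family": (25_000, 10_843),
}

total_cost = sum(
    plans * dod_cost_per_plan(premium)
    for plans, premium in projections.values()
)
print(f"Projected annual program cost: ${total_cost / 1e6:,.1f} million")
```

Because both inputs were overstated—the projected plan counts and the premium-based per-plan cost—the errors compounded in the projected totals, which is consistent with the roughly elevenfold gap between estimated and reported costs noted above.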
DOD estimated the number of reservists who would purchase TRS coverage by dividing the population of reservists who qualify for each of the three tiers into several categories for which it estimated distinct participation rates, based on the premiums these reservists would likely pay for non-DOD health insurance. DOD projected lower enrollment for groups that had access to less expensive health insurance options, such as those who are offered insurance through their employers.

DOD officials believe that enrollment in TRS will increase the longer the program is in place. However, while enrollment in TRS increased moderately through October 2006, it has remained relatively stable from October 2006 through June 2007. (See fig. 2.)

In addition to the estimated number of plans purchased, the other major factor that affected DOD’s projection of overall TRS program costs was its estimate of the cost of providing benefits for each TRS plan. As previously stated, DOD based its estimated cost per plan on the total TRS premium minus the portion of the premium paid by enrollees. Because the premiums have been higher than DOD’s reported costs, DOD’s cost projections have also been too high.

DOD developed a new model to project enrollment levels and program costs under TRS’s single-tiered premium structure that went into effect on October 1, 2007; however, DOD’s projection of future TRS enrollment levels is likely too high. DOD projected that the total number of TRS plans for individual and family coverage would be approximately 64,000 in fiscal year 2008 at a cost to the department of about $381 million for that year. However, actual TRS enrollment data to date suggest that total TRS enrollment—and therefore program costs—are unlikely to be as high as DOD projected. As of June 2007, there were about 11,500 TRS plans—well below DOD’s projection of about 114,000. Enrollment will almost certainly increase to some extent because reservists who previously only qualified for tier 2 or tier 3 of the program—which required enrollees to pay a larger portion of the premium—have qualified for the significantly lower tier 1 enrollee premiums since October 1, 2007. However, the degree to which it will increase is not clear.

DOD officials told us that they considered TRS cost and enrollment data when developing future year projections of program costs and enrollment levels, but they chose not to use these data as part of their projections because of uncertainty about whether they would provide an accurate indication of likely future experience. DOD’s past enrollment projections, made without the benefit of prior year enrollment data, were significantly higher than actual enrollment levels.

Although DOD intended that TRS premiums would be equal to the expected costs per plan of providing the benefit, DOD set premiums for the program based on BCBS premiums that proved to be significantly higher than the program’s average reported costs per plan in 2006. Reservists’ portion of TRS premiums would have been lower in 2006 if DOD had aligned premiums with the cost of providing TRS benefits. DOD officials told us that the department planned to continue basing TRS premium adjustments on BCBS premium adjustments in the immediate future, but the regulation governing TRS premium adjustments allows for the possibility that the department might change its methodology at some point in the future.
However, because TRS premiums were higher than the average costs per plan in 2006, continuing to adjust TRS premiums based on BCBS premium adjustments could widen the gap between TRS premiums and the average costs per plan. The discrepancy between TRS premiums and the reported program costs per plan results from the approach DOD used in setting TRS premiums. Basing TRS premiums on BCBS premiums is problematic because of several dissimilarities between the two programs. Most important, the average cost data now available suggest that TRS enrollees have incurred significantly lower health care costs than BCBS enrollees, even after adjusting for certain demographic characteristics. In addition, BCBS premiums may be based on more than program costs, whereas TRS premiums are intended to cover only costs. Basing TRS premiums on BCBS premiums may have been reasonable at the time that TRS was first implemented in 2005 due to the lack of available data on the cost of providing benefits through TRS. However, cost data that reflect actual experience under the program are now becoming available, and limitations associated with them should decrease over time as DOD gains more experience with the program and more reservists enroll in it. These data will provide DOD with an improved basis for setting premiums in future years, and allow the department to eventually eliminate its reliance on BCBS premiums. Nonetheless, due to the uncertainty associated with predicting future health care costs, premiums are unlikely to exactly match program costs, even when they are based on cost data from prior years. Other insurance programs have methods to address discrepancies between premiums and program costs, which are not provided to DOD in the law governing TRS. DOD has also had difficulty accurately estimating the likely cost of providing TRS benefits in large part because it overestimated the number of reservists who would likely purchase TRS coverage. Over time, the availability of actual cost and enrollment data should help DOD improve its projections for future years.

With the goal of eventually eliminating reliance on BCBS premiums and to better align premiums with the costs of providing TRS health care benefits, we recommend that the Secretary of Defense direct the Assistant Secretary for Health Affairs to stop basing TRS premium adjustments only on BCBS premium adjustments and use the reported costs of providing benefits through the TRS program when adjusting TRS premiums in future years as limitations associated with the reported cost data decrease. We also recommend that DOD explore options for addressing instances in which premiums have been either significantly higher or lower than program costs in prior years, including seeking legislative authority as necessary.

We received written comments on a draft of this report from DOD. DOD stated that it concurs with our conclusions and recommendations and that it is committed to improving the accuracy of TRS premium projections. It further stated that our recommendations are consistent with DOD’s strategy to evolve the process, procedures, and analytical framework used to adjust TRS premiums as the quality and quantity of reported cost data improve. DOD’s written comments are reprinted in appendix III.

We are sending copies of this report to the Secretary of Defense and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The John Warner National Defense Authorization Act (NDAA) for Fiscal Year 2007 required that we describe how increases in TRICARE Reserve Select (TRS) premiums compare with the Department of Defense’s (DOD) annual rate of medical care price inflation. As discussed with the committees of jurisdiction, this appendix compares DOD’s January 2006 TRS premium increase and DOD’s proposed January 2007 TRS premium increase with DOD’s estimated annual rate of medical care price inflation in fiscal years 2005 and 2006 as well as the medical component of the Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W).

Premiums for TRS were first established when the program was implemented in April 2005. To keep pace with rising health care costs, DOD originally designed TRS premiums so that they are adjusted each year based on annual adjustments in the Federal Employees Health Benefits Program’s Blue Cross and Blue Shield (BCBS) Standard plan premiums. DOD planned to continue using this method to adjust premiums in the immediate future, although program regulations allow some flexibility in setting the premiums. Accordingly, in line with BCBS, TRS premiums increased by 8.5 percent in January 2006. Based on increases in BCBS, TRS premiums would have increased by 1 percent in January 2007. However, the NDAA for Fiscal Year 2007 froze 2007 premiums through September 30, 2007, at the rates for calendar year 2006.

DOD calculated its average annual rate of medical care inflation to be about 4.9 percent in fiscal year 2005 and about 4.7 percent in fiscal year 2006. DOD did not develop these estimates of inflation based on its own spending. Instead, DOD based the estimates on inflation rates provided annually by the Office of Management and Budget for the various components of the TRICARE operating budget, such as military personnel, private sector health care, and pharmacy. In contrast, the medical component of the CPI-W increased at lower rates than DOD’s rate of medical care price inflation: it increased by about 4.1 percent in 2005 and about 4.2 percent in 2006. The CPI-W medical component is based on medical expenses, but it is problematic to compare with DOD’s estimated rate of medical care inflation because it reflects only out-of-pocket medical expenditures paid by consumers, including health insurance premiums, and excludes the medical expenditures paid by public and private insurance programs.

Comparing premium growth trends with DOD’s annual rate of medical care price inflation and the medical care component of the CPI-W is problematic because of differences in each measurement. Unlike medical care price inflation, premium growth may reflect factors such as changes in the comprehensiveness of the policy, changes in the ratio of premiums collected to benefits paid, changes in costs because of increased utilization of health care services, contributions to or withdrawals from plan reserves, and profits.
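With the caveat just noted—that these measures capture different things and are not directly comparable—a few lines of Python are enough to put the cited rates on a common cumulative basis over the two-year period, purely for illustration.

```python
# Cumulative two-year growth implied by the rates cited in this appendix.
# As noted above, the measures differ in what they capture, so this is
# illustrative only. The 1 percent TRS increase for 2007 was scheduled
# but never implemented because of the statutory premium freeze.

measures = {
    "TRS premiums (2006 actual, 2007 scheduled)": (0.085, 0.010),
    "DOD medical care price inflation (FY2005, FY2006)": (0.049, 0.047),
    "CPI-W medical component (2005, 2006)": (0.041, 0.042),
}

for name, rates in measures.items():
    cumulative = 1.0
    for rate in rates:
        cumulative *= 1 + rate  # compound each year's growth
    print(f"{name}: {cumulative - 1:.1%} cumulative growth")
```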
To compare the annual TRS premiums established by DOD to the reported average costs per plan of providing benefits under TRS in 2006, we reviewed DOD’s reported TRS enrollment data and data on the cost of providing TRS benefits through TRICARE-authorized civilian providers or hospitals, data on the administrative costs associated with providing TRS benefits, and data on the costs of providing TRS benefits through military treatment facilities (MTF). Using DOD’s data, we calculated the average cost per TRS plan of providing individual and family coverage as the sum of the reported costs divided by the average number of TRS plans. We also reviewed legislation relevant to the TRS program and literature on setting health insurance premiums and interviewed several experts from the fields of health economics and finance and DOD officials in the TRICARE Management Activity and the Office of the Assistant Secretary for Health Affairs. We limited our analysis to calendar year 2006 because some 2007 data are still incomplete and because 2005 average cost data in some months are based on a very small number of enrollees. At the time covered by our analysis, TRS included three tiers of eligibility, with enrollees paying different portions of the premium based on the tier for which they qualified. We limited our analysis to tier 1 because it included over 90 percent of TRS plans and because tier 1 enrollee premium levels have applied to the entire TRS program since October 2007. We are unable to report the average cost per plan for tiers 2 and 3 separately, due to the low number of enrollees in these tiers.

To compare DOD’s projected costs for the TRS program before implementation to DOD’s reported costs for the program in 2005 and 2006, we reviewed the analyses prepared by DOD before TRS’s implementation that projected (1) the number of individual and family plans in each tier of the TRS program and (2) the costs per plan of providing the TRS benefit. These projections were the two major factors used by DOD to estimate TRS costs. We compared these data with reported TRS enrollment and cost data from April 2005 through June 2007. In reporting the results of our comparison we use cost data through 2006 only, because some cost data for 2007 were incomplete. We also reviewed DOD internal documents and interviewed DOD officials.

To determine the average cost of providing benefits under TRS for 2006—for individual and family plans—we reviewed TRS enrollment data and TRS purchased care cost data, administrative cost data, and data on the costs of providing TRS benefits through MTFs, each of which was provided to us by DOD. DOD officials provided TRS enrollment data to us in the form of multiple reports from the Defense Enrollment Eligibility Reporting System for each month from May 2005 through June 2007. Each report lists the number of TRS plans and enrollees in individual and family plans broken down by tier. Using these reports, we calculated the average number of TRS plans and enrollees in each month. For each month, from May 2005 through June 2007, we calculated the total costs of providing benefits under TRS by adding the cost components reported by DOD, which consist of purchased care costs, MTF costs, and administrative costs. Administrative costs were further divided among costs associated with claims processing and separate administrative fees levied by certain TRICARE managed care support contractors for each enrollee in each month.
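The calculation described in this appendix—summing the monthly cost components, dividing by the average number of plans, and (as described in the next paragraph) weighting the monthly averages by enrollment—can be sketched in a few lines of Python. The monthly figures below are placeholders, not DOD data.

```python
# Sketch of the average-cost-per-plan methodology described in this
# appendix. Monthly average cost per plan = (purchased care + MTF +
# administrative costs) / average number of plans; the annual figure
# weights each month's average by enrollment. All figures below are
# placeholders, not DOD data.

monthly_data = [
    # (purchased_care, mtf_costs, admin_costs, avg_plans)
    (1_100_000, 300_000, 80_000, 9_000),
    (1_250_000, 320_000, 85_000, 9_400),
    (1_300_000, 340_000, 90_000, 9_800),
]

weighted_total = 0.0
total_plan_months = 0
for purchased, mtf, admin, plans in monthly_data:
    monthly_avg = (purchased + mtf + admin) / plans
    weighted_total += monthly_avg * plans  # weight month by enrollment
    total_plan_months += plans

annual_avg = weighted_total / total_plan_months
print(f"Enrollment-weighted average cost per plan: ${annual_avg:,.2f}")
```

Note that weighting each monthly average by that month's plan count and dividing by the total plan-months is equivalent to dividing total annual costs by total plan-months, which is why this weighting reflects actual per-plan experience.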
For each month, we calculated the average cost per TRS plan for individual and family coverage by dividing the total costs of providing benefits under TRS by the average number of TRS plans. We determined the average cost of providing benefits under TRS in 2006—for individual and family plans—by summing the monthly averages and weighting them by enrollment in each month.

To ensure that the DOD data were sufficiently reliable for our analyses, we conducted detailed data reliability assessments of the data sets that we used. We restricted these assessments, however, to the specific variables that were pertinent to our analyses. We reviewed DOD data that we determined to be relevant to our findings to assess their quality and methodological soundness. Our review consisted of (1) examining documents that describe the respective data, (2) manually and electronically checking the data for obvious errors and missing values, (3) interviewing DOD officials to inquire about concerns we uncovered, and (4) interviewing DOD officials about internal controls in place to ensure that data are complete and accurate. Our review revealed minor inconsistencies in DOD’s data that we reported to DOD officials. Overall, however, we found that all of the data sets used in this report were sufficiently reliable for use in our analyses. However, we did not independently verify DOD’s figures.

We conducted our work from May 2007 through October 2007 in accordance with generally accepted government auditing standards.

In addition to the contact named above, Thomas Conahan, Assistant Director; Krister Friday; Adrienne Griffin; William Simerl; and Michael Zose made key contributions to this report.
The Department of Defense's (DOD) TRICARE Reserve Select (TRS) program allows most reservists to purchase coverage under TRICARE, the military health insurance program, when not on active duty. DOD intends to set premiums at a level equal to the expected costs of providing TRS benefits. The National Defense Authorization Act for Fiscal Year 2007 required GAO to review TRS costs. As discussed with the committees of jurisdiction, GAO compared (1) the TRS premiums established by DOD to the reported costs of providing benefits under TRS in 2006 and (2) DOD's projected costs for TRS before implementation to DOD's reported costs for the program in 2005 and 2006. To do this work, GAO examined DOD analyses and interviewed DOD officials and external experts.

In 2006, the premium for both individual and family coverage under TRS--which DOD based on Blue Cross and Blue Shield (BCBS) premiums--exceeded the reported average cost per plan of providing TRICARE benefits through the program. TRS currently serves less than 1 percent of the overall TRICARE population, and unlike most other TRICARE beneficiaries, TRS enrollees pay a premium to receive health care coverage. At the time of GAO's analysis, TRS consisted of three tiers, established by law, with reservists in each tier paying different portions of the total premium, based on the tier for which they qualified. Over 90 percent of reservists who purchased TRS coverage enrolled in tier 1. The premium for individual coverage under tier 1 was 72 percent higher than the average cost per plan of providing benefits through the program. Similarly, the premium for family coverage under tier 1 was 45 percent higher than the average cost per plan of providing benefits. DOD based TRS premiums on BCBS premiums because, at the time DOD was developing TRS, actual data on the costs of TRS did not exist; however, these data are now available. Had DOD been successful in establishing premiums that were equal to the cost of providing benefits in 2006, the portion of the premium paid by enrollees in tier 1--which is set by law to cover 28 percent of the full premium--would have been lower that year. Reasons that TRS premiums did not align with benefit costs included differences between the TRS and BCBS populations and differences in the way the two programs are designed, which DOD did not consider in its methodology. According to experts, the most successful methods for aligning premiums with actual program costs involve using program cost data when setting premiums. The regulation governing TRS premium adjustments allows DOD to use either BCBS premiums or other means as the basis for TRS premiums. However, DOD officials told GAO that they plan to continue, at least for the near future, to base TRS premiums on BCBS premiums because of limitations associated with using currently available data to predict future TRS costs. However, these limitations should decrease over time as DOD gains more experience with the program and enrollment increases. Nonetheless, due to the uncertainty associated with predicting future health care costs, premiums are unlikely to exactly match program costs, even when they are based on cost data from prior years. Other insurance programs have methods to address differences between premiums and program costs, which are not provided to DOD in the law governing TRS.

DOD overestimated the total cost of providing benefits through TRS.
While the department projected that its total costs would amount to about $70 million in fiscal year 2005 and about $442 million in fiscal year 2006, DOD's reported costs in those years were about $5 million and about $40 million, respectively. DOD's cost projections were too high largely because it overestimated the number of reservists who would purchase TRS and the associated cost per plan of providing TRS benefits. DOD officials told GAO that they chose not to use TRS cost and enrollment data when projecting future year program costs and enrollment levels because of uncertainty about whether they would provide an accurate indication of future experience.
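To make the tier 1 findings above concrete, the following sketch works through the arithmetic with an invented monthly benefit cost; only the 72 percent premium gap and the statutory 28 percent enrollee share are taken from the findings, and the dollar figure is hypothetical.

```python
# Worked example of the tier 1 individual-coverage finding above.
# The $200 benefit cost is invented; 72% and 28% come from the report.

avg_cost_per_plan = 200.00                 # hypothetical monthly cost of benefits
full_premium = avg_cost_per_plan * 1.72    # premium was 72% above the average cost
enrollee_share = full_premium * 0.28       # tier 1 enrollees pay 28% of the premium

# Had the full premium equaled the cost of benefits, the tier 1
# enrollee share would have been proportionally lower.
aligned_share = avg_cost_per_plan * 0.28

print(f"Full premium:              ${full_premium:.2f}")    # $344.00
print(f"Tier 1 share (actual):     ${enrollee_share:.2f}")  # $96.32
print(f"Tier 1 share (if aligned): ${aligned_share:.2f}")   # $56.00
```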
When EESA was enacted on October 3, 2008, the U.S. financial system was facing a severe crisis that rippled throughout the global economy, moving from the U.S. housing market to the financial markets and affecting an array of financial assets and interbank lending. The crisis restricted access to credit and made the financing on which businesses and individuals depended increasingly difficult to obtain. Further tightening of credit exacerbated a global economic slowdown. During the crisis, Congress, the President, federal regulators, and others took a number of steps to facilitate financial intermediation by banks and the securities markets. While the financial system has generally stabilized and investor confidence has improved, some concerns persist that global demand will remain weak for a significant period of time, and that central bank efforts to combat inflation could disrupt financial markets. Under EESA, Treasury established a variety of TARP programs:

American International Group, Inc. (AIG) Investment Program (formerly known as the Systemically Significant Failing Institutions Program). Provided support to AIG to avoid disruptions to financial markets as the insurer’s financial condition deteriorated.

Asset Guarantee Program. Provided federal government assurances for assets held by financial institutions that were viewed as critical to the functioning of the nation’s financial system. Bank of America and Citigroup were the only two participants in this program.

Automotive Industry Financing Program (AIFP). Aimed to prevent a significant disruption of the American automotive industry through government investments in domestic automakers Chrysler and GM and auto financing companies Chrysler Financial and Ally Financial (formerly known as General Motors Acceptance Corporation, or GMAC).

Capital Assessment Program. Created to provide capital to institutions not able to raise it privately to meet Supervisory Capital Assessment Program—or “stress test”—requirements. This program was never used.

Capital Purchase Program (CPP). The largest TARP program, designed to provide capital investments to financially viable financial institutions. Treasury received preferred shares and subordinated debentures, along with warrants.

Consumer and Business Lending Initiative programs:

Community Development Capital Initiative (CDCI). Provided capital to Community Development Financial Institutions (CDFI) by purchasing preferred stock and subordinated debentures.

Small Business Administration (SBA) 7(a) Securities Purchase Program. Provided liquidity to secondary markets for government-guaranteed small business loans in SBA’s 7(a) loan program.

Term Asset-Backed Securities Loan Facility (TALF). Provided liquidity in securitization markets for various asset classes to improve access to credit for consumers and businesses.

Public-Private Investment Program (PPIP). Created to address the challenge of “legacy assets” as part of Treasury’s efforts to repair balance sheets throughout the financial system. Treasury partnered with private funds that purchased residential and commercial mortgage-backed securities.

Targeted Investment Program (TIP). Sought to foster market stability and strengthen the economy by making case-by-case investments in institutions that Treasury deemed critical to the functioning of the financial system. Bank of America and Citigroup were the only two institutions that participated in this program.

Many of these programs are winding down or have ended.
For example, as of September 30, 2014, Treasury had recovered all debt and equity investments made in PPIP. Furthermore, the AIG Investment Program, the Asset Guarantee Program, the Capital Assessment Program, the SBA 7(a) Securities Purchase Program, and TIP are no longer active, and Treasury no longer holds assets related to these programs. Treasury still holds investments in CPP, CDCI, and AIFP, and the housing assistance programs remain active. The housing programs include the following:

Making Home Affordable (MHA). MHA includes several housing programs, but the cornerstone is the Home Affordable Modification Program (HAMP), under which Treasury shares the cost of reducing monthly payments on first-lien mortgages with mortgage holders/investors and provides other financial incentives to servicers, borrowers, and mortgage holders/investors for loans modified under the program. Several other programs operate under MHA:

Home Affordable Foreclosure Alternatives (HAFA) Program. The HAFA Program offers assistance to homeowners looking to relinquish their homes through a short sale or a deed-in-lieu of foreclosure. Treasury offers incentives to eligible homeowners, servicers, and investors under the program.

Principal Reduction Alternative (PRA). PRA, a companion program to HAMP, requires servicers to evaluate the benefit of principal reduction for mortgages being assessed for a HAMP first-lien loan modification that have a loan-to-value ratio of 115 percent or more and that are not owned or guaranteed by Fannie Mae or Freddie Mac. Servicers are required to evaluate homeowners for PRA when evaluating them for a HAMP first-lien modification but are not required to actually reduce principal as part of the modification.

Second Lien Modification Program (2MP). 2MP provides additional assistance to homeowners receiving a HAMP first-lien permanent modification who have an eligible second lien with participating servicers. When a borrower’s first lien is modified under HAMP, participating program servicers must offer to modify the borrower’s eligible second lien according to a defined protocol. This assistance can result in a modification or even full or partial extinguishment of the second lien. Treasury provides incentive payments to second-lien mortgage holders in the form of a percentage of each dollar in principal reduction on the second lien. Treasury doubled the incentive payments offered to second-lien mortgage holders for 2MP permanent modifications that included principal reduction and had an effective date on or after June 1, 2012.

Government-insured or guaranteed loans (FHA-HAMP and RD-HAMP). The Federal Housing Administration (FHA) and the Department of Agriculture’s Rural Housing Service (RHS) have implemented modification programs similar to HAMP Tier 1 for FHA-insured and RHS-guaranteed first-lien mortgage loans. RD-HAMP provides borrowers with a monthly mortgage payment equal to 31 percent of the homeowner’s monthly gross income, and FHA-HAMP provides for payment reduction based on a formula that considers percentages of gross income (31 percent and 25 percent) and of the current payment (80 percent). Both programs require borrowers to complete a trial payment plan before permanent modification. If a modified FHA-insured or RHS-guaranteed mortgage loan meets Treasury’s eligibility criteria, the borrower and servicer can receive TARP-funded incentive payments from Treasury.

Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund).
The Hardest Hit Fund seeks to help homeowners in the states hit hardest by unemployment and house price declines by funding innovative measures developed by state housing finance agencies and approved by Treasury. By September 2010, Treasury had completed the distribution of $7.6 billion in funds across 18 states and the District of Columbia. States qualified for funding either because their unemployment rates were at or above the national average or because they had experienced housing price declines of 20 percent or more that left some borrowers owing more on their mortgages than the value of their homes. Although the type of assistance provided varies by state, all states use some portion of their funds to help unemployed homeowners make mortgage payments. Some states have programs that reduce principal to help make mortgage payments more affordable, reduce or eliminate borrowers’ second liens, and provide transition assistance to borrowers leaving their homes.

Department of Housing and Urban Development’s (HUD) FHA Short Refinance program (FHA Short Refinance). FHA Short Refinance enables homeowners whose mortgages exceed the value of their homes to refinance into more affordable mortgages.

Treasury continues to make progress in winding down the TARP nonhousing programs. According to Treasury, its decision to exit a program depends on various circumstances, including market conditions and other factors outside the government’s control. Treasury estimates that some nonhousing programs have produced, or will produce, a lifetime income while others have, or are expected to have, a lifetime cost. For example, repayments and income from the federal government’s investments in participating CPP institutions have exceeded the original amounts. Further, Treasury’s estimates of the lifetime costs of CDCI have been falling significantly as participants exit the program. However, under AIFP Treasury sold its stock in GM at a loss, although it will make a profit on the sale of its investments in Ally Financial. Treasury expects lifetime income from two other programs—TALF and PPIP. As of September 30, 2014, Treasury had completed the wind down of four of nine TARP nonhousing programs that were once active. Treasury has stated that when deciding to sell assets and exit TARP programs, it strives to protect taxpayer investments and maximize overall investment returns within competing constraints; promote the stability of financial markets and the economy by preventing disruptions to the financial system; bolster market confidence in order to encourage private capital investment; and dispose of investments as soon as practicable. We and others have noted that these goals can at times conflict. For example, we previously reported that deciding to unwind some of its assistance to GM by participating in an initial public offering (IPO) presented Treasury with a conflict between maximizing taxpayer returns and exiting as soon as practicable. Holding its shares longer could have meant realizing greater gains for the taxpayer but only if the stock appreciated in value. By participating in GM’s November 2010 IPO, Treasury tried to fulfill both goals, selling almost half of its shares at an early opportunity. Treasury officials stated that although they strove to balance these competing goals, they had no strict formula for doing so. Rather, they ultimately relied on the best available information in deciding when to start exiting this program.
Moreover, in some cases Treasury’s ability to exercise control over the timing of its exit from TARP programs is limited. For example, Treasury has limited control over its exit from CDCI because the program’s exit depends on when each financial institution decides to repay Treasury’s investments. As shown in table 1, Treasury estimates that several of the TARP nonhousing programs will provide or have provided income over their lifetimes, while others will incur a lifetime cost. Though direct costs for TARP—including potential lifetime income—can be estimated and quantified, certain indirect costs connected to the government’s assistance are less easily measured. For example, as we have previously concluded, when the government provides assistance to the private sector, it may increase moral hazard that would then need to be mitigated. That is, in the face of government assistance, private firms are motivated to take risks they might not take in the absence of such assistance, or creditors may not price into their extensions of credit the full risk assumed by the firm, believing that the government would provide assistance should the firm become distressed. Government interventions can also have consequences for the banking industry as a whole, including institutions that do not receive bailout funds. For instance, investors may perceive the debt associated with institutions that received government assistance as being less risky because of the potential for future government bailouts. This perception could lead them to choose to invest in such assisted institutions instead of those that did not receive assistance. Treasury continues to wind down CPP, the largest TARP investment program, which was designed to provide capital investments to viable financial institutions, and thus far, repayments and income have exceeded the total amount of original outlays. As we have reported, Treasury disbursed $204.9 billion to 707 financial institutions nationwide from October 2008 through December 2009. As of September 30, 2014, Treasury had received $226.4 billion in repayments and income from its CPP investments, exceeding the amount originally disbursed by $21.5 billion (see fig. 1). The repayments and income amounts include $199.4 billion in repayments and sales of original CPP investments, as well as $12.1 billion in dividends and interest, and $14.9 billion in proceeds in excess of costs, which includes $8.0 billion from the sale of warrants. After accounting for write-offs and realized losses from sales totaling $4.9 billion, CPP had $625 million in outstanding investments as of September 30, 2014. Treasury estimated a lifetime income of $16.1 billion for CPP as of September 30, 2014. As of September 30, 2014, a total of 664 of the 707 institutions (94 percent) that originally participated in CPP had exited the program. Of these, 253 had repurchased their preferred shares or subordinated debentures in full (see fig. 2). Another 165 institutions refinanced their shares through other federal programs: 28 through CDCI and 137 through another Treasury fund—separate from TARP—the Small Business Lending Fund (SBLF). A further 212 institutions had their investments sold through auction or other sales, and 30 institutions went into bankruptcy or receivership. The remaining 4 merged with another institution.
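The CPP totals reported above can be cross-checked directly; this short sketch reconciles only the dollar figures and exit counts given in this section (amounts in billions, with small differences attributable to rounding of the published figures).

```python
# Reconciliation of the CPP figures reported above (dollars in billions).
disbursed  = 204.9  # disbursed to 707 institutions, Oct. 2008 - Dec. 2009
repayments = 199.4  # repayments and sales of original CPP investments
dividends  = 12.1   # dividends and interest
excess     = 14.9   # proceeds in excess of costs (includes $8.0B from warrants)
write_offs = 4.9    # write-offs and realized losses from sales

recovered = repayments + dividends + excess
print(f"Repayments and income: ${recovered:.1f}B")              # $226.4B
print(f"Excess over disbursed: ${recovered - disbursed:.1f}B")  # $21.5B

# Outstanding balance; reported as $625 million, with the difference
# reflecting rounding in the published figures.
print(f"Outstanding: ${disbursed - repayments - write_offs:.1f}B")  # ~$0.6B

# Exit counts for the 664 institutions that had left the program.
exits = {"repurchased": 253, "refinanced": 165,
         "auctioned or sold": 212, "bankruptcy/receivership": 30, "merged": 4}
assert sum(exits.values()) == 664
```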
Treasury created the CDCI program to help mitigate the adverse impacts of the financial crisis on communities underserved by traditional banks by providing capital to CDFIs—banks and credit unions—that provide financial services to low- and moderate-income, minority, and other underserved communities. CDCI, which is structured much like CPP, provides capital to financial institutions by purchasing preferred equity and subordinated debt from them. As of September 30, 2014, 68 of the original 84 CDCI institutions remained in the program. Of the 16 institutions that had exited the program, 15 had done so through repayment and 1 had done so as a result of its subsidiary bank’s failure. Three of the 68 remaining institutions had begun to repay the principal on investments they had received, while the other remaining institutions had paid only dividends and interest. As of September 30, 2014, the outstanding investment balance for CDCI was $465 million of an original investment of $570 million. As of the same date, Treasury had received approximately $98 million in principal repayments from CDCI participants, and had written off approximately $7 million. CDCI participants have also paid approximately $43 million in dividends and interest. Treasury has lowered its estimates of the program’s lifetime cost over the last 3 years as market conditions have improved and institutions have begun to repay their investments. As of November 2010, Treasury estimated the program’s lifetime cost at about $290 million. As of September 30, 2014, Treasury estimated the program’s lifetime cost at $110 million. As we have reported, Treasury continues to monitor the performance of CDCI participants because their financial strength will affect their ability to repay Treasury. Generally, the number of CDCI institutions with missed quarterly dividend or interest payments has been low, representing, on average, about 4 percent of all remaining institutions over the life of the program. The percentage of remaining institutions with missed payments has ranged from 0 to about 7 percent (0 to 6 institutions). Since November 2010 (the first quarter that dividend and interest payments were due), nine institutions (seven banks and two credit unions) have missed at least one quarterly payment. Of those institutions, three banks have missed at least eight payments, the threshold at which Treasury has the right to elect directors to their boards. However, as of September 30, 2014, Treasury had not appointed directors to the board of any CDCI banks, but it had sent an observer to one bank and asked to send an observer to a second. In an effort to preserve institutions’ capital and promote safety and soundness, federal and state regulators generally do not allow institutions in troubled condition to make dividend payments. As of September 30, 2014, Treasury was continuing to assess exit alternatives for the CDCI program. Treasury had not yet determined a final exit strategy and associated timing and has limited control over participants’ decision to exit. As we previously reported, CDCI participants said that the 2 percent dividend rate they would pay on investments until 2018 was lower than the cost of private capital and that access to capital would be a major factor in their decision to repay their CDCI investments. The dividend rate will increase from 2 percent to 9 percent in 2018 and may be a key factor for many CDCI participants in the decision to stay in or exit the program.
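A similar cross-check applies to the CDCI balances above; this sketch uses only the dollar amounts reported in this section (in millions).

```python
# Reconciliation of the CDCI figures reported above (dollars in millions).
original_investment = 570
principal_repaid    = 98   # approximate principal repayments received
written_off         = 7    # approximate write-offs

outstanding = original_investment - principal_repaid - written_off
print(f"Outstanding CDCI balance: ${outstanding}M")  # matches the reported $465M

# Dividends and interest (about $43M) are income to Treasury and
# do not reduce the outstanding principal balance.
```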
Treasury disbursed $79.7 billion through AIFP from December 2008 through June 2009 to support two automakers, Chrysler and GM, and their automotive finance companies, Chrysler Financial and Ally Financial (then known as GMAC). As of September 30, 2014, the government had recovered $68.9 billion (86.3 percent) of the funds disbursed through the program, and expects AIFP to have a lifetime cost of $12.2 billion.

Chrysler. In May 2011, Chrysler repaid its outstanding TARP loans, 6 years ahead of schedule. Chrysler returned more than $11.1 billion of the $12.4 billion committed to it through principal repayments, interest, and cancelled commitments. Treasury has fully exited its investment in Chrysler Group under TARP.

GM. On December 9, 2013, Treasury fully exited its investment in GM. Treasury completed its fourth and final pre-arranged trading plan for the sale of its remaining 31.1 million shares. Treasury recovered a total of $39.7 billion from its original investment of $51.0 billion in GM.

Chrysler Financial. In July 2009, Chrysler Financial repaid its $1.5 billion in TARP loans plus around $7 million in interest. Chrysler Financial has since ceased operations.

Also through AIFP, Treasury provided $17.2 billion of assistance to Ally Financial, a large financial holding company, whose primary business is auto financing. To provide this assistance, Treasury purchased senior equity, mandatory convertible preferred shares, and trust preferred securities, some of which Treasury ultimately converted into common shares. By December 2010, Treasury held common shares totaling 74 percent of Ally Financial as well as $5.9 billion in mandatory convertible preferred shares. Treasury retained this level of ownership through the third quarter of 2013. Then, in late 2013, three key regulatory and legal developments, described below, helped Treasury accelerate the wind down of its investments in Ally Financial. As a result, in November 2013, Ally Financial made cash payments totaling $5.9 billion to repurchase all remaining mandatory convertible preferred shares outstanding and terminate an existing share adjustment provision. Additionally, Ally Financial issued $1.3 billion of common equity to third-party investors, reducing Treasury’s ownership share from 74 to 63 percent. The Board of Governors of the Federal Reserve System (Federal Reserve) did not object to Ally Financial’s resubmitted capital plan, allowing Ally Financial to complete the private placement of common shares valued at $1.3 billion, which it had announced in August 2013. The private placement, intended in part to help finance the repurchase of the $5.9 billion remaining Treasury-owned mandatory convertible preferred shares, was completed in November 2013, and the Treasury shares were repurchased. In December 2013, the bankruptcy of Ally Financial’s subsidiary ResCap was substantially resolved. The final bankruptcy agreement included a settlement that the bankruptcy court judge had approved in June 2013. It released Ally Financial from any and all legal claims by ResCap and, with some exceptions, all other third parties, in exchange for $2.1 billion in cash from Ally Financial and its insurers.
Also in December 2013, Ally Financial obtained Federal Reserve approval to convert from a bank holding company to a financial holding company, enabling it to continue insurance underwriting and other nonbanking activities. Because of the positive results of the March 2014 stress test and Comprehensive Capital Analysis and Review (CCAR) conducted by the Federal Reserve on Ally Financial, Treasury decided to further reduce its ownership share. The day after the release of the CCAR results in March 2014, Treasury announced that it would sell Ally Financial common stock in an initial public offering (IPO). In April 2014, Treasury completed the IPO of 95 million shares at $25 per share. The $2.4 billion sale reduced Treasury’s ownership share to approximately 17 percent. Following the IPO, Ally Financial became a publicly held company. In May 2014, Treasury received $181 million from the sale of additional shares after underwriters exercised the option to purchase an additional 7 million shares from Treasury at the IPO price. This additional sale reduced Treasury’s ownership share to approximately 16 percent. In September 2014, Treasury announced the completion of its first trading strategy for Ally common stock. With this plan, Treasury sold 8.89 million shares and recovered approximately $218.7 million, further reducing its ownership share to 13.8 percent (around 64 million shares of common stock). On September 12, 2014, Treasury announced that it would continue to wind down its investment in Ally by selling additional shares of common stock through its second pre-defined written trading plan, with sales beginning that day. As of September 30, 2014, Treasury had recovered approximately $18.1 billion in sales proceeds and interest and dividend payments on its total $17.2 billion investment in Ally Financial. On December 19, 2014, Treasury announced an agreement to sell all of its remaining common shares in Ally Financial. Treasury reported that it recovered $19.6 billion from Ally, which is approximately $2.4 billion more than its initial investment of $17.2 billion. This resulted in the exit of Treasury’s last outstanding investments in AIFP. The Federal Reserve established TALF in an effort to reopen the securitization markets and improve access to credit for consumers and businesses. Treasury committed funds to the TALF special-purpose vehicle, TALF LLC, established by the Federal Reserve Bank of New York (FRBNY) to provide credit protection to FRBNY for TALF loans should borrowers fail to repay and surrender the asset-backed securities (ABS) or commercial mortgage-backed securities (CMBS) pledged as collateral. Treasury disbursed $100 million for start-up costs to TALF LLC (see fig. 5). TALF LLC repaid Treasury’s initial $100 million disbursement in 2013, and as of September 30, 2014, had accumulated $759 million in income from the TALF loans, of which $632 million was paid to Treasury as contingent interest. On November 6, 2014, the net portfolio holdings of TALF LLC were reduced to zero, and TALF LLC together with the other TALF agreements were terminated, effectively closing the TALF program. To create PPIP, Treasury partnered with private funds that purchased troubled mortgage-related assets (“legacy assets”) from financial institutions in order to help repair balance sheets throughout the financial system.
The program’s Public-Private Investment Funds (PPIF) each had a 3-year investment period that began at each fund’s inception date, and at the completion of the investment period each fund had 5 years to completely divest. Treasury provided these PPIFs with equity and loan commitments of $7.4 billion and $14.7 billion, respectively, but disbursed a total of $18.6 billion (see fig. 6). On September 30, 2014, Treasury received its final $1.8 million distribution from the PPIP. With this distribution, all nine PPIFs had completely divested their assets, and Treasury had recovered a total of $22.5 billion. According to Treasury officials, as of December 29, 2014, all nine PPIFs had formally provided Treasury official termination notices and the PPIP program had been effectively wound down. As of September 30, 2014, Treasury had disbursed $13.7 billion (36 percent) of the $38.5 billion in TARP funds that had been allocated to support housing programs. The number of new HAMP permanent modifications added on a quarterly basis rose slightly in early 2013 but has declined in 2014, falling to 29,000 in the third quarter, the lowest level since the program’s inception. Treasury has taken steps to help more borrowers, including by extending the deadline for HAMP applications for a third time. Treasury’s Office of Homeownership Preservation within OFS is tasked with finding ways to help prevent avoidable foreclosures and preserve homeownership. Treasury established three initiatives under TARP to address these issues: MHA, the Hardest Hit Fund, and FHA Short Refinance. As of September 30, 2014, Treasury had disbursed approximately $13.7 billion (36 percent) of the $38.5 billion in TARP housing funds, though the amount of disbursements varied across the three programs (see fig. 7). For example, of the $29.8 billion dedicated to MHA, the largest TARP-funded housing program, Treasury had disbursed $9.3 billion (31 percent) as of September 2014. In the case of the Hardest Hit Fund, $4.5 billion (59 percent) of the $7.6 billion allocated had been disbursed as of that date. In contrast, only $0.01 billion (0.13 percent) had been disbursed for the FHA Short Refinance program. As we have reported, Treasury officials said that they anticipated using all of the remaining MHA funds, and in April 2014, the Congressional Budget Office (CBO) increased its estimate of likely disbursements under TARP-funded housing programs because of extensions of the MHA program. But CBO has continued to project an $11 billion surplus for the TARP-funded housing programs because it anticipates that fewer households will participate in the housing programs. Treasury will continue to disburse TARP funds under the housing programs for several more years. Specifically, homeowners have until at least December 31, 2016, to apply for assistance under MHA programs, and Treasury will continue to pay incentives for up to 6 years after the last permanent modification begins. Treasury’s obligation under FHA Short Refinance will continue until September 2020. Unlike TARP expenditures under some other programs, such as those that provided capital infusions to banks, expenditures under these programs are generally direct outlays of funds with no provision for repayment. As of September 30, 2014, the estimated lifetime cost for the housing programs was $37.4 billion.
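The program-level housing figures above roll up to the reported totals; the following sketch uses only the disbursement amounts given in this section (in billions; the small gap from the $13.7 billion total reflects rounding in the published program-level figures).

```python
# Roll-up of the TARP housing disbursements reported above
# (dollars in billions, as of September 30, 2014).
disbursed = {
    "Making Home Affordable": 9.3,
    "Hardest Hit Fund": 4.5,
    "FHA Short Refinance": 0.01,
}
total_allocated = 38.5

total_disbursed = sum(disbursed.values())
print(f"Total disbursed: ${total_disbursed:.1f}B "
      f"({total_disbursed / total_allocated:.0%} of the allocation)")
# Prints roughly $13.8B (36%); the report's $13.7B total reflects
# rounding of the underlying program-level figures.
```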
The centerpiece of Treasury’s MHA program is HAMP, which seeks to help eligible borrowers facing financial distress avoid foreclosure by reducing their monthly first-lien mortgage payments to more affordable levels. Treasury announced HAMP (which originally included what is now called HAMP Tier 1) on February 18, 2009. At that time, Treasury projected that the program would help up to 3 million to 4 million borrowers who were at risk of default and foreclosure. However, we noted then that reaching the projected number of borrowers might be difficult for several reasons. In an effort to reach more borrowers, Treasury expanded HAMP to include HAMP Tier 2, which servicers began implementing in June 2012. Treasury also provides incentive payments to servicers, investors, and borrowers for modifications under HAMP Tier 1 and HAMP Tier 2. Since the HAMP first-lien modification program began in 2009 through September 2014, there have been 2,246,680 trial modifications and 1,416,705 permanent modifications. These modifications resulted in a median monthly mortgage payment reduction of $490. As shown in figure 8, HAMP participation, as measured by trial and permanent modifications started each quarter, peaked in early 2010, generally declined in 2011, and then held relatively steady through the middle of 2013. However, beginning with the third quarter of 2013, the number of new HAMP trial modifications began to decline, falling to 27,000 in the second quarter of 2014, then increasing slightly, to 33,000 in the third quarter of 2014—the most recent quarter for which data are available. During the same period, the number of new HAMP permanent modifications declined steadily, from 46,000 in the third quarter of 2013 to 29,000 in the third quarter of 2014. As we have reported, according to Treasury, the decline in HAMP modifications is a reflection of the shrinking pool of HAMP-eligible mortgages, as evidenced in the declining number of 60-day-plus delinquencies reported by the industry. Treasury has taken steps to increase HAMP participation, including extending the program application deadline and making other program changes. First, in June 2014, Treasury announced the third extension of the program, to at least December 31, 2016. With this extension, Treasury has increased the period for eligible borrowers—including the unemployed and those facing an increase in interest rates—to apply for assistance by 4 years from the initial program deadline of December 31, 2012. However, as we have previously reported, the pool of mortgages eligible for HAMP programs is declining. Second, in September 2014 Treasury, in conjunction with HUD and the Ad Council, launched a new series of public service advertisements (PSA) to raise awareness of the free government resources available through MHA to assist struggling homeowners in avoiding foreclosure. The campaign includes television, print, radio, outdoor (billboards and other signage), and digital PSAs. According to Treasury, since the campaign was initially launched in 2010, media outlets have donated about $137 million in airtime and physical and digital space, and more than 16,000 outdoor or transit ads have been placed nationwide. Treasury officials said that calls to the Homeowner’s HOPE™ Hotline increased by 20 percent during the first week of the campaign, an increase Treasury attributes to the September 2014 campaign effort. Lastly, in late 2014, Treasury released two Supplemental Directives.
The first, Supplemental Directive 14-04, was issued in October 2014 and stated that the interest rate on a HAMP Tier 2 modification would be lowered effective January 1, 2015, to the weekly Freddie Mac Primary Mortgage Market Survey rate minus 50 basis points (down from an adjustment of zero basis points). Treasury is lowering the rate in an effort to increase the population that is potentially eligible for HAMP and provide greater payment reduction. In particular, Treasury believes the lowered rate will allow more HAMP Tier 1 borrowers, who might struggle with an interest rate step-up under Tier 1, to qualify for a HAMP Tier 2 modification. On November 30, 2014, Treasury issued Supplemental Directive 14-05, which made several changes to HAMP that, among other things, will extend the pay-for-performance borrower incentives by an additional year, to a sixth year for the modification, and increase the amount of the incentive payment for that additional year to $5,000, up from $1,000 for years 1 to 5. While previously Tier 2 modifications were not eligible for pay-for-performance incentive payments, the changes in Supplemental Directive 14-05 apply to both Tier 1 and Tier 2 modifications, as well as to FHA-HAMP and RD-HAMP modifications. Among the other programs designed to help borrowers, HAFA has assisted the largest number of borrowers—approximately 169,000—through September 2014. Under HAFA, 162,498 short sales and 6,975 deeds-in-lieu had taken place as of September 2014. As of the same date, PRA had provided an estimated $14.8 billion in principal reduction to borrowers through 163,951 permanent loan modifications, with a median principal reduction of $68,861. Through 2MP, servicers reported starting 141,697 second-lien modifications, of which 38,480 fully extinguished the second lien as of September 2014. A total of 67,708 trial modifications that received Treasury FHA-HAMP incentives had been started as of June 2014, and the median monthly payment reduction for active permanent modifications was $232. However, as of June 2014, only 187 modifications had been made that qualified for RD-HAMP incentives. For the RD-HAMP loans, the median monthly payment reduction for active permanent modifications was $260. As discussed earlier, Treasury extended HAMP and all MHA programs until at least December 31, 2016. However, Treasury officials told us that they might decide at a future date to wind down some programs under MHA at an earlier date or to extend MHA beyond 2016. They added that their decision would be based on market conditions, program volume, and other factors. As of September 30, 2014, six states and the District of Columbia had closed their Hardest Hit Fund application process in anticipation of full commitment of program funds. As of that date, participating states and the District of Columbia had committed a total of $3.4 billion of the $7.6 billion dedicated to the program and had assisted a total of 207,511 homeowners. However, as we have reported, progress in disbursing funds and meeting state-level targets for household participation varied across states. As we also reported, state officials told us that, with Treasury’s help, they had confronted challenges related to staffing and infrastructure, servicer participation, borrower outreach, and program implementation. In terms of an exit strategy and end date for the Hardest Hit Fund, state housing finance agencies must commit funds by December 31, 2017, but can continue to spend committed funds after that date.
According to Treasury officials, currently there are no plans to extend the deadline for committing Hardest Hit Fund monies beyond 2017. However, Treasury will continue to evaluate that deadline over time, taking into account changing market conditions in Hardest Hit Fund areas, program performance, and other factors. Finally, FHA refinanced 4,963 loans between September 2010 and September 2014 through the FHA Short Refinance program. As of September 30, 2014, Treasury would pay a portion of claims in the event of a default for 3,015 of those loans. Through September 2014, Treasury had paid one claim of approximately $48,000 and spent approximately $10 million in administrative costs. The scheduled end date for the FHA Short Refinance program was December 31, 2014. However, on November 14, 2014, FHA extended the program for an additional 2 years, with all loans required to close on or before December 31, 2016. Treasury officials are evaluating whether the extension through 2016 will require an extension of Treasury’s line of credit. We provided a draft of this report to Treasury for review and comment. In its written comments, reproduced in appendix II, Treasury generally concurred with our findings. Treasury stated that it will continue its efforts to wind down the remaining investment programs while protecting taxpayers’ interests and maximizing returns and continue to implement TARP-funded housing programs, primarily through mortgage modifications and other assistance programs. Treasury also provided technical comments that we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees. This report will be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives in this report were to examine the condition and status of (1) nonhousing-related Troubled Asset Relief Program (TARP) programs and (2) TARP-funded housing programs. To assess the condition and status of all the nonhousing-related programs initiated under TARP, we collected and analyzed data about program utilization and assets held, as applicable, focusing primarily on financial information that we had audited in the Office of Financial Stability’s (OFS) financial statements, as of September 30, 2014. In some instances we analyzed more recent, unaudited financial information. The financial information includes the types of assets held in the program, obligations that represent the highest amount obligated for a program (to provide historical information on total obligations), disbursements, and income. We also used OFS cost estimates for TARP that we audited as part of the financial statement audit. As part of the financial statement audit, we tested OFS’s internal controls over financial reporting. The financial information used in this report is sufficiently reliable to assess the condition and status of TARP programs based on the results of our audits of fiscal years 2009 through 2014 financial statements for TARP. Further, we reviewed the Department of the Treasury’s (Treasury) documentation such as press releases and reports on TARP programs and costs.
Also, we interviewed OFS program officials and obtained information from them to determine the current status of each TARP program and to update what is known about exit considerations for TARP programs. In reporting on these programs and their exit considerations, we leveraged our previous TARP reports, as appropriate. In addition, we did the following: For the Capital Purchase Program, we used OFS’s reports to describe the status of the program, including which participants had begun repaying Treasury investments, the number of institutions that had exited the program, and the amount of dividends paid. In addition, we reviewed Treasury’s press releases on the program and interviewed Treasury officials. For the Community Development Capital Initiative, we interviewed program officials to determine what exit concerns Treasury has for the program. To update the status of the Automotive Industry Financing Program and Treasury’s plans for managing its investment in the companies, we reviewed information on Treasury’s plans for overseeing its financial interests in Ally Financial, including Treasury reports. We also interviewed officials from Treasury. For the Term Asset-Backed Securities Loan Facility, we interviewed OFS officials about their role in the program as it continues to unwind. To update the status of the Public-Private Investment Program, we analyzed program quarterly reports, term sheets, and other documentation related to the public-private investment funds. We also interviewed OFS staff responsible for the program to determine the status of the program while it remains in active investment status. To assess the status of TARP-funded housing programs, we reviewed Treasury reports, guidance, and documentation and interviewed Treasury officials, in addition to leveraging our recent work. To determine the status of Treasury’s TARP-funded housing programs, we obtained and reviewed Treasury’s published reports on the programs, as well as guidelines and related updates issued by Treasury for each of the programs. In addition, we obtained information from and interviewed Treasury officials about the status of the TARP-funded housing programs. We conducted this performance audit from September 2014 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A. Nicole Clowers, (202) 512-8678 or [email protected]. In addition to the contacts named above, Marcia Carlsen, Lynda E. Downing, Harry Medina, and Karen Tremba (lead assistant directors); Kristeen McLain (Analyst-in-Charge), Bethany M. Benitez, Emily R. Chalmers, William R. Chatlos, Rachel DeMarcus, Alex Fedell, John A. Karikari, Dragan Matic, Marc Molino, and Jena Y. Sinkfield have made significant contributions to this report. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2014 and 2013 Financial Statements. GAO-15-132R. Washington, D.C.: November 7, 2014. Troubled Asset Relief Program: Treasury Could Better Analyze Data to Improve Oversight of Servicers’ Practices. GAO-15-5. Washington, D.C.: October 6, 2014. Troubled Asset Relief Program: Government’s Exposure to Ally Financial Lessens as Treasury’s Ownership Share Declines. GAO-14-698.
Washington, D.C.: August 5, 2014. Community Development Capital Initiative: Status of the Program and Financial Health of Remaining Participants. GAO-14-579. Washington, D.C.: June 6, 2014. Troubled Asset Relief Program: Status of the Wind Down of the Capital Purchase Program. GAO-14-388. Washington, D.C.: April 7, 2014. Troubled Asset Relief Program: More Efforts Needed on Fair Lending Controls and Access for Non-English Speakers in Housing Programs. GAO-14-117. Washington, D.C.: February 6, 2014. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2013 and 2012 Financial Statements. GAO-14-172R. Washington, D.C.: December 12, 2013. Troubled Asset Relief Program: Status of Treasury’s Investments in General Motors and Ally Financial. GAO-14-6. Washington, D.C.: October 29, 2013. Troubled Asset Relief Program: GAO’s Oversight of the Troubled Asset Relief Program Activities. GAO-13-840R. Washington, D.C.: September 6, 2013. Troubled Asset Relief Program: Treasury’s Use of Auctions to Exit the Capital Purchase Program. GAO-13-630. Washington, D.C.: July 8, 2013. Capital Purchase Program: Status of the Program and Financial Health of Remaining Participants. GAO-13-458. Washington, D.C.: May 7, 2013. Troubled Asset Relief Program: Status of GAO Recommendations to Treasury. GAO-13-324R. Washington, D.C.: March 8, 2013. Troubled Asset Relief Program: Treasury Sees Some Returns as It Exits Programs and Continues to Fund Mortgage Programs. GAO-13-192. Washington, D.C.: January 7, 2013. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2012 and 2011 Financial Statements. GAO-13-126R. Washington, D.C.: November 9, 2012. Treasury Continues to Implement Its Oversight System for Addressing TARP Conflicts of Interest. GAO-12-984R. Washington, D.C.: September 18, 2012. Troubled Asset Relief Program: Further Actions Needed to Enhance Assessments and Transparency of Housing Programs. GAO-12-783. Washington, D.C.: July 19, 2012. Troubled Asset Relief Program: Government’s Exposure to AIG Lessens as Equity Investments Are Sold. GAO-12-574. Washington, D.C.: May 7, 2012. Capital Purchase Program: Revenues Have Exceeded Investments, but Concerns about Outstanding Investments Remain. GAO-12-301. Washington, D.C.: March 8, 2012. Management Report: Improvements Are Needed in Internal Control over Financial Reporting for the Troubled Asset Relief Program. GAO-12-415R. Washington, D.C.: February 13, 2012. Troubled Asset Relief Program: As Treasury Continues to Exit Programs, Opportunities to Enhance Communication on Costs Exist. GAO-12-229. Washington, D.C.: January 9, 2012. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2011 and 2010 Financial Statements. GAO-12-169. Washington, D.C.: November 10, 2011. Troubled Asset Relief Program: Status of GAO Recommendations to Treasury. GAO-11-906R. Washington, D.C.: September 16, 2011. Troubled Asset Relief Program: The Government’s Exposure to AIG Following the Company’s Recapitalization. GAO-11-716. Washington, D.C.: July 28, 2011. Troubled Asset Relief Program: Results of Housing Counselors Survey on Borrowers’ Experiences with the Home Affordable Modification Program. GAO-11-367R. Washington, D.C.: May 26, 2011. Troubled Asset Relief Program: Survey of Housing Counselors about the Home Affordable Modification Program, an E-supplement to GAO-11-367R. GAO-11-368SP. Washington, D.C.: May 26, 2011.
TARP: Treasury’s Exit from GM and Chrysler Highlights Competing Goals, and Results of Support to Auto Communities Are Unclear. GAO-11-471. Washington, D.C.: May 10, 2011. Management Report: Improvements Are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-11-434R. Washington, D.C.: April 18, 2011. Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-476T. Washington, D.C.: March 17, 2011. Troubled Asset Relief Program: Treasury Continues to Face Implementation Challenges and Data Weaknesses in Its Making Home Affordable Program. GAO-11-288. Washington, D.C.: March 17, 2011. Troubled Asset Relief Program: Actions Needed by Treasury to Address Challenges in Implementing Making Home Affordable Programs. GAO-11-338T. Washington, D.C.: March 2, 2011. Troubled Asset Relief Program: Third Quarter 2010 Update of Government Assistance Provided to AIG and Description of Recent Execution of Recapitalization Plan. GAO-11-46. Washington, D.C.: January 20, 2011. Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-74. Washington, D.C.: January 12, 2011. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2010 and 2009 Financial Statements. GAO-11-174. Washington, D.C.: November 15, 2010. Troubled Asset Relief Program: Opportunities Exist to Apply Lessons Learned from the Capital Purchase Program to Similarly Designed Programs and to Improve the Repayment Process. GAO-11-47. Washington, D.C.: October 4, 2010. Troubled Asset Relief Program: Bank Stress Test Offers Lessons as Regulators Take Further Actions to Strengthen Supervisory Oversight. GAO-10-861. Washington, D.C.: September 29, 2010. Financial Assistance: Ongoing Challenges and Guiding Principles Related to Government Assistance for Private Sector Companies. GAO-10-719. Washington, D.C.: August 3, 2010. Troubled Asset Relief Program: Continued Attention Needed to Ensure the Transparency and Accountability of Ongoing Programs. GAO-10-933T. Washington, D.C.: July 21, 2010. Management Report: Improvements are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-10-743R. Washington, D.C.: June 30, 2010. Troubled Asset Relief Program: Treasury’s Framework for Deciding to Extend TARP Was Sufficient, but Could be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010. Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Programs. GAO-10-634. Washington, D.C.: June 24, 2010. Debt Management: Treasury Was Able to Fund Economic Stabilization and Recovery Expenditures in a Short Period of Time, but Debt Management Challenges Remain. GAO-10-498. Washington, D.C.: May 18, 2010. Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010. Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010. Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010. Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010. Troubled Asset Relief Program: The U.S. 
Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009. Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009. Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through September 25, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of September 18, 2009. GAO-10-24SP. Washington, D.C.: October 8, 2009. Debt Management: Treasury Inflation Protected Securities Should Play a Heightened Role in Addressing Debt Management Challenges. GAO-09-932. Washington, D.C.: September 29, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009. Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009. Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-920T. Washington, D.C.: July 22, 2009. Troubled Asset Relief Program: Status of Participants’ Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009. Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009. Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009.
Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
The Emergency Economic Stabilization Act of 2008 (EESA) authorized Treasury to create TARP, designed to restore liquidity and stability to the financial system and to preserve homeownership by assisting borrowers struggling to make their mortgage payments. Congress reduced the initial authorized amount of $700 billion to $475 billion as part of the Dodd-Frank Wall Street Reform and Consumer Protection Act. EESA also required that GAO report every 60 days on TARP activities in the financial and mortgage sectors. This report provides an update on the condition of all TARP programs—nonhousing and housing—as of September 30, 2014. To conduct this work, GAO analyzed audited financial data for various TARP programs; reviewed documentation such as press releases and agency reports on TARP programs; and interviewed Treasury officials. GAO provided a draft of this report to Treasury. Treasury generally concurred with GAO's findings and provided technical comments, which GAO has incorporated, as appropriate. GAO makes no recommendations in this report. The Department of the Treasury (Treasury) continues to wind down Troubled Asset Relief Program (TARP) nonhousing programs that were designed to support financial and automotive markets (see figure). As of September 30, 2014, Treasury had exited four of the nine nonhousing programs that were once active, and was managing assets totaling $2.9 billion under those remaining. Some programs have yielded returns that exceed the original investment. For example, as of September 30, 2014, repayments and income from participants in the Capital Purchase Program, which provided capital to over 700 financial institutions, had exceeded original investments. In contrast, as of the same date Treasury had recouped 86 percent of its expenditures and incurred an estimated lifetime cost of $12.2 billion for the Automotive Industry Financing Program, which invested in major domestic automakers to prevent a significant industry disruption. Treasury's decision to fully exit a program depends on various factors, including the participating institutions' health and market conditions. TARP-funded housing programs, which focus on preventing avoidable foreclosures, are ongoing. As of September 30, 2014, Treasury had disbursed $13.7 billion (36 percent) of the $38.5 billion in TARP housing funds (see figure). The number of new Home Affordable Modification Program (HAMP) permanent modifications added on a quarterly basis rose slightly in early 2013 but fell in 2014 to the lowest level since the program's inception. According to Treasury, this decline is attributable in part to the shrinking pool of eligible mortgages, as evidenced in the declining number of 60-day-plus delinquencies reported by the industry. Treasury has taken steps to help more borrowers, including by extending the deadline for program applications for a third time, until at least 2016. Also, Treasury launched a new series of public service advertisements that were distributed through a donated media campaign.
To determine the primary reasons for the growth in reported CMP debt at CMS and whether CMS’ CMP receivables have financial accountability and reporting issues similar to those of its non-CMP receivables, we obtained and reviewed CMS’ audited financial statements, HHS’ accountability reports, and other financial reports that relate to CMS’ CMP and non-CMP collection activities. We also analyzed CMS’ reported CMP receivables and related accounts and information for fiscal years 1997 through 2000 and compared CMS’ CMP accounting records to detailed subsidiary tracking records. We did not independently verify the completeness or accuracy of the subsidiary system data or test information security controls over the systems used to compile these data. We interviewed officials in CMS, HHS OIG, and the Department of Justice’s (DOJ) Executive Office for U.S. Attorneys (EOUSA) to obtain explanations for identified significant trends, similarities with non-CMP receivables, material internal control weaknesses, findings and exceptions, as well as unsupported/unreconciled amounts. To determine whether adequate processes exist to collect CMP debt, we obtained an understanding of CMS’ CMP debt collection policies and procedures that relate to CMS’ long-term care, HHS OIG, and DOJ cases, as well as applicable federal laws and regulations. Because CMS could not provide complete and reliable CMP information, we did not select a random sample from CMP receivables as of September 30, 2000, or from CMP receivable cases closed in fiscal years 1999 and 2000. However, as agreed with your staff, we performed limited tests of CMS’ debt collection policies and procedures. Specifically, we selected and reviewed all delinquent CMP debts (over 60 days delinquent per CMS records) with a recorded receivables balance as of September 30, 2000, greater than $2 million (12 debts). This represented 57 percent of the delinquent CMP debt balance and 27 percent of the total CMP debt balance per CMS records. We interviewed DOJ’s EOUSA officials to obtain explanations for identified findings and exceptions. We also analyzed long-term care CMP assessment and settlement data for fiscal years 1999 and 2000 for all cases in which the settlements were reached at three selected CMS regional offices. According to CMS’ Civil Monetary Penalty Tracking System, the long-term care cases opened at these regional offices represented approximately 76 percent of all long-term care CMP cases opened during this 2-year period. For identified findings and exceptions, we developed and submitted questions to CMS’ regional offices. We obtained and analyzed the regional offices’ written responses to our questions. To determine what roles, if any, OMB and Treasury play in overseeing and monitoring the government’s collection of civil debt, we interviewed OMB and Treasury officials. We performed our review in Washington, D.C., and Atlanta, Georgia, from March 2001 through August 2001 in accordance with U.S. generally accepted government auditing standards. Prior to our December 14, 2001, briefing to your office on the results of our work, we provided CMS, HHS OIG, DOJ’s EOUSA, Treasury, and OMB with a draft of our detailed briefing slides, which contained recommendations to the Administrator of CMS, for review and comment. The comments received are discussed in the “Agency Comments and Our Evaluation” section of this report and on the “Agency Comments” slide in appendix I, or are incorporated into the report as applicable. CMS’ letter is reprinted in appendix II.
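A minimal sketch of the delinquent-debt selection rule described above, applied to invented debt records; the balances and delinquency ages are hypothetical and are not CMS data.

```python
# Illustration of the delinquent-debt selection described above.
# The records are hypothetical; they are not CMS data.

# (debt id, recorded balance in dollars, days delinquent) as of Sept. 30, 2000
debts = [
    ("D-01", 5_000_000, 120),
    ("D-02", 2_500_000, 90),
    ("D-03", 1_800_000, 200),  # under the $2 million threshold: not selected
    ("D-04", 3_200_000, 30),   # not over 60 days delinquent: not selected
]

selected = [d for d in debts if d[1] > 2_000_000 and d[2] > 60]

delinquent_total = sum(d[1] for d in debts if d[2] > 60)
coverage = sum(d[1] for d in selected) / delinquent_total
print(f"Selected {len(selected)} debts, "
      f"covering {coverage:.0%} of the delinquent balance")
```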
As of September 30, 2000, HHS reported that CMS’ CMP receivables totaled about $260 million. CMP debt results from deficiencies at long-term care nursing facilities or fraud and abuse and is collected by three separate groups. CMS’ regional offices are responsible for the long-term care debt, and HHS OIG and DOJ are responsible for fraud and abuse debt. DOJ fraud debt accounted for about 88 percent of the reported $260 million receivables balance as of September 30, 2000, while OIG fraud and abuse debt accounted for approximately 11 percent and CMS’ long-term care debt accounted for about 1 percent of the reported balance. For the long-term care debt, Sections 1819 (42 U.S.C. Section 1395i-3) and 1919 (42 U.S.C. Section 1396r) of the Social Security Act establish requirements for surveying nursing facilities to determine whether they meet the requirements for participation in the Medicare and Medicaid programs. A state survey agency must survey each nursing facility within 15 months of the previous survey. In addition, the statewide average interval between surveys must be 12 months or less. Remedies, of which CMP is one, may be used when a nursing facility is not in substantial compliance with the requirements for participation in the Medicare and Medicaid programs. A CMP is imposed either for the number of days ($50 to $10,000 per day) or for each instance ($1,000 to $10,000 per instance) that a nursing facility is not in substantial compliance with the participation requirements. The amount depends on the severity of the deficiency. A written notice of the CMP is sent to the nursing facility. The facility has 60 days from the date of the notice to either waive its right to an administrative hearing and automatically receive a 35-percent reduction in the CMP amount or request an administrative hearing. At any time prior to an administrative hearing, the nursing facility may enter into a settlement of the CMP amount. Once there is an administrative hearing decision or a settlement, the final CMP receivable amount is determined. According to CMS’ State Operations Manual, if a decision is made to settle, the settlement should not be for a better term than had the nursing facility opted for a 35-percent reduction. To track assessments and collections, CMS’ regional offices use the Civil Monetary Penalty Tracking System for fiscal year 1999 and later CMP cases and spreadsheets for fiscal year 1996 through fiscal year 1998 CMP cases. In addition, CMS’ regional offices use the long-term care system to track CMP cases. For civil health care fraud matters, DOJ generally uses the False Claims Act, as well as common law fraud remedies, payment by mistake, unjust enrichment, and conversion to recover amounts from those who have submitted false or improper claims to the United States. Civil health care fraud matters are referred directly from federal or state investigative agencies, or result from filings by private persons known as “relators,” who file suits on behalf of the federal government under the 1986 qui tam amendments to the False Claims Act. The False Claims Act (31 U.S.C. Sections 3729-3733) provides that anyone who “knowingly” submits a false claim to the government is liable for a penalty from $5,000 to $10,000 plus up to three times the amount of damages sustained by the government. A court judgment or settlement establishes amounts due by violators.
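To make the statutory exposure concrete, consider a hypothetical illustration (the claim count and damages below are our own assumptions, not figures from any case discussed in this report): a provider found to have knowingly submitted 10 false claims that caused the government $100,000 in damages. At the statutory maximums, the amount due could be as much as

```latex
\[
\underbrace{10 \times \$10{,}000}_{\text{per-claim penalties}}
\;+\;
\underbrace{3 \times \$100{,}000}_{\text{treble damages}}
\;=\; \$400{,}000 .
\]
```

The court judgment or settlement then fixes the actual amount due, which becomes the receivable that CMS records.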
DOJ prepares a Health Care Fraud Tracking Form, which is submitted to HHS OIG and CMS’ Office of Financial Management, and establishes the debt in a tracking system. If the health care violator does not pay the fraud debt, DOJ’s U.S. Attorneys’ Offices (USAOs) have several options to pursue collection, including contacting the debtor, securing or executing upon a judgment, filing liens or garnishments, and referring the delinquent debt to Treasury. To track assessments and collections of civil health care fraud cases, DOJ’s USAOs use either the Tracking Assistance for the Legal Office Network or the Collection Litigation Automated Support System, and DOJ’s Civil Division uses the Debt Collection System. HHS OIG also pursues fraud and abuse cases. According to HHS OIG data, since 1988, about 90 percent of its CMP assessments relate to the requirements of Section 1867 of the Social Security Act (42 U.S.C. Section 1395dd). This statute specifies that a hospital’s emergency department must provide an appropriate medical screening examination within the capability of the hospital’s emergency department to any individual who comes to the department with a request for examination or treatment of a medical condition. In addition, if the hospital determines that the individual has an emergency medical condition, the hospital must either stabilize the medical condition or transfer the individual to another medical facility. This statute provides for a maximum penalty of $50,000 per violation. According to HHS OIG data, since 1988, approximately 10 percent of its CMP assessments relate to violations of the statutory provisions applicable to false or fraudulent claims submitted to federal health care programs in Section 1128A of the Social Security Act (42 U.S.C. Section 1320a-7a). This law provides that for false, fraudulent, or otherwise improper claims, HHS may impose a penalty of not more than $10,000 for each item or service and an assessment of no more than triple the amount claimed for each item or service in lieu of damages. HHS OIG uses spreadsheets to track assessments and collections of CMP cases. CMS’ Office of Financial Management is responsible for the accounting and reporting of CMP receivables in the general ledger using the Financial Accounting Control System. This office is also responsible for determining the allowance for uncollectible receivables. According to CMS, an allowance is calculated as the amount of CMP debt delinquent for 60 days or longer that is considered to be inactive and truly delinquent based on a case-by-case review of each receivable. HHS OIG’s Office of Audit Services stated that due to the immateriality of the CMP receivables balances in relation to CMS’ total accounts receivable balance, CMS’ external financial statement auditors have not performed any detailed audit work on CMP receivables. However, these auditors have identified various reporting, internal control, and accountability issues related to Medicare (non-CMP) receivables. These issues resulted in a qualified opinion on CMS’ financial statements for fiscal year 1998 and a material weakness on non-CMP receivables during fiscal years 1998 through 2000. The external financial statement auditors reported that CMS’ lack of an integrated financial management system continues to impair its ability to adequately support the reported non-CMP receivables activity and balances.
The external financial statement auditors also identified deficiencies in the non-CMP receivables activity, including incorrectly reported activity by non-CMP contractors and the inability of non-CMP contractors to reconcile reported ending balances to the contractors’ subsidiary records. The external financial statement auditors’ recommendations included establishing an integrated financial management system for use by non-CMP contractors and CMS’ central and regional offices and ensuring that all non-CMP contractors develop and implement control procedures to provide independent checks of the validity, accuracy, and completeness of the amounts reported to CMS, including a reconciliation with the contractors’ supporting documentation, and periodic review of contractors’ control procedures over reconciliations. The primary reason for the growth of CMS’ CMP receivables was the expansion of fraud and abuse detection activities from fiscal year 1995 through fiscal year 1997, which significantly increased fraud and abuse debts in fiscal year 1997. This is supported by CMS’ accounting records, which revealed that about $255 million of the $260 million CMP receivables balance as of September 30, 2000, related to fraud and abuse debts. Of the $255 million in receivables, about $172 million remained outstanding from fiscal year 1997. In 1995, under authority to use trust fund money to develop or demonstrate improved methods for investigating and prosecuting fraud, HHS launched Project Operation Restore Trust. The project targeted fraud and abuse in three high-growth areas of the health care industry: home health agencies, nursing homes, and durable medical equipment suppliers. In addition, the passage of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 expanded funding for HHS’ fraud and abuse detection activities by establishing the Fraud and Abuse Control Program, a program designed to combat fraud and abuse committed against health plans (both public and private). By January 1, 1997, HHS OIG and DOJ had jointly implemented the Fraud and Abuse Control Program, as required by HIPAA. HHS reported, in fiscal year 1996, that Project Operation Restore Trust combined with the upgraded funding provided by HIPAA would enable HHS to more aggressively detect and prevent fraud, waste, and abuse. In addition, DOJ’s EOUSA stated that DOJ’s health care fraud activities were expanded in fiscal years 1996 and 1997, and with the implementation of the Health Care Fraud Tracking Forms in December 1996, DOJ began submitting health care fraud debts to CMS during fiscal year 1997. Prior to this time, only fraud and abuse debts submitted by HHS OIG were recorded and reported by CMS. As discussed above, no detailed audit work on CMP receivables has been performed by CMS’ external financial statement auditors due to the small balance of CMS’ CMP receivables in relation to CMS’ total accounts receivables, which consist primarily of non-CMP Medicare receivables. For example, as of September 30, 2000, CMS’ CMP receivables were reported to be about $260 million, or 3 percent, of total reported accounts receivables of approximately $8.1 billion, of which non-CMP Medicare receivables totaled more than $7.7 billion. Our analysis of CMS’ CMP receivables data revealed financial accountability and reporting issues similar to those identified for non-CMP receivables by CMS’ external financial statement auditors.
We found that CMS does not have formal written policies and procedures for the reconciliation of CMP receivables, recording CMP receivables in the general ledger, and determining the allowance for uncollectible accounts related to CMP receivables. As a result, we found (1) unreconciled differences between CMP receivables amounts on HHS’ accountability reports and CMS’ audited financial statements, (2) unreconciled differences between CMP receivables amounts in CMS’ general ledger and the detailed subsidiary systems, (3) incorrect recording of CMP receivables in the general ledger (debts paid in full were not removed, and debts were misclassified between delinquent and current), and (4) lack of an adequate collectibility analysis for uncollectible accounts relating to CMP receivables. CMS does not have policies and procedures requiring it to compare CMP receivables reported in its audited financial statements and HHS’ accountability report. According to HHS, CMS is the only HHS component that has CMPs. Therefore, the CMP receivable amounts reported in HHS’ accountability report and CMS’ audited financial statements should be the same. However, our work identified that year-end CMP receivables balances for fiscal year 1997 through fiscal year 1999 differed by tens of millions of dollars between HHS’ accountability report and CMS’ audited financial statements. For example, CMS’ fiscal year 1997 financial statements reported a CMP receivables balance of about $243 million; however, HHS’ accountability report for fiscal year 1997 reported approximately $191 million—a difference of about $52 million. The beginning balance for CMP receivables in HHS’ fiscal year 2000 draft accountability report was adjusted by approximately $50 million to agree with CMS’ accounting records. As a result of the adjustment, the fiscal year 2000 beginning balance in HHS’ fiscal year 2000 accountability report differed from the ending balance in its fiscal year 1999 accountability report. After we brought this difference to HHS’ attention, a statement identifying the difference was added to HHS’ fiscal year 2000 accountability report’s overview; however, the statement did not explain the cause of the difference. Similar to the accountability and reporting issues reported for non-CMP receivables by CMS’ external financial statement auditors, CMS also does not have policies or procedures for reconciling CMP receivables balances in the general ledger to detailed support maintained in the subsidiary systems. As discussed above, three separate groups (CMS’ long-term care, HHS OIG, and DOJ) collect CMP debt. Each group maintains at least one subsidiary system to track its CMP cases. As of September 30, 2000, the CMP receivables balance in the general ledger and the detailed subsidiary systems differed by a net of about $22 million, with the difference for each group ranging from what appears to be an understatement of about $35 million to a possible overstatement of about $29 million. The difference between the general ledger and the subsidiary systems for the long-term care CMP debt totaled about $17 million. The primary reason for the long-term care difference is that, beginning in fiscal year 1999, all new long-term care CMP receivables are no longer recorded in the general ledger until a collection is made. This practice is not in accordance with SFFAS No. 1, Accounting for Selected Assets and Liabilities, and SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting.
These statements require a receivable to be recognized once amounts that are due to the federal government are assessed, net of an allowance for uncollectible amounts. CMS stated that the general ledger does not include all potentially valid long-term care CMP receivables because of the unreliability of the long-term care CMP accounts receivable amounts in the Civil Monetary Penalty Tracking System. CMS stated that the long-term care CMP receivables would be reviewed for validity and recorded in the general ledger as part of the planned upgrade of the subsidiary system. In addition, according to CMS’ accounting records, of the 12 selected delinquent debts we reviewed, 2 totaling about $24 million were the responsibility of HHS OIG. However, upon further research by DOJ, these debts were actually the responsibility of DOJ and were included in DOJ’s subsidiary system on September 30, 2000, as uncollected. This misclassification appears to explain a portion of the difference between the general ledger and HHS OIG’s and DOJ’s subsidiary systems. In addition to CMS’ lack of policies and procedures relating to the reconciliation of CMP information, CMS stated that the Division of Accounting staff who record CMP receivables in the general ledger rely on notes and on-the-job training from the staff previously responsible for those duties. However, these informal policies and procedures do not (1) contain specific guidance on recording due dates for payments being made through payment plans or recording collections against established receivables and (2) address control procedures to ensure the accurate recording of CMP receivables in the general ledger, such as review and approval of transactions by a supervisor. Our testing identified instances, in addition to the above misclassification between HHS OIG and DOJ receivables, in which CMP receivables were recorded in the general ledger incorrectly. For the 12 selected delinquent debts with receivable balances totaling about $70 million, 7 debts totaling approximately $32 million were recorded incorrectly in the general ledger. For four of the debts totaling about $23 million, CMS failed to remove the debts from CMP receivables even though collections of these debts were received prior to September 30, 2000. In addition, CMS incorrectly classified three of the debts totaling about $9 million as delinquent, instead of current, even though collections were being received in accordance with the due dates of the respective payment plans. Further, for the 12 selected delinquent debts, documentation supporting one of the debts with a balance of about $3 million could not be located by DOJ. The status of receivables—current, delinquent, or paid—should be properly noted since it affects the accuracy of the allowance for uncollectible accounts, which is netted against gross CMP receivables reported on HHS’ and CMS’ financial statements. In addition, these errors could possibly have been avoided if there had been appropriate review and approval of such transactions.
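Mechanically, the reconciliation these findings call for is a straightforward comparison once the underlying records are reliable. The sketch below is a minimal illustration with hypothetical balances and simplified group names (none of the figures are from CMS records): it compares the general ledger balance for each collection group against the sum of that group's subsidiary tracking records and flags any difference for research.

```python
# Minimal sketch of a general-ledger-to-subsidiary reconciliation.
# All balances are hypothetical, in thousands of dollars.

general_ledger = {"LTC": 1_200, "HHS OIG": 30_400, "DOJ": 195_000}

# Each group's subsidiary system holds individual case balances.
subsidiary = {
    "LTC": [400, 350, 275],            # e.g., CMPTS case records
    "HHS OIG": [15_000, 15_400],       # e.g., spreadsheet entries
    "DOJ": [100_000, 60_000, 36_000],  # e.g., USAO tracking systems
}

for group, gl_balance in general_ledger.items():
    subsidiary_total = sum(subsidiary[group])
    difference = gl_balance - subsidiary_total
    status = "OK" if difference == 0 else f"UNRECONCILED ({difference:+,})"
    print(f"{group:8} GL={gl_balance:>9,}  subsidiary={subsidiary_total:>9,}  {status}")
```

Each unreconciled difference would then be researched, corrected in whichever system is wrong, and documented, which is the control the findings and recommendations in this report describe.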
Lastly, CMS does not have formal written policies and procedures for determining its allowance for uncollectible accounts. As previously noted, CMS stated that its allowance for uncollectible accounts represents the balance of all CMP debt delinquent for 60 days or longer that is considered to be inactive and truly delinquent based on a case-by-case review of each receivable. According to SFFAS No. 1, losses due to uncollectible amounts should be measured through a systematic methodology with an analysis of both individual accounts and a group of accounts as a whole. Individual account analysis should be based on factors such as the debtor’s ability to pay and the probable recovery of amounts from secondary sources. Group analysis should be performed using a method such as statistical estimation by modeling or sampling and should take into consideration such factors as historical loss experience and recent economic events. However, CMS’ allowance for uncollectible accounts is not based on a systematic analysis of the collectibility of the outstanding receivables balance. CMP debt collection policies and procedures have been established by CMS for the long-term care debt and by HHS OIG and DOJ for the fraud and abuse debt. However, incomplete and unreliable CMP information limited us from determining the overall adequacy of the CMP debt collection policies and procedures. As a result and as agreed with your staff, we performed limited tests of CMS’ debt collection policies and procedures. We found that debt collection policies and procedures were followed for 11 of the 12 selected delinquent debts. We could not determine whether DOJ followed its debt collection policies and procedures for the remaining selected debt because DOJ was unable to locate supporting documentation. In analyzing long-term care CMP cases and settlement data for fiscal years 1999 and 2000, we noted one debt collection matter in which debt collection policies and procedures can be strengthened. The matter relates to CMS often settling at discounts that exceeded the 35-percent threshold established by management. At the three selected regional offices, we found that CMS reduced the assessed long-term care CMP amounts more than 35 percent for 89 out of 215 cases (41 percent), or about $8.4 million out of about $11.4 million (73 percent) in assessments settled in fiscal years 1999 and 2000. For the 89 cases, a 35-percent discount on approximately $8.4 million in assessments results in possible collections of about $5.5 million. However, CMS actually discounted these cases in total by about 69 percent, reducing potential collections by about $2.9 million. According to CMS officials, other matters can develop while a hearing is pending that can affect the settlement amount, such as unavailability of witnesses and new information related to the deficiencies. In these cases, according to the officials, it may be in CMS’ best interests to settle for less, given the cost of litigation and the risk of not collecting anything. However, CMS does not have debt collection policies and procedures for instances in which a discount greater than 35 percent is allowed. Of the three selected regional offices, one regional office was maintaining documentation to support that such settlements were warranted, while two regional offices were not maintaining documentation.
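The forgone collections follow directly from the reported figures; using the report's rounded amounts, the arithmetic is:

```latex
\begin{align*}
\text{Collections at the 35-percent threshold: } & \$8.4\text{ million} \times (1 - 0.35) \approx \$5.5\text{ million}\\
\text{Collections at the actual 69-percent discount: } & \$8.4\text{ million} \times (1 - 0.69) \approx \$2.6\text{ million}\\
\text{Reduction in potential collections: } & \$5.5\text{ million} - \$2.6\text{ million} \approx \$2.9\text{ million}
\end{align*}
```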
Consistent with good management practices and the Standards for Internal Control in the Federal Government, when exceptions to a stated management policy occur, typical control practices are to document, review, and approve such exceptions to ensure that management’s objectives are being met. OMB and Treasury are provided with information useful in performing their debt oversight roles through the agencies’ reporting of CMP receivables and referral of CMP debt to Treasury for collection. With respect to the reporting of CMP receivables, beginning with fiscal year 1997, CMS and HHS have annually disclosed CMP receivables information in their financial reports. In accordance with requirements of DCIA and Treasury guidance, CMS reports receivables information, including CMP, to Treasury quarterly in the Report on Receivables Due from the Public. However, in discussions with OMB officials, they emphasized that OMB’s oversight is broad and consists of monitoring and evaluating governmentwide credit management, debt collection activities, and federal agency performance. OMB also stated that it is the specific responsibility of the agency Chief Financial Officer and program managers to manage and be accountable for the debt collection of their agency’s credit portfolios in accordance with applicable federal debt statutes, regulations, and guidance. OMB further added that it is the role of each agency to specifically monitor and collect its civil penalty debt regardless of dollar magnitude and the responsibility of each agency’s OIG to provide oversight through audit of the agency’s debt collection activities. Regarding referral of CMP debt to Treasury, Treasury stated that it relies on the agencies to determine what debt should be referred to Treasury for collection as required by DCIA. DCIA requires federal agencies to transfer eligible nontax debt or claims over 180 days delinquent to Treasury for collection actions. DOJ stated that referral to Treasury was one type of debt collection tool used by USAOs when pursuing collection of fraud cases. However, CMS is not referring long-term care CMP debt to Treasury for collection actions. CMS stated that it plans to refer eligible long-term care CMP debts to Treasury in the future and is currently researching the issue. The expanded fraud and abuse detection activities and the resulting growth in fraud and abuse debt are the primary reason for the increase in CMP receivables over the last several years. In addition, our work found financial accountability and reporting issues similar to those reported for non-CMP receivables and found that a CMP debt collection policy and procedure can be strengthened. As long as CMP receivables continue to be considered immaterial in the judgment of CMS’ external financial statement auditors, minimal audit coverage will be provided in this area. Therefore, CMS management needs to take steps to improve the accounting and reporting of CMP receivables.
In order to improve CMS’ accounting, reporting, and collection of CMP receivables, we recommend that the Administrator of CMS establish and implement formal written accounting and reporting policies and procedures for (1) comparing CMP receivables reported in CMS’ audited financial statements and HHS’ accountability report, (2) reconciling CMP receivables between CMS’ general ledger and the detailed subsidiary systems, (3) recording long-term care receivables in the general ledger, since long-term care CMP receivables currently are not recorded in the general ledger until a collection is made, and (4) ensuring the accurate recording of information into the general ledger. We also recommend that the Administrator of CMS determine an approach for assessing the collectibility of outstanding amounts so that a meaningful allowance for uncollectible accounts can be reported and used for measuring debt collection performance and establish formal written policies and procedures to ensure that the allowance for uncollectible CMP debts is properly determined using such an approach. We further recommend that the Administrator of CMS establish and implement formal written debt collection policies and procedures for (1) handling instances in which a discount greater than 35 percent is allowed, including the documentation, review, and approval of such settlements, and (2) referring eligible long-term care CMP debt to Treasury as required by DCIA. A draft of the briefing slides was provided to CMS, HHS OIG, DOJ’s EOUSA, OMB, and Treasury for their review and comment. CMS’ letter is reprinted in appendix II. CMS, HHS OIG, DOJ’s EOUSA, OMB, and Treasury also provided us with technical comments that we considered and addressed, as appropriate. The following discussion addresses these agencies’ comments and our evaluation. CMS agreed with all but one of our recommendations. CMS did not agree with our recommendation to establish and implement debt collection policies and procedures for instances in which a discount greater than 35 percent is allowed. According to CMS, flexibility is needed in the settlement process, and issuing policies and procedures on settlements would add rigidity to the process. It was not our intent that a rigid process for determining settlement amounts be implemented. However, consistent with good management practices and the Standards for Internal Control in the Federal Government, when exceptions to a stated management policy occur, typical control practices are to document, review, and approve such exceptions to ensure that management’s objectives are being met. CMS also stated that the non-CMP issues reported by CMS’ external financial statement auditors have no correlation to the CMP issues discussed in the report. We disagree. Even though these are two different types of debt, the underlying financial accountability and reporting issues are similar. For example, as discussed earlier, the external financial statement auditors reported that non-CMP contractors are unable to reconcile reported ending balances to the contractors’ subsidiary records. Our review also found reconciliation problems with the CMP receivables. As discussed in this report, as of September 30, 2000, the CMP receivables balance in the general ledger and the detailed subsidiary systems differed by a net of about $22 million.
We are sending copies of this report to the Chairman of the Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs, as well as the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs. We will also provide copies to the Secretary of Health and Human Services; the Administrator, Centers for Medicare and Medicaid Services; the Attorney General; the Inspector General of the Department of Health and Human Services; the Secretary of the Treasury; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-3406 or Steven Haughton, Assistant Director, at (202) 512-5999. Additional contributors to this assignment were Dawn Simpson, Suzanne Murphy, Rathi Bose, and Marshall Hamlett. (CMS was formerly the Health Care Financing Administration (HCFA).) Our objectives were to determine the primary reasons for the growth in reported CMP debt at CMS and whether CMS’ CMP receivables have similar financial accountability and reporting issues as its non-CMP receivables, whether adequate processes exist to collect CMP debt, and what roles, if any, the Office of Management and Budget (OMB) or the Department of the Treasury play in overseeing and monitoring CMS’ collection of CMP debt. Increases in fraud and abuse debt were the primary reason for the reported growth in CMP debt. CMS’ CMP receivables have similar financial accountability and reporting issues as those identified for non-CMP receivables by its external financial statement auditors. Incomplete and unreliable CMP data limited the determination of the overall adequacy of CMP debt collection policies and procedures. Instead, as agreed with your staff, we performed limited tests of CMS’ debt collection policies and procedures and found that one policy and procedure, relating to settling at amounts that exceed a 35-percent discount threshold, can be strengthened. OMB and Treasury are provided with information useful in performing CMP debt oversight roles. However, OMB stated that it has broad oversight responsibility in monitoring and evaluating governmentwide debt collection activities. OMB further stated that it is the specific responsibility of the agency to monitor, manage, and collect CMP debt and the responsibility of the agency’s Office of Inspector General (OIG) to provide oversight through audit of the agency’s debt collection activities. In addition, Treasury stated that it relies on the agencies to determine what debt should be referred to Treasury for collection action, as required by the Debt Collection Improvement Act of 1996 (DCIA). However, not all eligible CMP debts are currently being referred. As of September 30, 2000, the Department of Health and Human Services (HHS) reported that CMS’ CMP receivables totaled about $260 million. CMP debt results from deficiencies at long-term care nursing facilities (LTC) or fraud and abuse and is collected by three separate groups. CMS is responsible for the LTC debt, and HHS’ Office of Inspector General (OIG) and the Department of Justice (DOJ) are responsible for fraud and abuse debt. CMP receivables relate to two categories of violations: (1) LTC and (2) fraud and abuse. Sections 1819 (42 U.S.C. Section 1395i-3) and 1919 (42 U.S.C. Section 1396r) of the Social Security Act require standard surveys of nursing facilities to determine whether they meet the requirements for participation in the Medicare and Medicaid programs. A state survey agency must survey each nursing facility within 15 months of the previous survey. In addition, the statewide average interval between surveys must be 12 months or less.
Remedies, of which CMP is one, may be used when a nursing facility is not in substantial compliance with the requirements for participation in the Medicare and Medicaid programs. A CMP is imposed either for the number of days ($50 to $10,000 per day) or for each instance ($1,000 to $10,000 per instance) that a nursing facility is not in substantial compliance with the participation requirements. The amount depends on the severity of the deficiency. A written notice of the CMP is sent to the nursing facility. The facility has 60 days from the date of the notice to either waive its right to an administrative hearing and automatically receive a 35-percent reduction in the CMP amount or request an administrative hearing. At any time prior to an administrative hearing, the nursing facility may enter into a settlement of the CMP amount. Once there is an administrative hearing decision or a settlement, the final CMP receivable amount is determined. According to CMS’ State Operations Manual, if a decision is made to settle, the settlement should not be for a better term than had the nursing facility opted for a 35-percent reduction. To track assessments and collections, CMS’ regional offices use the Civil Monetary Penalty Tracking System (CMPTS) for FY 1999 and later CMP cases and spreadsheets for FY 1996 through FY 1998 CMP cases. In addition, CMS’ regional offices use the LTC system to track CMP cases. (Regulations implementing the imposition of LTC CMPs were effective July 1, 1995; the first LTC CMP assessment was made at the beginning of FY 1996.) For civil health care fraud matters, DOJ generally uses the False Claims Act, as well as common law fraud remedies, payment by mistake, unjust enrichment, and conversion to recover amounts from those who have submitted false or improper claims to the United States. Civil health care fraud matters are referred directly from federal or state investigative agencies, or result from filings by private persons known as “relators,” who file suits on behalf of the federal government under the 1986 qui tam amendments to the False Claims Act. The False Claims Act (31 U.S.C. Sections 3729-3733) provides that anyone who “knowingly” submits a false claim to the government is liable for a penalty from $5,000 to $10,000 plus up to three times the amount of damages sustained by the government. A court judgment or settlement establishes amounts due by violators. DOJ prepares a Health Care Fraud Tracking Form, which is submitted to HHS OIG and CMS’ Office of Financial Management, and establishes the debt in a tracking system. In documenting a judgment or settlement, DOJ uses this form to note the judgment or settlement amount and the recipients to be paid from the collected debt. If the health care violator does not pay the fraud debt, DOJ’s U.S. Attorneys’ Offices (USAOs) have several options to pursue collection, such as contacting the debtor, securing or executing upon a judgment, filing liens or garnishments, and referring the delinquent debt to Treasury. DOJ uses one of the following systems to track assessments and collections of civil health care fraud cases: USAOs use either the Tracking Assistance for the Legal Office Network or the Collection Litigation Automated Support System, and the Civil Division uses the Debt Collection System. According to HHS OIG data, since 1988, about 90 percent of its CMP assessments relate to the requirements of Section 1867 of the Social Security Act (42 U.S.C. Section 1395dd).
This statute specifies that a hospital’s emergency department must provide an appropriate medical screening examination within the capability of the hospital’s emergency department to any individual who comes to the department with a request for examination or treatment of a medical condition. In addition, if the hospital determines that the individual has an emergency medical condition, the hospital must either stabilize the medical condition or transfer the individual to another medical facility. This statute provides for a maximum penalty of $50,000 per violation. According to HHS OIG data, since 1988, approximately 10 percent of its CMP assessments relate to violations of the statutory provisions applicable to false or fraudulent claims submitted to federal health care programs in Section 1128A of the Social Security Act (42 U.S.C. Section 1320a-7a). This law provides that for false, fraudulent, or otherwise improper claims, HHS may impose a penalty of not more than $10,000 for each item or service and an assessment of no more than triple the amount claimed for each item or service in lieu of damages. HHS OIG uses spreadsheets to track assessments and collections of CMP cases. CMS’ Office of Financial Management is responsible for the accounting and reporting of CMP receivables in the general ledger using the Financial Accounting Control System (FACS). This office is also responsible for determining the allowance for uncollectible receivables. According to CMS, an allowance is calculated as the amount of CMP debt delinquent for 60 days or longer that is considered to be inactive and truly delinquent based on a case-by-case review of each receivable. According to HHS OIG, collections being received through payment plans for CMP assessments under Section 1128A of the Social Security Act are sent directly to CMS. HHS OIG does not track collections for these cases. Due to the immateriality of CMP receivables balances, no detailed audit work has been performed on CMP receivables by CMS’ external financial statement auditors. However, these auditors have identified various reporting, internal control, and accountability issues related to Medicare (non-CMP) receivables. These issues resulted in a qualified opinion on CMS’ financial statements for fiscal year 1998 and a material weakness on non-CMP receivables during fiscal years 1998 through 2000. (CMS’ financial statements and related auditor reports referred to in the slides were issued under CMS’ former name, HCFA.) The external auditors reported that CMS’ lack of an integrated financial management system continues to impair its ability to adequately support the reported non-CMP receivables activity and balances. The external auditors also identified deficiencies in non-CMP receivables activity, including the following: incorrectly reported activity by non-CMP contractors and inability of non-CMP contractors to reconcile reported ending balances to the contractors’ subsidiary records. The external auditors’ recommendations included (1) establishing an integrated financial management system for use by non-CMP contractors and CMS’ central and regional offices and (2) ensuring that all non-CMP contractors develop and implement control procedures to provide independent checks of the validity, accuracy, and completeness of the amounts reported to CMS, including a reconciliation with the contractors’ supporting documentation, and periodic review of contractors’ control procedures over reconciliations.
Obtained and reviewed CMS’ audited financial statements, HHS’ accountability reports, and other financial reports that relate to CMS’ CMP and non-CMP collection activities. Analyzed CMS’ reported CMP receivables and related accounts and information for fiscal years 1997 through 2000. Compared CMS’ CMP accounting records to detailed subsidiary tracking records. Obtained an understanding of CMS’ CMP debt collection policies and procedures that relate to LTC, HHS OIG, and DOJ cases, as well as applicable federal laws and regulations. Due to incomplete and unreliable CMP information, did not select a random sample from CMP receivables as of September 30, 2000, or from CMP receivable cases closed in fiscal years 1999 and 2000. However, as agreed with your staff, we performed limited tests of CMS’ debt collection policies and procedures, including the following. Selected and reviewed all delinquent CMP debts (over 60 days delinquent) with a recorded receivable balance as of September 30, 2000, greater than $2 million (12 debts), which represented 57 percent of the delinquent CMP debt balance and 27 percent of the total CMP debt balance per CMS’ records. (The 12 selected debts were fraud cases managed by DOJ.) Analyzed LTC CMP assessment and settlement data for FY 1999 and 2000 for all cases settled at three selected CMS regional offices. According to CMPTS, the LTC cases opened at these regional offices represented approximately 76 percent of all LTC CMP cases opened during this 2-year period. Interviewed officials in CMS, HHS OIG, and DOJ’s Executive Office for U.S. Attorneys (EOUSA) to obtain explanations for identified significant trends, similarities with non-CMP receivables, material internal control weaknesses, findings and exceptions, as well as unsupported/unreconciled amounts. Interviewed OMB and Treasury officials to determine what roles, if any, OMB and Treasury play in overseeing and monitoring the government’s collection of civil debt. Did not independently verify the completeness or accuracy of the subsidiary system data or test information security controls over the systems used to compile these data. Provided CMS, HHS OIG, DOJ’s EOUSA, OMB, and Treasury with a draft of our detailed briefing slides, which contained recommendations to the Administrator of CMS, for review and comment. The comments received are discussed on the “Agency Comments” slide or incorporated into the slides as applicable. Performed our review in Washington, DC, and Atlanta, GA from March 2001 through August 2001 in accordance with U.S. generally accepted government auditing standards. Fraud and abuse detection activities were expanded from FY 1995 through FY 1997, which significantly increased fraud and abuse debts in FY 1997. This was the primary reason for the growth of CMS’ CMP receivables. In 1995, under authority to use trust fund money to develop or demonstrate improved methods for investigating and prosecuting fraud, HHS launched Project Operation Restore Trust (ORT). ORT targeted fraud and abuse in three high-growth areas of the health care industry: home health agencies, nursing homes, and durable medical equipment suppliers. (DOJ is responsible for collection activity for the majority of the fraud and abuse debt.) In 1996, the passage of the Health Insurance Portability and Accountability Act (HIPAA) expanded funding for HHS’ fraud and abuse detection activities. In FY 1996, HHS reported that ORT combined with the upgraded funding provided by HIPAA would enable HHS to more aggressively detect and prevent fraud, waste, and abuse.
With the establishment of HIPAA, HHS OIG and DOJ jointly implemented, by January 1, 1997, the Fraud and Abuse Control Program to combat fraud and abuse committed against health plans (both public and private). DOJ’s EOUSA stated that DOJ’s health care fraud activities were expanded in FY 1996 and FY 1997, and, with the implementation of the Health Care Fraud Tracking Forms in December 1996, DOJ began submitting health care fraud debts to CMS during FY 1997. Prior to this time, only fraud and abuse debts submitted by HHS OIG were recorded and reported by CMS. CMS’ accounting records revealed that about $255 million of the $260 million CMP receivables balance as of September 30, 2000, related to fraud and abuse debts. Of the fraud and abuse debts, about $172 million remained outstanding from FY 1997. Our analysis of CMS’ CMP receivables data revealed financial accountability and reporting issues similar to those identified for non-CMP receivables by CMS’ external financial statement auditors. Nonetheless, using HHS’ accountability reports, the following is a summary of CMS’ key CMP financial information for FY 1997 through FY 2000: CMS’ outstanding CMP receivables increased from about $41 million, as of September 30, 1996, to about $260 million, as of September 30, 2000. CMS annually reserved, in an allowance account, from 14 to 29 percent of the outstanding CMP receivables balance for the estimated amounts that it deemed collection was doubtful. There were no write-offs of CMS’ CMP receivables during the 4-year period. CMS does not have policies and procedures requiring it to compare CMP receivables reported in its audited financial statements and HHS’ accountability report. The following differences between such reported balances have been identified. According to HHS, CMS is the only HHS component that has CMP. However, FY 1997 through FY 1999 ending balances reported for CMP receivables differed by tens of millions of dollars between HHS’ accountability reports and CMS’ audited financial statements. The FY 2000 beginning balance for CMP receivables in HHS’ draft accountability report was adjusted by approximately $50 million to agree with CMS’ accounting records. As a result of the adjustment, the beginning balance in HHS’ FY 2000 accountability report differed from the ending balance in its FY 1999 accountability report. After we brought this difference to HHS’ attention, a statement identifying the difference was added to HHS’ FY 2000 accountability report’s overview; however, the statement did not explain the cause of the difference. According to HHS OIG’s Office of Audit Services, no detailed audit work on CMP receivables has been performed by CMS’ external financial statement auditors due to the small balance of CMS’ CMP receivables in relation to CMS’ total accounts receivables, which consist primarily of non-CMP Medicare receivables. As of September 30, 2000, CMS’ CMP receivables were reported to be about $260 million, or 3 percent, of total reported accounts receivables of approximately $8.1 billion, of which non-CMP Medicare receivables totaled more than $7.7 billion. Similar to the accountability and reporting issues reported for non-CMP receivables by CMS’ external financial statement auditors, CMS also does not have policies or procedures for reconciling CMP accounts receivables balances to detailed support maintained in the subsidiary systems. [Slide table: differences between the general ledger and the detailed subsidiary systems as of September 30, 2000, in thousands of dollars: $(17,422) for LTC; $28,835, $(34,950), and $1,437 for the remaining groups; net difference $(22,100).] SFFAS No. 1, Accounting for Selected Assets and Liabilities, and SFFAS No.
7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, state that a receivable should be recognized once amounts that are due to the federal government are assessed, net of an allowance for uncollectible amounts. However, CMS stated that beginning in FY 1999, all new LTC CMP receivables are no longer recorded in FACS until a collection is made due to the unreliability of the accounts receivable amounts in CMS’ subsidiary system, CMPTS. CMS stated that the LTC CMP receivables would be reviewed for validity as part of the planned upgrade of CMPTS. As a result, FACS does not include all potentially valid LTC CMP receivables. This appears to be the primary reason for the LTC difference between FACS and the subsidiary system. According to CMS’ accounting records, 2 of the 12 selected delinquent debts totaling about $24 million were the responsibility of HHS OIG. However, upon further research by DOJ, these debts were actually the responsibility of DOJ and were included in DOJ’s subsidiary system on September 30, 2000, as uncollected. This misclassification appears to explain a portion of the difference between FACS and HHS OIG’s and DOJ’s subsidiary systems. CMS does not have formal written policies and procedures for determining its allowance for uncollectible accounts. CMS stated that the allowance represents the balance of all CMP debt delinquent for 60 days or longer that is considered to be inactive and truly delinquent based on a case-by-case review of each receivable. According to SFFAS No. 1, losses due to uncollectible amounts should be measured through a systematic methodology with an analysis of both individual accounts and a group of accounts as a whole. Individual account analysis should be based on factors such as the debtor’s ability to pay and the probable recovery of amounts from secondary sources. Group analysis should be performed using a method such as statistical estimation by modeling or sampling and should take into consideration such factors as historical loss experience and recent economic events. However, CMS’ allowance for uncollectible accounts is not based on a systematic analysis of the collectibility of the outstanding receivables balance. For the 12 selected delinquent debts with receivable balances totaling about $70 million, 7 debts totaling approximately $32 million were recorded incorrectly in FACS. CMS failed to remove four of the debts totaling about $23 million from CMP receivables even though collections of these debts were received prior to September 30, 2000. CMS incorrectly classified three of the debts totaling about $9 million as delinquent, instead of current, even though collections were being received in accordance with the due dates of the respective payment plans. For the 12 selected delinquent debts, documentation supporting one of the debts with a balance of about $3 million could not be located by DOJ. CMS stated that Division of Accounting staff responsible for the recording of information in FACS use notes and knowledge gained during training provided by staff previously responsible for the duties to record CMP receivables in the general ledger. However, these informal policies and procedures do not (1) contain specific guidance on recording due dates for payments being made through payment plans or recording collections against established receivables and (2) address control procedures to ensure the accurate recording of CMP receivables in the general ledger, such as review and approval of transactions by a supervisor.
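For contrast with CMS' case-by-case approach, the following is a minimal sketch of the kind of group analysis SFFAS No. 1 describes, applying historical loss rates to receivables grouped by age. The aging buckets, balances, and loss rates below are illustrative assumptions, not CMS data.

```python
# Illustrative SFFAS No. 1-style group analysis: estimate an allowance for
# uncollectible accounts by applying historical loss rates to aging buckets.
# All balances (in thousands of dollars) and loss rates are hypothetical.

aging_buckets = {          # outstanding receivables by days delinquent
    "current": 120_000,
    "1-60 days": 45_000,
    "61-180 days": 30_000,
    "over 180 days": 65_000,
}

historical_loss_rates = {  # share of each bucket historically written off
    "current": 0.02,
    "1-60 days": 0.10,
    "61-180 days": 0.35,
    "over 180 days": 0.70,
}

allowance = sum(
    balance * historical_loss_rates[bucket]
    for bucket, balance in aging_buckets.items()
)
net_receivables = sum(aging_buckets.values()) - allowance

print(f"Estimated allowance:           {allowance:,.0f}")
print(f"Receivables, net of allowance: {net_receivables:,.0f}")
```

Under SFFAS No. 1, a group estimate like this would be combined with individual analysis of large accounts, considering factors such as the debtor's ability to pay and probable recovery from secondary sources.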
The status of receivables—current, delinquent, or paid—should be properly noted since it affects the accuracy of the allowance for uncollectible accounts, which is netted against gross CMP receivables reported on HHS’ and CMS’ financial statements. In addition, these errors could possibly have been avoided if there had been appropriate review and approval of such transactions. Debt collection policies and procedures have been established by CMS for the LTC debt and by HHS OIG and DOJ for the fraud and abuse debt. However, incomplete and unreliable CMP information limited us from determining the overall adequacy of the CMP debt collection policies and procedures. As a result and as agreed with your staff, we performed limited tests of CMS’ debt collection policies and procedures and found the following. For 11 of the 12 selected delinquent debts, DOJ followed its debt collection policies and procedures. For four cases, DOJ had followed its procedures and had collected the debts in full. For three cases, DOJ was following its procedures for collecting the debts in accordance with the respective payment plans. Four cases remained delinquent, but DOJ was following its procedures for pursuing collection of unpaid debt. We were unable to determine whether DOJ followed its debt collection policies and procedures for one case since DOJ was unable to locate supporting documentation. For LTC CMP debt collection policies and procedures, we noted one matter in which such policies and procedures can be strengthened. The matter relates to CMS often settling at discounts that exceeded the 35-percent threshold. At the three selected regional offices, CMS reduced the assessed LTC CMP amounts more than 35 percent for 89 out of 215 cases (41 percent), or about $8.4 million out of about $11.4 million (73 percent) in assessments settled in fiscal years 1999 and 2000. For the 89 cases, a 35-percent discount on approximately $8.4 million in assessments results in possible collections of about $5.5 million. However, CMS actually discounted these cases in total by about 69 percent, reducing potential collections by about $2.9 million. According to CMS, other matters can develop while an administrative hearing is pending that can affect the settlement amount, such as unavailability of witnesses and new information related to the deficiencies. In these cases, according to CMS, it may be in CMS’ best interest to settle for less, given the cost of litigation and the risk of not collecting anything. However, CMS does not have debt collection policies and procedures for instances in which a discount greater than 35 percent is allowed. Of the three selected regional offices, one regional office was maintaining documentation to support that such settlements were warranted, while two regional offices were not maintaining documentation. Consistent with good management practices and the Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999), when exceptions to a stated management policy occur, typical control practices are to document, review, and approve such exceptions to ensure that management’s objectives are being met. Beginning with fiscal year 1997, CMS and HHS have annually disclosed CMP receivables information in their financial reports. In accordance with requirements of DCIA and Treasury guidance, CMS reports receivables information, including CMP, to Treasury quarterly in the Report on Receivables Due from the Public.
The information reported to OMB and Treasury needs to be considered in light of the reliability issues we identified (see slides 27-37). OMB and Treasury are provided with information useful in performing CMP debt oversight roles. However, in discussions with OMB officials, they emphasized that OMB’s oversight is broad and consists of monitoring and evaluating governmentwide credit management, debt collection activities, and federal agency performance. OMB also stated that it is the specific responsibility of the agency Chief Financial Officer and program managers to manage and be accountable for the debt collection of their agency’s credit portfolios in accordance with applicable federal debt statutes, regulations, and guidance. OMB further added that it is the role of each agency to specifically monitor and collect its civil penalty debt regardless of dollar magnitude and the responsibility of each agency’s OIG to provide oversight through audit of the agency’s debt collection activities. DCIA requires federal agencies to transfer eligible nontax debt or claims over 180 days delinquent to Treasury for collection action. DOJ stated that referral to Treasury was one type of debt collection tool used by USAOs when pursuing collection of fraud cases. However, CMS is not referring LTC CMP debt to Treasury for collection actions. CMS stated that it plans to refer eligible LTC CMP debts to Treasury in the future and is currently researching the issue. HHS OIG stated that, as of September 30, 2000, it did not have any eligible delinquent CMP debt for the cases in which the OIG tracks collections. Treasury stated that it relies on the agencies to determine what debt should be referred to Treasury for collection action, as required by DCIA. The expanded fraud and abuse detection activities and the resulting growth in fraud and abuse debt are the primary reason for the increase in CMP receivables over the last several years. In addition, our work found financial accountability and reporting issues similar to those reported for non-CMP receivables and found that a CMP debt collection policy and procedure can be strengthened. As long as CMP receivables continue to be considered immaterial in the judgment of CMS’ external financial statement auditors, minimal audit coverage will be provided in this area. Therefore, CMS management needs to take steps to improve the accounting and reporting of CMP receivables. We recommend that the Administrator of CMS take the following actions. Establish and implement formal written accounting and reporting policies and procedures for (1) comparing CMP receivables reported in CMS’ audited financial statements and HHS’ accountability report, (2) reconciling CMP receivables between CMS’ general ledger and the detailed subsidiary systems, (3) recording LTC receivables in FACS, since LTC CMP receivables currently are not recorded in FACS until a collection is made, and (4) ensuring the accurate recording of information into FACS. Determine an approach for assessing the collectibility of outstanding amounts so that a meaningful allowance for uncollectible accounts can be reported and used for measuring debt collection performance. In addition, establish formal written policies and procedures to ensure that the allowance for uncollectible CMP debts is properly determined using such an approach. Establish and implement formal written debt collection policies and procedures for (1) handling instances in which a discount greater than 35 percent is allowed, including the documentation, review, and approval of such settlements, and (2) referring eligible LTC CMP debt to Treasury as required by DCIA.
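The DCIA referral test behind the last recommendation reduces, at its core, to a date comparison. The sketch below illustrates it with hypothetical debts; the single 'excluded' flag stands in for DCIA's actual eligibility exceptions (such as debts in litigation or under appeal), which are more involved than shown here.

```python
from datetime import date

# Hypothetical delinquent CMP debts (identifiers and dates are invented).
# DCIA eligibility exceptions are simplified to a single flag.
debts = [
    {"id": "LTC-001", "delinquent_since": date(2000, 1, 15), "excluded": False},
    {"id": "LTC-002", "delinquent_since": date(2000, 8, 1),  "excluded": False},
    {"id": "LTC-003", "delinquent_since": date(1999, 11, 3), "excluded": True},
]

as_of = date(2000, 9, 30)

for debt in debts:
    days_delinquent = (as_of - debt["delinquent_since"]).days
    refer = days_delinquent > 180 and not debt["excluded"]
    action = "refer to Treasury" if refer else "do not refer"
    print(f'{debt["id"]}: {days_delinquent} days delinquent -> {action}')
```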
In commenting on these briefing slides, CMS agreed with all but one of our recommendations. CMS did not agree with our recommendation to establish and implement debt collection policies and procedures for instances in which a discount greater than 35 percent is allowed. According to CMS, flexibility is needed in the settlement process, and issuing policies and procedures on settlements would add rigidity to the process. It was not our intent that a rigid process for determining settlement amounts be implemented. However, consistent with good management practices and the Standards for Internal Control in the Federal Government, when exceptions to a stated management policy occur, typical control practices are to document, review, and approve such exceptions to ensure that management’s objectives are being met. CMS also stated that the non-CMP issues reported by CMS’ external financial statement auditors have no correlation to the CMP issues discussed in the slides. We disagree. Even though these are two different types of debt, the underlying financial accountability and reporting issues are similar. For example, as discussed on slide 18, the external financial statement auditors reported that non-CMP contractors are unable to reconcile reported ending balances to the contractors’ subsidiary records. Our review also found reconciliation problems with the CMP receivables. As discussed on slide 32, as of September 30, 2000, CMP receivables balances in FACS and the detailed subsidiary systems differed by a net of about $22 million. The following are our comments on the Centers for Medicare and Medicaid Services’ letter dated August 30, 2001. 1. We subsequently combined recommendations 1 and 3 under recommendation 1 to group related topics. CMS agreed with both recommendations. 2. Recommendation 4 was subsequently renumbered as recommendation 3 due to the combining of recommendations 1 and 3. 3. See the “Agency Comments and Our Evaluation” section.
This report focuses on the debt collection processes and procedures used by the Department of Health and Human Services' (HHS) Centers for Medicare and Medicaid Services (CMS). The primary reason for the growth of CMS' civil monetary penalties (CMP) receivables was the expansion of fraud and abuse detection activities from fiscal year 1995 through fiscal year 1997 that significantly increased reported fraud and abuse debts in fiscal year 1997. GAO's analysis of CMS' CMP receivables data revealed financial accountability and reporting issues similar to those identified for non-CMP receivables by CMS' external financial statement auditors. GAO identified (1) unreconciled differences of tens of millions of dollars in the CMP receivables balances reported by HHS and CMS for fiscal years 1997 through 1999 and (2) an unreconciled net difference of about $22 million between the CMP receivables balance in CMS' general ledger and the detailed subsidiary systems as of September 30, 2000. These data reliability issues prevented GAO from determining the overall adequacy of the CMP debt collection policies and procedures. However, GAO's limited tests showed that debt collection policies and procedures were followed for 11 of the 12 selected delinquent debts. GAO could not determine whether debt collection policies and procedures were followed for the 12th selected debt because supporting documentation was unavailable.
Conditions in Cuba continue to pose substantial challenges for U.S. assistance. Cuba is a Communist state that restricts nearly all political dissent on the island; tactics for suppressing dissent in Cuba include surveillance, arbitrary arrests, detentions, travel restrictions, exile, criminal prosecutions, and loss of employment. Furthermore, there is no free press in Cuba, and independent journalists and activists are harassed and imprisoned. The Cuban government substantially restricts and controls the flow of information, limiting access to the Internet, cell phones, radio antennas, and other items, and restricting their use through high costs, punitive laws, and the threat of confiscation. Moreover, the government routinely jams all external, non-Cuban broadcasts, including the U.S. government-supported Radio and TV Martí broadcasts. The United States, which maintains an embargo on most trade with Cuba, does not have diplomatic relations with the Cuban government. Consequently, USAID does not work cooperatively or collaboratively with Cuban government agencies, as it does in most other countries receiving U.S. democracy assistance. USAID does not have staff in Cuba, and State does not have staff dedicated to the Cuba democracy program in Havana. USAID and State program staff have been unable to obtain visas to visit Cuba over the past decade, which poses challenges for program implementation, monitoring, and evaluation. In addition, Cuban law prohibits citizens from cooperating with U.S. democracy assistance activities. In December 2009, a subcontractor working for one of USAID's partners was arrested in Cuba while delivering computer equipment to provide Internet access to Jewish communities on the island. He was subsequently sentenced to 15 years in prison for "acts against the independence or the territorial integrity of the state."

Several USAID and State bureaus and offices implement Cuba democracy assistance efforts, including soliciting proposals, competitively awarding funds, and monitoring program implementation. At USAID, the Latin American and Caribbean Bureau (LAC), Office of Cuban Affairs, is chiefly responsible for implementing Cuba democracy assistance efforts. USAID's Management Bureau has various offices that also assist in overseeing program awards and contracts. The Office of Transition Initiatives (OTI) oversaw implementation of a single contract from fiscal years 2009 through 2012. OTI's Cuba program efforts were envisioned from their inception to be temporary, as is typically the case with OTI's programs, which generally aim to provide short-term assistance. At State, the Bureau of Democracy, Human Rights, and Labor (DRL) is responsible for managing and overseeing the majority of State's Cuba democracy assistance program activities. The Bureau of Western Hemisphere Affairs (WHA), including the U.S. Interests Section in Havana, Cuba (USINT), also manages and oversees Cuba program activities. State's Bureau of Administration assists DRL and WHA in the financial management and oversight of Cuba democracy assistance awards.

Reporting on Cuba democracy assistance in 2006, we found that USINT delivered some assistance to independent groups and individuals in Cuba, including assistance provided by USAID- and State-funded awardees. Because of heightened security concerns, USINT no longer has a role in implementing assistance for USAID and State/DRL partners.
However, USINT continues to provide information on conditions in Cuba, facilitates and assists with State/WHA training courses, and supports civil society in Cuba. Table 1 outlines key USAID and State roles and responsibilities for providing U.S. democracy assistance for Cuba. In addition, the National Endowment for Democracy (NED), an independent, nongovernmental organization, funds programs to promote democracy in Cuba through both direct congressional appropriations and with funding that it receives through State/DRL.

Program partners—such as nongovernmental organizations, universities, and development contractors—and subpartners also have roles and responsibilities for Cuba democracy program assistance. USAID and State provide funding for partners through various mechanisms, including grants and cooperative agreements (together referred to as "awards" in this report) and contracts. Partners may also award funding to subpartners to assist them in implementing program efforts. Subpartners may include consultants, subcontractors, subgrantees, and recipients of grants under contract.

USAID and State support democracy for Cuba by providing awards and contracts to partners with objectives related to developing civil society and promoting freedom of information. USAID receives the majority of funding allocated for this assistance, although State has received 32 percent of funding since it started taking part in the program in 2004. Since 2008, USAID and State have awarded more funds to larger organizations with a worldwide or regional presence than to the other two categories of typical awardees: universities and smaller organizations that focus only on Cuba. Under 21 of the 29 recent awards and contracts that we reviewed, partners used subpartners to implement program activities and obligated about 40 percent of the funding associated with these awards and contracts to subpartners. Worldwide or regional organizations provided more than 90 percent of the funding provided to subpartners.

USAID's and State's democracy assistance efforts for Cuba generally focus on developing an independent civil society and promoting freedom of information in Cuba. The overall goal and guiding principle of U.S. democracy assistance for Cuba is to improve citizens' ability to participate in activities affecting their lives and to increase access to information. Efforts to develop Cuban civil society include training in organizational and community development, leadership, and advocacy. Related material assistance may include the provision of books, pamphlets, movies, music, and other materials that promote democratic values. In addition, efforts to promote freedom of information have included the following, among other activities: information technology training for Cuban nationals, ranging from basic computing to blogging; journalism training; support for independent publications; and provision of material assistance. USAID and State officials noted that in recent years, program efforts have included a greater focus on information technology, particularly on supporting independent bloggers and developing social networking platforms on the island. Several partners we reviewed received funding to support international solidarity activities, although agency and partner officials indicated that the program recently has reduced its focus on off-island activities to foster support for democracy in Cuba.
Activities of this type that State/DRL funded in fiscal years 2009 through 2011 included an essay contest for Latin American youths related to Cuba, and solidarity exhibits and documentaries presented outside Cuba for the purpose of bringing awareness globally to Cuban human rights issues and civil society development.

In recent years, USAID and State had few awards or contracts focused solely on such humanitarian assistance as assistance for political prisoners and their families, according to agency officials. USINT officials noted that humanitarian assistance has declined along with a decrease in the number of political prisoners in Cuba. Officials added that USINT itself no longer provides any humanitarian assistance on the island. To broaden reach and impact, Cuba democracy assistance efforts have expanded beyond a focus on traditional activists to include groups such as poor and rural communities, religious organizations, small businesses, and information technology enthusiasts. Typical program beneficiaries also include Cuban community leaders, independent journalists, independent bloggers, women, and youths. Table 2 summarizes information on recent program assistance and target beneficiaries.

In fiscal years 1996 through 2011, Congress appropriated $205 million for Cuba democracy assistance, appropriating 87 percent of these funds since 2004. Increased funding for Cuba democracy assistance was recommended by the interagency Commission for Assistance to a Free Cuba, which was established by President George W. Bush in 2003. Program funding, which peaked in 2008 with appropriations totaling $44.4 million, has ranged between $15 million and $20 million per year during fiscal years 2009 through 2012. For fiscal year 2013, USAID and State reduced their combined funding request to $15 million, citing operational challenges to assistance efforts in Cuba. In fiscal years 1996 through 2011, $138.2 million of Cuba democracy funds were allocated to USAID and $52.3 million were allocated to State (see fig. 1). When the Cuba democracy program began in 1996, USAID was the only agency involved and USAID/LAC was the only programming bureau. USAID/LAC has received the largest total amount of program funding and has continued to receive the largest annual amount, averaging $12.1 million annually since fiscal year 2004. USAID/OTI received program funding totaling $14.3 million from the appropriations for fiscal years 2007 through 2010. State has received 32 percent of Cuba democracy funding since fiscal year 2004. State/DRL has received an average of $5.8 million annually since fiscal year 2004. State/WHA has received an average of $1.4 million annually since fiscal year 2008.

USAID and State have awarded funding for Cuba democracy assistance to three categories of partners: (1) Cuba-specific nongovernmental organizations (NGO), (2) worldwide or regional organizations, and (3) universities. USAID's and State's awards and contracts tended to share certain characteristics, such as their broad objectives and amounts awarded, depending on the type of partner. Objectives. USAID and State awards and contracts to Cuba-specific NGOs and to worldwide or regional organizations have generally funded similar types of program activities, such as efforts to provide training and material assistance on the island. Awards to universities have tended to have different objectives.
In the early years of the Cuba program, awards to universities funded activities such as research on how to promote a democratic transition in Cuba and scholarships to study at universities in the United States. Since the mid-2000s, after finding that the Cuban government would not provide exit visas for Cuban students to study in the United States, USAID and State have awarded funding to universities largely for programs to provide distance learning training to Cubans on the island or courses at universities in other Latin American countries.

Amount of award or contract. USAID's awards and contracts in fiscal years 1996 through 2012 averaged $1.9 million for Cuba-specific NGOs and $2.1 million for worldwide or regional organizations. State's awards and contracts averaged $0.8 million for Cuba-specific NGOs and $0.9 million for worldwide or regional organizations. Both USAID's and State's awards and contracts to universities averaged $0.8 million.

In fiscal years 1996 through 2012, USAID and State had a combined total of 111 awards and contracts to 51 partners representing all three types of organizations (see fig. 2). Many of the awards were concentrated among certain partners, with 25 of these partners receiving multiple awards from USAID, State, or both. For example, one partner received a combined 11 awards from USAID and State, more than any other partner, and 10 of the 51 partners received 67 percent of total funding to partners. Since fiscal year 2008, regional or worldwide organizations have had more active USAID and State awards and contracts each year, and have received more funding, than Cuba-specific NGOs or universities. Prior to 2008, Cuba-specific NGOs had more active USAID awards than the other categories of recipients in most years. However, the program's partners have consistently included worldwide or regional organizations, some of which have a history of working on Cuba issues (see fig. 3). For example, for awards that began in fiscal years 1996 through 2007, Cuba-specific NGOs received 48 percent of award funding, worldwide or regional organizations received 43 percent, and universities received 9 percent. In contrast, for awards made since fiscal year 2008, worldwide or regional organizations received 74 percent of award and contract funding, while Cuba-specific NGOs received almost 17 percent, and universities received almost 10 percent. As we previously reported, this greater use of worldwide or regional organizations, which began in 2008, reflected more formal requirements for submitting proposals and USAID's decision to fund awards and contracts that incorporate capacity building for subpartners as an important element.

Many partners, and worldwide or regional organizations in particular, use subpartners to help carry out their Cuba democracy assistance work. We reviewed 29 recent awards and contracts to determine the extent to which partners use subpartners to implement program activities. We found that partners used subpartners under 21 of the 29 awards and contracts, obligating about 40 percent of the funding under these awards and contracts to subpartners. On average, partners that used subpartners under an award or contract had 12 subpartners. However, the numbers of subpartners under each of the 21 awards varied:

Four awards had one subpartner.

Seven awards had between two and nine subpartners.

Ten awards and contracts had more than 10 and up to 38 subpartners.

The purposes of subawards and subcontracts also varied greatly.
Many subawards and subcontracts were for discrete activities, such as to conduct workshops. Other subawards and subcontracts covered an array of tasks, such as content development and instruction for a distance learning course or development, training, and support for civil society networks. Accordingly, subpartners included different types of nonprofit and for-profit organizations, as well as individuals working as consultants, who provided the skills necessary to implement the varying activities. Furthermore, the amount of funding that went to subpartners ranged from less than $5,000 to several hundred thousand dollars. For six of the 21 awards and contracts with subpartners, the majority of program funding was obligated to subpartners. In such cases, subpartners generally performed all or most of the programmatic functions under the overall award or contract, while the partners' main functions were to provide strategic direction of the overall award or contract and to perform management functions such as reporting to the agency and overseeing their subpartners. Worldwide or regional organizations were more likely to use subpartners than were the other categories of organizations. In total, 93 percent of the subawards and subcontracts were awarded by worldwide or regional organizations. Also, on average, worldwide or regional organizations had 12 subpartners for each of their awards or contracts, while Cuba-specific NGOs had three and universities had five. Correspondingly, five of the six partners that obligated the majority of their funding to subpartners were worldwide or regional organizations.

USAID and State legal officials view the Cuba program's authorizing legislation as providing the purposes for which foreign assistance funds may be used and allowing discretion to determine which program activities will be funded. The officials stated that they view the types of activities listed in section 109(a) of the Helms-Burton Act as illustrating, not limiting, the types of program assistance that the agencies can provide. Specific authority for Cuba democracy assistance activities was provided in section 1705 of the Cuban Democracy Act of 1992 and in section 109(a) of the Helms-Burton Act in 1996. Section 1705 authorizes the donation of food to NGOs and individuals in Cuba; exports of medicines and medical supplies, instruments, and equipment; and assistance to appropriate NGOs to support efforts by individuals and organizations to promote nonviolent democratic change in Cuba. Section 109(a) authorizes assistance and other support that may be provided, such as published informational matter for independent democratic groups, humanitarian assistance for victims of political repression and their families, and support for democratic and human rights groups. USAID and State legal officials said that the agencies ensure that program activities directly relate to democracy promotion as broadly illustrated in related program legislation. For example, the officials noted that the types of activities that fit within the scope of "democracy promotion," as that term has been broadly defined in various foreign assistance appropriations, would be the types of activities eligible for funding under section 109(a) of the Helms-Burton Act.
They added that, while the agencies have not compiled a list of activities that will be approved or not approved for funding under the Cuba program, proposed or approved activities are set forth in agency congressional notifications and listed in individual requests for proposals or applications and in award agreements and contracts. In addition, they said that organizations are expected to work with agency program officers to determine what activities are permitted or appropriate, and whether Department of Treasury and Commerce authorizations, as required, already exist for delivery of various types of assistance or whether the organization must instead apply for a license. Furthermore, they noted that program partners and subpartners, including subpartners based in other countries, are expected to spend U.S. government funds consistent with U.S. laws and that requirements in primary award agreements and contracts generally flow down to any subpartners.

Since 2008, USAID has worked to improve performance and financial monitoring of its Cuba program partners. However, we found gaps in State's financial monitoring efforts. For performance monitoring, we found some deficiencies in the performance planning and reporting conducted by USAID's and State's partners in our nongeneralizable sample, but both agencies are taking steps to improve their performance monitoring. For financial monitoring, USAID has hired an external auditor to perform financial internal controls reviews of its partners and has used a risk-based approach considering criteria such as award value and prior issues identified to determine the coverage and frequency of the 30 reviews the auditor has conducted. These reviews have identified financial management, procurement, and internal controls weaknesses that USAID has taken steps to address. While State conducted no financial internal controls reviews for at least two-thirds of its partners between fiscal years 2010 and 2012, State recently hired an external auditor to perform such reviews starting in fiscal year 2013 and has taken steps to implement a risk-based approach to prioritize the scheduling of its reviews. Specifically, State plans to complete reviews for three-quarters of State/DRL's partners and none of State/WHA's partners. In addition, in accordance with federal regulations, the agencies approve partner requests to award funding to specific subpartners. In June 2011, USAID provided specific written guidance to its partners on what USAID requires for approval of subpartners. State has provided limited written guidance on approval to some partners, which does not clearly inform partners of the specific types of information State requires for approval. As a result, State was not provided with the detailed information that officials told us would have been required for State to have approved 91 subawards and subcontracts that were obligated under eight of its recent awards.

USAID has taken steps to improve its ability to monitor its Cuba program partners' performance by working with them to improve their performance planning and reporting. USAID has numerous requirements for partners' performance planning and reporting, the key elements of which are summarized below. Performance Planning: USAID directs its Cuba program partners to establish monitoring and evaluation (M/E) plans that include certain specific characteristics. USAID works with partners to include more detailed information on indicators in their M/E plans.
USAID/LAC also required its one contractor to perform data quality assessments on its performance data. Performance Reporting: USAID requires awardees to submit progress reports on a quarterly basis and requires contractors to submit monthly and annual progress reports, among others. USAID uses information in these performance reports to track the progress of individual awards and contracts and to track the progress of the overall Cuba program. According to USAID officials, USAID first reviews the reporting to compare it against targets set in partners' M/E plans. In addition, USAID analyzes and aggregates the information reported by partners to track performance for USAID's Cuba program and to report to State's Office of U.S. Foreign Assistance Resources on government-wide performance.

We reviewed the M/E plans and progress reports for the five USAID awards or contracts in our nongeneralizable sample, which began in fiscal year 2008 or 2009 (see table 3). We found some weaknesses in the partners' M/E plans but found detailed reporting against indicators in the progress reports we reviewed. For example, our analysis indicates that all M/E plans we reviewed included clearly defined indicators for program activities. However, not all partners specified targets and data collection methods for each indicator in their M/E plans. Establishing targets for indicators during the planning stage is important because targets form the standard against which actual results will be compared and assessed. Specifying data collection methods for each indicator enables the agency to determine whether it will be realistic for the partner to measure performance for that indicator in a timely manner. One partner included no information on data collection methods in its M/E plan, while another partner included only general information on its planned data collection methods, such as by stating that the data would be collected by subpartners. Based on review of the partners' progress reports, we found that all partners in our sample reported to USAID on progress through quantitative updates against each indicator, allowing USAID to gauge the specific progress made during each reporting period. One partner also reported progress for each individual subpartner, including the number of each subpartner's beneficiaries disaggregated by target group. Partners' progress reports also provided narrative information describing program activities, challenges encountered, and planned activities for the next reporting period.

Although we found some gaps in these partners' performance planning, USAID/LAC has been working to improve the quality of performance information that it receives from its partners, with a particular emphasis since 2010 on improving their M/E plans. To improve M/E plans and partners' reporting based on those plans, in 2010, according to USAID, USAID/LAC conducted in-depth assessments of each of its partners' M/E plans to determine whether they included indicator tracking tables, definitions of indicators, data collection responsibilities, data quality limitations, and other key information. Also, in September 2010, USAID/LAC hired an M/E contractor to work with each of its partners to further improve and standardize their performance management systems. The M/E contractor has worked with partners to identify and track the most appropriate indicators, including any applicable standardized indicators that USAID/LAC can aggregate across the partners to determine its own overall progress.
In 2011, this M/E contractor also provided training to each partner and helped them to improve their M/E plans, for example by specifying quarterly targets and data collection plans for each indicator. According to M/E contractor representatives, partners' performance planning has improved, although additional improvement is needed in the quality of some partners' data. In fiscal year 2013, the M/E contractor plans to perform data quality audits of the partners.

State has also made some recent improvements to performance monitoring of its Cuba program, in the areas of both performance planning and reporting. State's requirements for performance planning and reporting include the following: Performance Planning: State/DRL and State/WHA have provided different requirements for prospective partners regarding elements of M/E plans. State/DRL. In 2010, State/DRL increased the level of requirements for prospective partners' M/E plans through the request for proposal (RFP) it issued that year. Previously, State/DRL required that prospective partners submit an M/E plan but did not specify characteristics that the M/E plan should include. The RFP issued in 2010 specified that M/E plans should include a baseline and target for each indicator and data collection methods and sources, among other characteristics. In addition, the RFP referenced M/E guidance available on State/DRL's website that included more details on how to develop an effective M/E plan. State/WHA. For the State/WHA award in our sample, the RFP issued in fiscal year 2010 required the prospective partner to submit an M/E plan outlining performance indicators, sources and means for verification, risks and assumptions for goals and objectives, and expected results and activities. For WHA's most recent RFPs issued in fiscal year 2012, State/WHA also included additional guidance for prospective partners' M/E plans, for example by defining indicators and providing a sample M/E plan. In addition, State/WHA further clarified that all indicators in M/E plans must include measurable, numerical targets. Performance Reporting: Both State bureaus require their partners to submit quarterly progress reports. According to State/DRL and State/WHA officials, they review partners' quarterly reports against the partners' planned performance to confirm that the awards are making progress toward established targets and that activities align with the award's objectives. State officials then analyze the quarterly progress reports to aggregate progress for each of its bureaus with USAID's progress to be able to report government-wide performance for the Cuba democracy program to State's Office of U.S. Foreign Assistance Resources.

State's partners that we reviewed varied in the amount and kind of detail they included in their M/E plans as well as in their progress reports. For the State awards in our sample, we found that State/WHA's partner had the most clearly defined M/E plan (see table 4). This partner's M/E plan included specific and clearly defined indicators, targets against which the partner could measure its performance, and clear plans for data collection. For the four State/DRL awards, the partners included indicators in their M/E plans but did not define them. We also found that one partner with three State/DRL awards did not set clear targets for a number of its indicators.
In addition, two partners identified some data collection methods in their M/E plans but did not clearly identify which methods would be used to collect data for each individual indicator. We also found that, for State/WHA's award, progress reports included detailed reporting against each indicator, as well as additional qualitative and quantitative information on overall progress, including survey results and statistics. On the other hand, reporting for the State/DRL awards lacked such detail. For example, for one State/DRL award, the partner tracked its performance in quarterly reports for only 3 of the more than 10 indicators in its M/E plan. For three other awards, the partner did not aggregate or track performance against any specific indicators in their progress reports. While not reporting on progress against specific indicators, these partners generally reported anecdotally on the topics covered in the indicators or scattered some performance data throughout their reports.

In September 2012, State/DRL awarded a contract to a firm specializing in M/E, which could address such gaps in its partners' performance planning and reporting. In the area of performance planning, State/DRL has directed the M/E contractor to provide training and technical assistance to its partners to improve their M/E plans, such as to ensure they include information on data collection methods. In addition, State/DRL directed the M/E contractor to develop indicators for all State/DRL partners to report on that meet or surpass data quality standards. This should allow State/DRL to more easily aggregate information on the overall performance of its Cuba program partners.

The partners in our sample had various policies, procedures, and mechanisms in place for monitoring subpartner performance, on which they compiled information to report to the agencies. We found that partners generally required subpartners to report on their activities quarterly or monthly, and at the end of a subaward or subcontract. Some partners also required subpartners to submit trip reports after any travel to Cuba. Other monitoring practices cited by partners included site visits to subpartners and frequent communication via phone, email, or in person. Because of security concerns and limited on-site monitoring in Cuba, partners and subpartners use a variety of methods to verify the delivery of assistance to Cuban beneficiaries. Representatives from USAID's M/E contractor indicated that partners have had difficulty collecting and reporting data because of Cuban beneficiaries' reluctance to maintain and provide specific information in writing (e.g., timesheets, attendance sheets, or other documents naming beneficiaries). However, the M/E contractor has found that partners and subpartners often communicate with beneficiaries through various means. Similarly, according to representatives of partners and subpartners we interviewed, delivery verification methods they used included having future travelers ask beneficiaries how they used assistance, and observing beneficiaries' use of assistance through remote or indirect means—for example, through articles published online that demonstrate that beneficiaries received training. Partners generally aggregate information obtained from such methods in their progress reports to the agencies. However, in certain instances, the monitoring methods selected have limited subpartners' ability to track and report detailed information.
For example, one subpartner reported on an indicator, the number of signatures on petitions, by providing data that it obtained over the phone (instead of through document reviews that would prevent double counting of signatures and allow for other data quality checks). In addition, USAID's M/E contractor has found cases in which data could not be transmitted in a timely manner, preventing reporting on activities in the quarter when they were implemented. As a result, both the partners and the agencies can have difficulty knowing the exact numbers and identities of beneficiaries in Cuba.

Since 2008, USAID has made improvements to financial monitoring of its Cuba program partners. In April 2008, USAID/LAC hired an external auditor to perform financial internal controls reviews of its partners to ensure that they have appropriate internal controls and to review selected transactions under the program to ensure that they are allowable, allocable, and reasonable. Since 2008, this auditor has conducted 30 audits across 13 of the 16 partners USAID funded during fiscal years 2007 through 2010. These audits are in addition to audits performed by USAID's Management Bureau and its Inspector General. Across its different auditing entities, USAID has a goal of reviewing each partner approximately once every 6 months. Other risk-based factors are considered in the scheduling and sequencing of reviews, such as preaward reviews, prior audit findings, and period of performance.

Through December 2011, the external auditor had found 50 instances of unsupported costs, such as insufficient documentation and lack of authorization, and 15 instances of excessive costs charged to an award or contract, such as charging incorrect rates or expenses not allocable to the award or contract. In sum, the external auditor questioned 11 percent of the charges made to USAID/LAC during the external auditor's periods of review. In addition, the external auditor found inadequacies in the following three main areas: financial management systems at 11 partners, procurement standards at 8 partners, and internal controls at 8 partners. Two of the external auditor's most common specific findings were that (1) partners did not properly complete their quarterly financial reports, and (2) partners did not perform a cost-price analysis before procuring a subpartner or equipment to ensure that it was procured at a fair price. Specifically, the external auditor found that four partners did not provide any documentation of a cost-price analysis. In addition, the auditor found that another four partners had insufficiently completed or documented cost-price analyses, either by performing them verbally but not documenting them or by having unorganized or unexplained documentation of the cost-price analysis, limiting the external auditor's ability to confirm the reasonableness of the costs in question.

As a result of the external auditor's findings, USAID/LAC has provided training to partners and, according to USAID/LAC's external auditor, partners have made improvements. First, USAID/LAC asked the external auditor to provide briefings to the Cuba program partners at their December 2008 and March 2011 quarterly meetings on topics such as unallowable expenses, internal control standards, and procurement regulations. In addition, our review of audit reports issued from 2008 through 2011 showed that the external auditor found fewer inadequacies at some partners that had previously been audited.
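As one way to picture the risk-based sequencing of reviews described above, a simple weighted score can rank partners by review priority. The Python sketch below is a hypothetical illustration, not USAID's or its external auditor's actual model; the factor names and weights are assumptions chosen to mirror the kinds of criteria this report mentions (award value, prior findings, and time since the last review).

from dataclasses import dataclass

@dataclass
class Partner:
    name: str
    award_value: float        # total dollars awarded (hypothetical)
    prior_findings: int       # deficiencies found in earlier audits
    months_since_review: int  # time elapsed since the last review

def risk_score(p):
    # Weight prior findings most heavily; scale award value to millions
    # so the three factors are roughly comparable in magnitude.
    return (0.5 * (p.award_value / 1_000_000)
            + 2.0 * p.prior_findings
            + 0.25 * p.months_since_review)

partners = [
    Partner("Partner A", award_value=3_500_000, prior_findings=4, months_since_review=14),
    Partner("Partner B", award_value=800_000, prior_findings=0, months_since_review=5),
    Partner("Partner C", award_value=2_000_000, prior_findings=2, months_since_review=9),
]

# Schedule the highest-risk partners for review first.
for p in sorted(partners, key=risk_score, reverse=True):
    print(f"{p.name}: risk score {risk_score(p):.2f}")

The value of such a score is consistency rather than precision: it makes the sequencing of reviews repeatable and documentable, with the weights themselves subject to periodic reassessment.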
An official with USAID's external auditor who is responsible for these audits noted that recent reviews have found that the partners have improved their financial management capacity. USAID/OTI used other processes to regularly monitor the financial performance of its partner under the Cuba democracy program. According to USAID/OTI, USAID/OTI staff worked closely with their Cuba program partner to plan future expenditures and reviewed documentation related to individual subpartners to determine each subpartner's real costs. In addition, USAID/OTI officials maintained a database that the partner updated on a weekly basis, allowing USAID/OTI to monitor all expenditures weekly.

Based on our financial internal controls reviews of the five USAID awards and contracts in our nongeneralizable sample (see table 5 in appendix I), we found that partners' internal controls included (1) policies to prevent the commingling of U.S. government funds, such as unique accounting codes to identify awards and separate bank accounts for U.S. government funds; (2) policy manuals to instruct employees on the proper use of U.S. government funds received through grants and contracts; and (3) procedures to segregate incompatible financial duties. We found that USAID/LAC was overcharged for some overhead and labor expenses by one of USAID/LAC's partners in our sample.

State has not consistently conducted ongoing, in-depth financial monitoring of its Cuba program partners. State's Bureau of Administration is responsible for conducting financial internal controls reviews of State/DRL's and State/WHA's partners. However, the Bureau of Administration has conducted financial internal controls reviews of less than one-third of partners active since fiscal year 2010. For State/DRL awards, the Bureau of Administration conducted one review in each fiscal year for 2010 and 2011 and four in July 2012. It conducted no reviews of State/WHA partners. According to State officials, the Bureau of Administration attempts to conduct financial internal controls reviews at least once during the course of each DRL and WHA award but has not done so for many of its awards because of staffing turnover and constraints in the Bureau of Administration. In September 2012, State/DRL awarded funding to an external auditor to perform financial internal controls reviews in fiscal year 2013. During this fiscal year, State intends for the auditor to perform one review of three-quarters of State/DRL's partners and no reviews of State/WHA's partners. State provided documentation to us showing that, in October 2012, State/DRL worked with the external auditor to develop a preliminary plan to select the ordering of its partners to be reviewed using a risk-based approach that considered criteria such as the value of awards, any prior financial compliance issues identified, and the partners' internal administrative capacity.

We conducted financial internal controls reviews on three partners with five State awards in our nongeneralizable sample (see table 5 in appendix I). These three partners had been recently reviewed by USAID/LAC's external auditor, because they have also been USAID/LAC awardees, and had made internal control improvements in response to the auditor's findings. Similar to our review of USAID's partners, we found that partners had internal control mechanisms in place, including (1) policies to prevent the commingling of U.S. government funds, (2) policy manuals to instruct employees on the proper use of U.S.
government funds, and (3) procedures to segregate incompatible financial duties. However, State has had eight partners with nine State/DRL and State/WHA awards active in fiscal years 2011 or 2012 that have received no financial internal controls reviews during the course of their awards, through either State’s Bureau of Administration or USAID’s external auditor. Our review of the six partners in our nongeneralizable sample found that partners had written policies and procedures for financial monitoring of their subpartners’ use of program funding. For example, partners had risk assessment processes to determine the level of monitoring required for a certain subpartner, depending on that subpartner’s capacity and the type of subaward or subcontract. In addition, some of the partners required certain types of subpartners to provide receipts to document 100 percent of expenses. To test the partners’ application of their financial monitoring policies and procedures, we conducted reviews of 11 subpartners under the six partners in our sample. Generally, all partners maintained the necessary documentation (i.e., receipts, timesheets, authorizations) to support expenses incurred at the subpartner level. We found that partners maintained varying levels of documentation on cost-price analyses performed and that one partner had incomplete documentation for one of its subpartner’s expenditures. For three of the five subpartners with fixed-price subcontracts in our sample, documentation supporting the partners’ cost-price analyses included (1) the actual amounts paid for similar services to subpartners on previous awards, (2) price quotes to procure supplies and equipment from various vendors, or (3) surveys demonstrating the market value of labor paid for different labor categories to substantiate that the amounts were within industry standards. For two of the five subpartners with fixed-price subcontracts within our sample, the partners documented that they believed the price of the subcontract to be fair based on the partner’s prior experience. One subpartner of a USAID/LAC partner submitted its receipts in a foreign language that staff at the partner could not read and provided little explanation of the receipts. USAID and State have no direct relationships with their partners’ subpartners. Partners are responsible for all oversight of their subpartners and for reporting to the agencies any updates and problems related to the subpartners’ work, such as through any quarterly reports and site visits. However, the agencies are generally required to approve any partner requests to award funding to subpartners. USAID has provided guidance to its Cuba program partners on what is required for approval of subpartners, both during the preaward phase and during the course of the award. Preaward phase. According to USAID officials, subpartners can be considered pre-approved during the preaward phase if they are described in award proposals, in accordance with the standard provisions that are referenced in partners’ awards. USAID officials indicated that some of USAID’s partners in the past had interpreted the term “described” to include any reference in a proposal to a subpartner’s activities, even if the proposal did not specify that the work would be completed by a subpartner or provide specific information about the subpartner. 
As a result, USAID learned—through reviews conducted since May 2009 by its external auditor and its own subsequent reviews—that some partners had subcontracts and subawards USAID had not approved. In response, in June 2011, USAID added guidance to its requests for applications (RFA) on the type of information that partners must submit in order to receive prior approval for all types of subpartners. This information includes, for example, the name of the proposed subpartner, a description of the work to be performed by the subpartner under the award, the total estimated cost to be paid to the subpartner, and a detailed, line-item budget. In some cases, USAID accepts this information orally if the partner is concerned that the leaking of this information could compromise the security of individual consultants who travel to Cuba. To provide further clarity to partners on whether or not USAID considers subpartners as approved during the preaward phase, USAID stated in its RFAs issued in fiscal year 2011 that a subaward or subcontract is not considered approved until the USAID Agreement Officer in the Management Bureau signs a letter approving it.

Award phase. For USAID approval during the award phase, the partner must submit all of the same information as required during the preaward phase, as well as a copy of the proposed agreement with the subpartner and documentation of the process through which the subaward or subcontract was procured. In May 2011, to further its understanding of the work to be performed by its subpartners, USAID/LAC set up a technical evaluation committee. For ongoing awards, partners submit proposed subpartners for approval to the committee. The committee may ask for information on proposed subpartner activities, among other things, to ensure that the programmatic content of the work to be performed by subpartners fits in the scope of the overall award.

State's Bureau of Administration is responsible for approving subawards and subcontracts under State/DRL and State/WHA awards, and has requirements similar to USAID's. However, State does not clearly inform its Cuba program partners of these requirements in written guidance and, as a result, some partners have not provided the required information. According to language in State's Standard Terms and Conditions, prior written approval is required for any subawards or subcontracts unless they are described in the application and funded in the approved award. Specifically, State officials told us that the Bureau of Administration requires certain information and documentation to approve subpartners, including a copy of the draft agreement with the subpartner, the amount of and budget for the agreement, the name of the subpartner organization, a description of the subpartner's role, and the period of performance. According to State officials, State's requirements are currently the same whether a partner obtains preapproval during the preaward phase or during the course of the award. They added that, for consultants under State/DRL awards, the Bureau of Administration does not require their names because of security and sensitivity concerns. Instead, according to officials, the Bureau of Administration requires information on the amount of the consultancy contract, the consultant's budget, and a description of the consultant's role and qualifications.
Nevertheless, the detailed information that State told us is required to preapprove subawards, subcontracts, and consultants is not specified in written guidance to all partners. For example, in State's handbook for grant recipients, State instructs recipients to provide details in award proposals on any subpartners but does not specify the type of information to provide. In addition, based on our review of fiscal year 2010 and 2011 award documents, we found that in cases in which State/DRL was made aware of a prospective partner's intention to have a subpartner through review of its proposal, State/DRL included a requirement in the partner's award to provide a copy of the agreement with that specific subpartner to State within 10 days of its execution. However, State/DRL omitted this requirement from awards for which it was not clear the partner intended to use a subpartner. As a result, we found that many partners received only the broad written guidance in State's Standard Terms and Conditions and recipient handbook.

Based on our analysis of the use of subpartners under recent State awards, we found that State had sufficient information to preapprove the subawards and subcontracts under State/WHA's awards. However, State did not have the detailed information that, according to State officials, would be required to approve 91 subawards and subcontracts to which partners obligated funding under 8 State/DRL awards. According to representatives of one of the State/DRL partners, they assumed that State was aware of their subawards and subcontracts and considered them preapproved because the partners' proposals had referenced the types of work to be performed under the award by the subpartners. We found, however, that the partners' proposals only provided general information for all proposed subpartners, such as an estimated total cost aggregated for all subawards and subcontracts. The proposals did not specify information about individual proposed subawards or subcontracts, such as proposed periods of performance, a description of the work to be performed, or copies of draft subpartner agreements. Officials said that State has provided training to grants officers over time to ensure greater consistency in the application of preapproval requirements. However, we interviewed two partners with ongoing State/DRL awards, and both were still unaware of the information required for subpartner approval.

USAID has been implementing Cuba democracy assistance efforts since 1996, and State's role in the program has increased since it began providing assistance several years later, in 2004. More than $200 million has been provided for these efforts over the past 15 years, with recent growth in the use of worldwide and regional organizations that often use subpartners to help implement program activities. Despite ongoing challenges stemming from the difficult operating environment in Cuba, since our 2006 and 2008 reports, USAID has taken steps to improve its performance and financial monitoring of Cuba democracy program awards. While State has also taken initial steps to improve performance monitoring of its Cuba program awards, we found that State's financial monitoring was lacking in certain areas. For performance monitoring of Cuba program partners, both USAID and State have required partners to submit program planning and reporting documents that the agencies use to monitor their partners' implementation of program activities and progress toward program goals.
Although we found some gaps in these efforts, such as instances in which partners did not identify targets in performance plans, lacked clearly defined indicators, or did not report on established indicators, both agencies are taking steps to improve performance monitoring of their partners. Specifically, since 2010, USAID has used an external contractor to enhance its Cuba program monitoring and evaluation efforts. In September 2012, State hired an organization for a similar purpose, with work on this effort slated to begin in fiscal year 2013.

To enhance financial monitoring, in 2008 USAID hired an external auditor to perform financial internal controls reviews of its Cuba program partners and used a risk-based approach to determine how often each partner should be reviewed, enabling more efficient and effective reviews, with resources focused on areas of greater risk. Such an approach considers key factors such as the value of awards, coverage, previously identified deficiencies, award type, and the frequency of the reviews that will be needed. USAID's auditor conducted 30 audits through fiscal year 2012, which identified questionable charges and weaknesses in partners' financial management, procurement standards, and internal controls. State did not conduct financial internal controls reviews for more than two-thirds of its awards during fiscal years 2010 through 2012, although State recently awarded funding to an external auditor for this purpose and has taken steps toward implementing a risk-based approach for these reviews. However, because these actions were taken recently, State's ability to ensure that funds are being spent as intended remains unknown until it has completed these reviews. Moreover, unlike USAID, State has not provided clear guidance to its partners regarding requirements for subpartner approval. As a result, State lacks complete and accurate information on its partners' use of subpartners to implement program efforts. Without adequate information on program subpartners, State has limited ability to fully understand and assess its partners' use of program funds.

To strengthen State's ability to monitor the use of Cuba democracy program funds, we recommend that the Secretary of State take the following two actions:

To enhance financial oversight, use a risk-based approach for program audits, including those conducted by an external auditor, that considers, among other factors, specific indicators—such as value of awards, prior deficiencies, oversight coverage, and frequency—for each of State's Cuba program partners.

To obtain sufficient information to approve implementing partners' use of subpartners, provide clear guidance to implementing partners regarding requirements for approval of the use of subpartners, and monitor implementing partners to ensure that they adhere to these requirements.

We provided a draft of this report to USAID and State for review and comment. Their written comments are reproduced in appendixes II and III, respectively. USAID noted that it is a challenge to implement assistance programs in countries where USAID does not have dedicated staff in-country, and cited its ongoing commitment to ensuring that Cuba democracy assistance programs managed by USAID receive appropriate management and oversight to minimize waste and mismanagement and maximize impact on the ground in Cuba.
USAID highlighted steps that the agency has taken to improve program management, such as dedicating additional resources to conduct financial audits and monitoring of awardees, and conducting preaward audits on organizations with limited or no experience managing USAID-funded projects. USAID expressed appreciation for our recognition of the agency's program improvements. State concurred with both of our recommendations and noted relevant actions it has taken or plans to take. Regarding our recommendation to enhance financial oversight through using a risk-based approach for Cuba program audits, State noted that the external auditor that State/DRL recently procured to audit some of its partners has taken steps to implement a risk-based approach. State further noted that the department is evaluating staffing in its Bureau of Administration and audit requirements to be able to address program oversight needs not covered by this external auditor. State also concurred with our recommendation to obtain sufficient information to approve implementing partners' use of subpartners. State said that it plans to hold meetings with awardees to discuss award requirements and provide an orientation on resources and technical support available to Cuba program awardees. USAID and State also provided technical comments that we have incorporated, as appropriate.

We are sending copies of this report to the Administrator of USAID, the Secretary of State, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

This report (1) identifies the types and amounts of democracy assistance that the United States Agency for International Development (USAID) and the Department of State (State) have provided to Cuba and characteristics of implementing partners, subpartners, and program beneficiaries; (2) reviews USAID's and State's efforts to implement the program in accordance with U.S. laws and regulations and to address program risks; and (3) examines USAID's and State's monitoring of the use of program funds. This report is a publicly releasable version of a prior GAO report, issued in December 2012, that USAID and State had designated Sensitive But Unclassified.

To identify types and amounts of democracy assistance, and characteristics of implementing partners (partners), subpartners, and beneficiaries, we reviewed congressional notifications, agency and partner documents and data on program awards and funding, copies of award agreements and contracts and any modifications, partner interim and final reports, and other key documents and data. We interviewed agency officials and partner representatives to corroborate information and data obtained. We also discussed Cuba democracy assistance with officials at the National Endowment for Democracy. We reviewed amounts of assistance and characteristics of partners that received program funding in fiscal years 1996 through 2012. To test the reliability of funding data, we compiled lists of all funding that went to each partner and sent these lists to USAID and State for verification.
In addition, we reviewed the use of subpartners under all of USAID’s and State’s 29 awards and contracts that were active in fiscal year 2011, which were funded with appropriations from fiscal years 2007 through 2009. These awards and contracts were awarded to 22 partners. We obtained information and data from each of the partners on their use of subpartners under the respective awards and contracts. For the purposes of our review, we defined subpartners as recipients of subawards, including subgrants, grants under contract, subcontracts, or consultants. To test the reliability of subpartner data, we compared information obtained from partners to agency information, and interviewed agency and partner officials regarding any discrepancies. We determined data on program funding and on the use of subpartners were sufficiently reliable for the purposes of this report. To review USAID’s and State’s efforts to ensure that program implementation is consistent with U.S. laws and regulations, and to provide guidance to partners and subpartners regarding program risks, we reviewed relevant U.S. laws and regulations and agency and departmental policies and procedures. We also interviewed USAID and State legal and program officials regarding program implementation and related risks. For selected partners and subpartners, we reviewed award agreements; contracts; and partner guidance, policies, and procedures regarding program security risks and travel security and safety. We also interviewed representatives of partners and subpartners regarding program security risks and traveler safety and security measures. We analyzed reported activities and assistance delivered, and management and internal controls for a nonprobability, nongeneralizable sample of six USAID and State partners and 11 of their subpartners, in order to assess performance and financial monitoring and oversight of their awards and contracts. While the results of our analysis of these six partners are not generalizable to the population, we selected this nonprobability sample to be generally reflective of other partners in the population and to cover a large proportion of the overall dollar value of aid. We selected at least one partner from each of the four USAID and State bureaus and offices implementing program assistance—USAID’s Bureau of Latin American and Caribbean Affairs (LAC); the Office of Transition Initiatives (OTI) within USAID’s Bureau for Democracy, Conflict, and Humanitarian Assistance; State’s Bureau of Democracy, Human Rights, and Labor (DRL); and State’s Bureau of Western Hemisphere Affairs (WHA). While there were in total 29 partners from which we selected, the six selected partners were among the top 15 recipients of program funding awarded in fiscal years 2007 through 2010, and represented about 60 percent of funding for awards active in fiscal year 2011. Other factors that we considered in selecting partners included the timing of the awards and contracts—we selected partners with awards or contracts active in fiscal year 2011—and other strategic factors, such as the type of activity planned under the award or contract and whether the partner had ongoing or new program activities planned for fiscal year 2012 and beyond. We judgmentally selected two subpartners for further review under each of the partners, except for the one partner that only used one subpartner. 
To select subpartners, we considered factors such as funding received, whether the subpartner had recent activity in fiscal year 2011, the type of activity implemented, and other strategic factors. These subpartners are not generalizable to the population, but provided additional program context and examples for the purposes of our review. Table 5 provides additional information on the partners in our sample. To examine USAID's and State's monitoring of the use of program funds, we reviewed and analyzed performance and financial documentation and data and conducted interviews with USAID and State officials, as well as representatives of partners and subpartners in our sample. In addition, we conducted fieldwork in Miami, Florida—where we interviewed representatives and reviewed documentation at local partners and subpartners—and at the U.S. Interests Section (USINT) in Havana, Cuba—where we interviewed U.S. officials and partner representatives, and observed WHA-funded democracy assistance activities at post. We examined USAID's and State's program operational plans and performance progress reports; agency and partner policy and procedure manuals and program guidance; agendas and information presented at quarterly partner meetings; partner and subpartner award agreements and contracts; partner implementation, monitoring and evaluation (M/E) plans; partner and subpartner interim and final performance and financial reports; and audits of partner activities, among other documents and data. To review partners' M/E plans, we assessed whether each plan had three of the basic elements of M/E plans, as described in USAID and State guidance: (1) clearly defined indicators, (2) targets set for each indicator, and (3) data collection methods specified for each indicator. We selected these criteria since they were common elements that M/E plans should have, according to USAID and State guidance. Because we focus on assessing the agencies' and partners' abilities to monitor, not to evaluate, the Cuba democracy program, we did not select criteria for assessing any portions of the M/E plans related to evaluation. We analyzed each M/E plan to determine the extent to which the plan incorporated each element. For example, we determined that an M/E plan partially met the criterion for clearly defined indicators if the plan had indicators but did not provide definitions for the indicators. For targets, we assessed a partner's M/E plan as having partially met the criterion if there were relevant targets specified for some but not all indicators. For data collection methods, we assessed the partner's M/E plan as having partially met the criterion if data collection methods were only described for some indicators or if the plan generally described data collection methods but did not specify which methods pertained to each indicator. To review partners' progress reports, we assessed whether progress was clearly reported against each indicator identified in the M/E plans. We determined that a partner's progress report met this criterion if the report included specific updates on progress for each indicator, with any progress for quantitative indicators (e.g., number of beneficiaries) reported in numeric form. We assessed a partner's progress report as having partially met the criterion if it reported progress in this specific way on some but not all indicators. We assessed a partner's progress report as not having met the criterion if the partner did not clearly report progress on any of its indicators.
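The three-criterion rating logic described above can be summarized in a short sketch. The following Python fragment is an illustrative simplification of the rubric, not GAO's actual instrument; the data structure and field names are assumptions.

MET, PARTIAL, NOT_MET = "met", "partially met", "not met"

def rate(satisfied, total):
    """Rate one criterion by how many of a plan's indicators satisfy it."""
    if total == 0 or satisfied == 0:
        return NOT_MET
    return MET if satisfied == total else PARTIAL

def assess_plan(indicators):
    """indicators: list of dicts with booleans 'defined', 'has_target',
    and 'has_collection_method' (hypothetical fields)."""
    n = len(indicators)
    return {
        "clearly defined indicators":
            rate(sum(i["defined"] for i in indicators), n),
        "targets set for each indicator":
            rate(sum(i["has_target"] for i in indicators), n),
        "data collection methods specified":
            rate(sum(i["has_collection_method"] for i in indicators), n),
    }

# A plan with targets for only some indicators rates "partially met"
# on the targets criterion.
example = [
    {"defined": True, "has_target": True, "has_collection_method": True},
    {"defined": True, "has_target": False, "has_collection_method": True},
]
print(assess_plan(example))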
Additionally, we interviewed officials from the two organizations contracted by USAID/LAC to conduct performance and financial reviews of partners. We reviewed partner and subpartner internal controls and related residual fiscal accountability risk, and also performed walk-throughs of their disbursement processes and reviewed invoices and other supporting documentation. We primarily focused our review on compliance with internal controls standards relating to monitoring of program funds and on reviewing certain control activities. We performed selected expenditure testing at each partner and subpartner in our sample, when applicable, to identify potential internal controls or financial management issues. We also reviewed our previous reports and interviewed experts to identify lessons learned and to better understand challenges related to providing democracy assistance for Cuba. We conducted this performance audit from September 2011 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Leslie Holen, Assistant Director; Elisabeth Helmer, Heather Latta, Joshua Akery, Laura Bednar, Beryl Davis, David Dayton, David Dornisch, Ernie Jackson, Crystal Lazcano, John Lopez, Reid Lowe, and Kim McGatlin provided significant contributions to the work. Etana Finkler and Jeremy Sebest provided technical assistance.
Since 1996, Congress has appropriated $205 million to USAID and State to support democracy assistance for Cuba. Because of Cuban government restrictions, conditions in Cuba pose security risks to the implementing partners—primarily NGOs—and subpartners that provide U.S. assistance. For this report, GAO (1) identified current assistance, implementing partners, subpartners, and beneficiaries; (2) reviewed USAID's and State's efforts to implement the program in accordance with U.S. laws and regulations and to address program risks; and (3) examined USAID's and State's monitoring of the use of program funds. This report is a publicly releasable version of a Sensitive But Unclassified report that GAO issued in December 2012. The U.S. Agency for International Development (USAID) and Department of State (State) provide democracy assistance for Cuba aimed at developing civil society and promoting freedom of information. Typical program beneficiaries include Cuban community leaders, independent journalists, women, youths, and marginalized groups. USAID receives the majority of funding allocated for this assistance, although State has received 32 percent of funding since 2004. In recent years, both USAID and State have provided more funding for program implementation to for-profit and nongovernmental organizations (NGOs) with a worldwide or regional focus than to universities and to NGOs that focus only on Cuba. All types of implementing partners, but worldwide or regional organizations in particular, used subpartners to implement program activities under 21 of the 29 awards and contracts that GAO reviewed. USAID and State legal officials view the Cuba democracy program's authorizing legislation as allowing the agencies discretion in determining the types of activities that can be funded with program assistance. Agency officials added that the agencies ensure that program activities directly relate to democracy promotion as broadly illustrated in related program legislation. The officials stated that organizations are expected to work with agency program officers to determine what activities are permitted or appropriate. In addition, they said that program partners and subpartners are expected to spend U.S. government funds consistent with U.S. laws, and that requirements in primary award agreements generally flow down to any subpartners. USAID has improved its performance and financial monitoring of implementing partners' use of program funds by implementing new policies and hiring contractors to improve monitoring and evaluation and to conduct financial internal controls reviews, but GAO found gaps in State's financial monitoring. While GAO found some gaps in implementing partners' performance planning and reporting, both agencies are taking steps to improve performance monitoring. For financial monitoring, USAID performs financial internal controls reviews of its implementing partners with the assistance of an external auditor. Since 2008, USAID has used a risk-based approach to determine the coverage and frequency of the 30 reviews the auditor has conducted, which have identified weaknesses in implementing partners' financial management, procurement, and internal controls. However, because of resource constraints, State did not perform financial internal controls reviews for more than two-thirds of its implementing partners during fiscal years 2010 through 2012.
In September 2012, State procured an external financial auditor that plans to review more than half of State's implementing partners and has taken steps toward implementing a risk-based approach for scheduling these reviews. Federal regulations generally require agencies to approve the use of subpartners. GAO found that USAID issued specific guidance in 2011 to its implementing partners on requirements for subpartner approval. While State told GAO it has similar requirements, State's requirements are not clearly specified in its written guidance. As a result, State was not provided with the information it would have needed to approve at least 91 subawards and subcontracts that were obligated under eight awards. GAO is recommending that State take steps to improve its financial monitoring of implementing partners and provide clear guidance for approving subpartners. State concurred with GAO's recommendations and cited steps it is taking to address them.
DOD is a massive and complex organization entrusted with more taxpayer dollars than any other federal department or agency. Organizationally, the department includes the Office of the Secretary of Defense, the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are responsible for either specific geographic regions or specific functions. (See fig. 1 for a simplified depiction of DOD's organizational structure.) In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. Moreover, the department's nonintegrated and duplicative systems impair its ability to combat fraud, waste, and abuse. The department recently reported that this systems environment is composed of approximately 2,480 separate business systems. For fiscal year 2010, DOD requested about $15.5 billion in funds to operate, maintain, and modernize its business systems and associated information technology (IT) infrastructure. DOD currently bears responsibility, in whole or in part, for 15 of the 31 programs across the federal government that we have designated as high risk because they are highly susceptible to fraud, waste, abuse, and mismanagement. Eight of these areas are specific to the department, and seven other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas relate to DOD's major business operations that are inextricably linked to the department's ability to perform its overall mission, directly affect the readiness and capabilities of U.S. military forces, and can affect the success of a mission. DOD's business systems modernization is one of the high-risk areas, and it is an essential enabler to addressing many of the department's other high-risk areas. For example, modernized business systems are integral to the department's efforts to address its financial, supply chain, and information security management high-risk areas. The National Defense Authorization Act (NDAA) for Fiscal Year 2008 designated the Deputy Secretary of Defense as the Chief Management Officer (CMO) for DOD and created a Deputy CMO position. The CMO's responsibilities include developing and maintaining a departmentwide strategic plan for business reform; establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness; and monitoring and measuring the progress of the department. The Deputy CMO's responsibilities include recommending to the CMO methodologies and measurement criteria to better synchronize, integrate, and coordinate the business operations to ensure alignment in support of the warfighting mission. The Business Transformation Agency (BTA) supports the Deputy CMO in leading and coordinating business transformation efforts across the department. This includes maintaining and updating the department's enterprise architecture for its business mission area.
The CMO and Deputy CMO are to interact with several entities to guide the direction, oversight, and execution of DOD's business transformation efforts, which include business systems modernization. These entities include the Defense Business Systems Management Committee (DBSMC), which serves as the highest-ranking investment review and decision-making body for business systems modernization activities and is chaired by the Deputy Secretary of Defense; the principal staff assistants, who serve as the certification authorities for business system modernizations in their respective core business missions; the investment review boards (IRB), which are chaired by the certifying authorities and form the review and decision-making bodies for business system investments in their respective areas of responsibility; and the BTA, which supports IRBs and leads and coordinates business transformation efforts across the department. (Table 1 lists these entities and provides greater detail on their roles, responsibilities, and composition.) Since 2005, DOD has employed a "tiered accountability" approach to business systems modernization. Under this approach, responsibility and accountability for business architectures and systems investment management are assigned to different levels in the organization. For example, the BTA is responsible for developing the corporate BEA (i.e., the thin layer of DOD-wide policies, capabilities, standards, and rules) and the associated enterprise transition plan (ETP). Each component is responsible for defining a component-level architecture and transition plan associated with its own tiers of responsibility and for doing so in a manner that is aligned with (i.e., does not violate) the corporate BEA. Similarly, program managers are responsible for developing program-level architectures and plans and for ensuring alignment with the architectures and transition plans above them. This concept is intended to allow for autonomy while also ensuring linkages and alignment from the program level through the component level to the corporate level. (Table 2 describes the four investment tiers and identifies the associated reviewing and approving entities.) Consistent with the tiered accountability approach, the NDAA for Fiscal Year 2008 required the secretaries of the military departments to designate the department under secretaries as CMOs with primary responsibility for business operations. Moreover, the Duncan Hunter NDAA for Fiscal Year 2009 required the military departments to establish business transformation offices to assist their CMOs. Congress included provisions in the NDAA for Fiscal Year 2005 that are aimed at ensuring DOD's establishment and implementation of effective investment management structures and processes. According to the act, DOD is required to develop a BEA; develop an ETP for implementing the architecture; identify each business system proposed for funding in DOD's fiscal year budget submissions; delegate the responsibility for business systems to designated approval authorities within the Office of the Secretary of Defense; require each approval authority to establish investment review structures and processes; and, effective October 1, 2005, not obligate appropriated funds for a defense business system modernization with a total cost of more than $1 million unless the approval authority certifies that the business system modernization meets several conditions.
The NDAA for Fiscal Year 2005 also requires that the Secretary of Defense submit to congressional defense committees an annual report on the department's compliance with the act's provisions. This report is to (1) describe actions taken and planned for meeting the act's requirements, including (a) specific milestones and actual performance against specified performance measures and any revision of such milestones and performance measures and (b) specific actions taken on the defense business system modernizations submitted for certification; (2) discuss specific improvements in business operations and cost savings resulting from successful defense business systems modernization efforts; (3) identify the number of defense business system modernizations certified; and (4) identify any defense business system modernization with an obligation in excess of $1 million during the preceding fiscal year that was not certified as required, along with the reasons for the waiver. Between 2005 and 2008, we reported that DOD had taken increasing steps to comply with key requirements of the NDAA for Fiscal Year 2005 relative to architecture development, transition plan development, budgetary disclosure, and investment review and to satisfy relevant systems modernization management guidance, but that much remained to be accomplished relative to the act's requirements and relevant guidance. Nevertheless, we concluded that the department had made important progress in defining and beginning to implement institutional management controls (i.e., processes, structures, and tools). Notwithstanding this progress, in May 2009, we reported that the pace of DOD's efforts in defining and implementing key institutional modernization management controls had slowed compared with progress made in each of the last 4 years, leaving much to be accomplished to fully implement the act's requirements and related guidance. For example:

The corporate BEA had yet to be extended (i.e., federated) to the entire family of business mission area architectures, including using an independent verification and validation agent to assess the components' subsidiary architectures and federation efforts.

The fiscal year 2009 budget submission included some, but omitted other, key information about business system investments, in part because of the lack of a reliable, comprehensive inventory of all defense business systems.

IT investment management policies and procedures at the corporate and component levels were not fully defined.

The business system information used to support the development of the transition plan and DOD's budget requests, as well as certification and annual reviews, was of questionable reliability.

Business system investments costing more than $1 million continued to be certified and approved, but these decisions were not always based on complete information.

Accordingly, we reiterated existing recommendations to address each of these areas and further recommended that DOD, among other things, improve the quality of investment-related information. DOD partially agreed with our recommendations and described actions being planned or under way to address them. DOD is currently in the process of addressing these recommendations. The act requires that DOD's report describe milestones for business system modernization programs and actual performance against performance measures. In addition, the act requires that the report specify any revisions to milestones and performance measures.
To its credit, DOD’s annual report includes milestones, performance against milestones, and milestone revisions for 76 programs. However, other important performances measures, which typically include measures associated with determining progress against program cost, capability, and benefit commitments, are not included in the report. BTA officials cited various reasons for the scope and content of the information provided and not provided, but these reasons are at odds with other aspects of its report. Without including information on program performance against, and revisions to, such key measures as cost, capability, and benefit commitments, DOD is not providing Congress with the information needed to inform its oversight of business system modernization programs. Consistent with the act’s requirement that DOD report on specific milestones and revisions to the milestones, DOD’s March 2010 report includes a summary of the status of milestones that were to be completed during fiscal 2009, the revisions associated with these milestones (e.g. delayed or deleted), and the reason for the revision. Specifically, the report lists three categories of milestones: Standard acquisition milestones: key events and dates that are provided for under DOD’s system acquisition process. BEA compliance milestones: time frames for addressing specific IRB certification conditions related to ensuring BEA compliance. Interim milestones: key events and dates to supplement DOD’s system acquisition process milestones (e.g., implementing specific system capabilities or upgrading infrastructure by a given date). DOD’s report includes a total of 224 milestones that collectively span 76 programs. Of these 224 milestones, 35 are standard acquisition milestones, 22 are BEA compliance milestones, and 167 are interim milestones. The report also discusses performance against these milestones. Specifically, of the 224 milestones, 56 percent are reported to have been met, while 21 and 23 percent are reported to have been deleted (i.e., determined to be unnecessary) or not met, respectively. (See fig 2.) With respect to acquisition and compliance milestones, the percentage of milestones that were reported as not being met was 66 and 50 percent, respectively. Beyond milestones, the act requires DOD’s annual report to address actual performance against performance measures and any revision of these performance measures. As we have previously reported, meaningful information about program performance is typically measured in terms of program cost, capability, and benefit commitments, in addition to schedule commitments. Through such a range of performance measures, valuable insight into the health and success of a business system investment can be gained. As we previously discussed in this report, DOD’s annual report does report schedule commitments (i.e., milestones) for its modernizing programs. While DOD’s annual report also includes examples of business improvements and costs savings, this program performance information is not reported against performance measure baselines. Further, the report remains silent with respect to other important performance measures such as progress made against cost and capability commitments, which would allow congressional decision makers to understand the extent to which programs are meeting cost, capability, and benefit commitments. BTA officials stated they focused the annual report on programs that had planned milestones during fiscal year 2009. 
Further, they said they focused on program milestones because most of the investments covered by the report have not progressed far enough in their life cycles to measure cost, capability, and benefit performance. In addition, the annual report states that DOD does not include performance measures in its annual report for any system that has reached initial or full operational capability and is no longer modernizing. Nevertheless, the report includes descriptions of a number of programs that have progressed to the point where DOD reports on actual operational efficiencies and dollar savings that have accrued, which, in turn, means that these programs have progressed to the point that DOD can report on progress against defined performance commitments, such as the costs that have been incurred, the capabilities that have been delivered, and the benefits that have been realized. Moreover, the programs that have not yet delivered capabilities or realized benefits have incurred costs, which DOD can report relative to expected costs. Establishing performance measures and monitoring actual-versus-expected performance using the full range of measures are essential to understanding the health of any IT investment. By not including information on each program's performance against defined cost, capability, and benefit commitments, DOD is not providing Congress with important information for informing its oversight of business system modernization investments. The act requires that DOD's annual report discuss specific improvements in business operations and cost savings resulting from successful business system modernization programs. DOD met this requirement by including 18 "case in point" examples in the report. Among other things, each narrative generally describes the program and provides high-level information on system capabilities delivered and benefits achieved to date. Specific examples include the following:

The Air Force Recruiting Information Support System is to be the primary Web-based recruiting system for the Air Force. According to the annual report, the Air Force's legacy recruiting information system has been slow and, at times, unavailable to users. Modernization of the system (e.g., new hardware and software upgrades) is reported to have improved recruiters' productivity by reducing the wait time for processing recruiter requests. Other reported improvements include allowing recruiters to build applicant files off line and upload them at a later time and reducing the recruiter's dependence on Internet connectivity. Future plans include merging the Air Force Recruiting Information Support System with the Air Force Reserve Recruiting System to increase functionality and decrease system response time.

The Army Learning Management System is a Web-based system for training, scheduling, and career planning for soldiers. According to the annual report, in 2009 the system contributed to an 88 percent increase in the number of student accounts, a 111 percent increase in the number of courses offered, and a 157 percent increase in the number of course completions.

Wide Area Work Flow is a Web-based system to centralize and automate DOD's largely manual business payment process. The annual report states that the system has thus far allowed the cost of processing a payment to decrease from between $22 and $30 to between $6 and $12. Other cited benefits include allowing suppliers to have a single point of interface with DOD for payment invoicing, receipt, and acceptance.
The Defense Agencies Initiative is an enterprise resource planning (ERP) system to standardize and integrate enterprise data to support financial decisions in real time. According to the annual report, the system resulted in a reduction in the time it takes to post financial obligations from 60 days to less than 2 days, and a reduction in the time to close out monthly financial reports from 4 days to less than 1 day. Further, the report states that financial information is now available to BTA on a real-time basis and thereby enables proactive management of agency finances. Among other things, the act requires DOD's annual report to describe specific actions the department has taken on each business system modernization investment submitted for certification. More specifically, the act states that such investments involving more than $1 million in obligations must be certified by a designated approval authority as meeting specific criteria, such as demonstrating compliance with DOD's BEA. Further, the act requires the DBSMC to approve each of these certifications. To its credit, DOD's annual report identifies IRB certification actions associated with 116 business system investments. However, certification actions associated with 40 other investments are not included. Further, the bases for several of the fiscal year 2009 system certification actions and subsequent approvals are limited because program weaknesses and issues that our prior work has raised concerning, for example, the systems' economic analyses and BEA compliance determinations are not reflected in the reported certification actions. According to BTA officials, only new certifications are included in the report, even though DOD guidance states that recertifications are also certification actions. Moreover, this guidance does not require programs to disclose program weaknesses and issues raised by us or others. By not fully identifying in its annual report the certification actions taken on all business system modernization investments, DOD is not fully informing congressional oversight. Further, by not ensuring that all certifications reflect known program weaknesses, business system modernization program certification and approval decisions are not being fully informed and thus may not be adequately justified. DOD has established what it describes as a "tiered accountability" approach to meeting the act's requirements for certifying business system investments. Under this approach, investment review begins within the military departments and defense agencies and advances through a hierarchy of review and decision-making authorities, depending on the size, nature, and significance of the investment. For those investments that meet the act's dollar threshold, namely those with planned modernizations in excess of $1 million, this sequence of review and decision making includes component precertification, IRB certification, and DBSMC approval. For those investments that do not meet the dollar threshold, investment decision-making authority remains with the responsible component. According to the department's approach, reviews for modernization investments of more than $1 million focus on program alignment with the BEA; alignment with the department's strategic mission, goals, and objectives; and oversight commensurate with the program's cost, scope, and complexity. The approach further requires that these reviews be completed before a component can obligate modernization funds.
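To make the threshold-based routing just described concrete, the following minimal Python sketch encodes the review path; it is an illustration of the rule as stated in the text, not DOD tooling.

# The act's certification threshold for modernization investments.
CERTIFICATION_THRESHOLD = 1_000_000  # dollars

def review_path(planned_modernization_dollars):
    """Return the sequence of reviews an investment passes through,
    per the tiered accountability approach described above."""
    if planned_modernization_dollars > CERTIFICATION_THRESHOLD:
        return ["component precertification",
                "IRB certification",
                "DBSMC approval"]
    # Below the threshold, decision-making authority stays with the component.
    return ["component-level investment decision"]

print(review_path(6_300_000))  # a multimillion-dollar modernization
print(review_path(750_000))    # below the act's $1 million threshold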
At the component level, program managers are responsible for the information about their respective programs in a central repository known as the Defense IT Portfolio Repository system (DITPR). The component precertification authority is responsible for precertifying that a given system investment is compliant with the BEA, reviewing the system funding requests, and ensuring that the IRB responsible for the investment receives complete, current, accurate, and timely information. The precertification authority is also responsible for "asserting" the status and validity of the investment information by submitting a component precertification letter to the responsible IRB. At the corporate level, an IRB reviews the precertification letter and related material and makes a recommendation for a specific certification action for each of its investments. After the IRB makes its recommendation, it prepares a certification memorandum that documents the IRB's decisions and any related conditions. The memorandum is forwarded to the DBSMC, which either approves or disapproves the IRB's decision and issues a memorandum containing its decision. If the DBSMC disapproves a system investment's certification, it is up to the component's precertification authority to decide whether or not to resubmit the investment after it has resolved the reasons for the disapproval. Under DOD's approach, there are four types of certification actions:

Certify: An IRB certifies the modernization as fully meeting criteria defined in the act and IRB investment review guidance, such as compliance with the BEA and the extent to which the investment is consistent with component and department IT investment portfolios, which are asserted by the component precertification authority.

Certify with conditions: An IRB certifies the modernization with the understanding that it will address specific IRB-imposed conditions. For example, the Army's Real Estate Management Information System was certified with a condition to provide a plan for how data elements would comply with certain business rules in DOD's BEA.

Recertify: An IRB certifies the obligation of additional modernization funds for a previously certified modernization investment. For example, the Air Force's Cargo Movement Operations System was recertified in April 2009 for $6.3 million to be spent in fiscal years 2009 through 2012. This recertification was in addition to the $21.1 million previously certified in fiscal year 2007. In addition, a program must request IRB recertification if the program plans to redistribute previously approved modernization funds among multiple fiscal years and this redistribution will result in the funding for any given fiscal year exceeding the previously approved amount by 10 percent or more.

Decertify: An IRB reduces the amount of modernization funds available to an investment when the entire amount of funding is not to be obligated as previously certified. For example, the Defense Financial Accounting Service's Standard Disbursing Initiative had about $5.5 million decertified because the funds were no longer needed. An IRB may also decertify a modernization after it has been completed. For example, DOD reported that $213,000 for the Air National Guard's Reserve Order Writing System was decertified at the time the system was completed because the funds were no longer needed.

The act requires that DOD's annual report describe specific actions taken for each business system investment submitted for certification.
However, the department’s annual report discusses fiscal year 2009 certification actions on only 116 of the 156 systems on which certification actions were taken. More specifically, the report states that during fiscal year 2009, 92 business system modernizations were certified—32 with and 60 without conditions. For the 32 systems, 58 conditions were collectively reported. Examples of conditions cited in the report are the need for a system to comply with the Standard Financial Information Structure and the need to develop a plan for complying with the data standards of DOD’s Item Unique Identifier Registry. The report also identifies 24 decertifications. For example, the Air Force’s Enhanced Technical Information Management System had about $13.9 million in funding decertified (i.e., reduced), and the Defense Financial Accounting Service’s Standard Disbursing Initiative had about $5.5 million decertified. However, fiscal year 2009 IRB and DBSMC decision memoranda and meeting minutes show that 40 systems had additional certification actions that were not included in DOD’s annual report. Of these 40 systems, 2 were certified without conditions, 2 were certified with conditions, and 36 were recertified. Collectively, DOD’s annual report omits about 26 percent of its certification actions. (See table 3 for a summary of actual, reported, and unreported certification actions.) According to BTA officials, the excluded certification actions are all recertifications, which they said are intentionally not reported because they are not new certifications. They also told us that the four new certifications were, in fact, recertifications. However, DBSMC and IRB memoranda and meeting minutes identify these four certification actions as new certifications. Moreover, DOD guidance defines a recertification as a type of certification action, thus making it subject to the act’s reporting requirements. Without complete reporting of its certification actions, DOD is not in full compliance with its guidance, and DOD is limiting congressional visibility into the full scope of its business systems modernization efforts. According to DOD guidance, IRB certification addresses, among other things, a program’s alignment with the BEA and its management relative to factors such as system cost, scope, and complexity. To make a certification decision, IRBs rely on documentation submitted by the component precertification authority, including a certification dashboard, which includes cost and schedule status information; an economic viability analysis, which addresses the investment’s cost and benefits or cost effectiveness; and regulatory and standards compliance determinations. DOD guidance also gives IRBs broad authority in their certification reviews and actions, thus allowing each board to review and consider whatever investment-related information that it deems appropriate. Moreover, BTA and IRB officials told us that an IRB is not limited in the conditions it can place on a program. The IRB certification actions described in DOD’s latest annual report are limited because they do not reflect significant limitations in the department’s basis for determining an investment’s alignment to the BEA. Specifically, we recently reported that key DOD BEA compliance assessments did not include all relevant architecture products, such as products that specify the technical standards needed to promote interoperability among related systems or examine overlaps with other business systems. 
In addition, we reported that these compliance assessments were not validated by DOD certification and approval entities. Despite these limitations, business systems modernization programs were certified as compliant with the BEA even though they did not adequately demonstrate such compliance. Accordingly, we recommended that DOD revise its BEA compliance guidance, tool, and IRB verification requirements to address these shortfalls. To date, DOD has yet to implement these recommendations, and thus the compliance determination weaknesses remain. Despite this, DOD's latest annual report does not disclose these limitations on any of the 116 investment certification actions that it describes. In addition, the fiscal year 2009 IRB certification actions described in the latest annual report are further limited in that they do not reflect weaknesses we have recently reported with the economic justification for and management of certain programs. For example:

We reported in September 2009 that the Defense Readiness Reporting System (DRRS) program was not being effectively managed and made recommendations to address a number of acquisition management weaknesses, including the absence of a reliable integrated master schedule, well-defined and well-managed requirements, and adequate testing. As stated in our report, we briefed the DRRS program office on the results of our work prior to its DOD Human Resources Management IRB certification review. However, these results were not disclosed to the IRB. Rather, the certification package that the precertification authority submitted to the IRB stated that DRRS was on track for meeting its cost, schedule, and performance measures and highlighted no program risks. Based on this submission, DRRS was certified by the IRB and approved by the DBSMC to obligate $24.625 million in fiscal years 2009 and 2010. According to the chair of the IRB, the board did not validate the information in the submissions it received, and the results of our review were not disclosed to the IRB.

We reported in July 2008 that the Global Combat Support System-Marine Corps (GCSS-MC) program had not been economically justified on the basis of reliable estimates of both benefits and costs and that key program management controls, such as earned value management and risk management, had not been properly implemented. Accordingly, we made recommendations to address these weaknesses. GCSS-MC was certified with conditions and recertified during fiscal year 2009. Neither the weaknesses that we previously reported nor the status of our recommendations to address them were evident in the conditions accompanying the certification, even though our recommendations had yet to be implemented at the time of these certification actions.

We reported in September 2008 that the Navy ERP program did not use important cost estimating practices when economically justifying the program, did not implement key aspects of earned value management, and did not have risk mitigation strategies in place to address the risks described in our report, including risks associated with these issues. Accordingly, we made recommendations to address these weaknesses. However, when Navy ERP was recertified during fiscal year 2009, conditions relative to any of these weaknesses did not accompany the recertification, although our recommendations had yet to be implemented at the time of recertification.
Officials representing each of the IRBs stated that the boards depend on the component precertification authorities to provide them with complete and reliable information about each system investment. Among other things, IRB officials stated that such information should include the results of reviews by us and others. However, DOD guidance does not state that GAO-related information, such as open recommendations or the focus and results of our ongoing reviews, is to be included in the certification packages provided to the IRBs. Further, the Special Assistant to the Deputy Chief Management Officer told us it is each program's milestone decision authority that is ultimately responsible for addressing known program management issues, including those raised by GAO. By not having and considering relevant information about the state of each system modernization investment certified and approved, such as the results of our reviews and the status of actions to implement our recommendations that pertain to the investment, DOD's certification and approval decisions are based on limited information, and thus may not be justified. The act requires DOD to identify in its annual report any defense business system modernization with an obligation in excess of $1 million during the preceding fiscal year that was not certified and approved according to the act's provisions, along with any reasons for these requirements being waived. According to DOD's latest annual report, all system investments were certified according to the act's requirements during fiscal year 2009, and no systems were granted a certification waiver. Similarly, each of DOD's annual reports since March 2006 has stated that no systems were approved on the basis of a certification waiver. According to officials representing each of the IRBs, while program officials sometimes seek to be certified on the basis of a waiver, their practice is to ensure that the program office addresses any issues underlying a waiver request before the investment is placed on an IRB's certification review agenda. As a result, they stated that a system is unlikely to go before an IRB for certification until it can be certified with conditions. DOD's latest annual report on its business systems modernization program complies with statutory requirements pertaining to the report's content, but the scope and completeness of key information that is provided in the report is otherwise limited. In particular, the report omits information on numerous business system investment certification actions taken during fiscal year 2009. In addition, while it includes schedule-focused performance measures and performance against these measures for the modernization investments discussed, as required by statute, it does not include similar information for other performance measures, such as cost, capability, and benefit commitments and performance against these commitments. Collectively, this means that DOD's annual report does not provide congressional committees with the full range of information necessary to permit meaningful and informed oversight of DOD's business systems modernization program. Beyond the scope and content of DOD's annual report, the basis for the IRB certifications has been limited because DOD guidance does not provide for disclosure of our findings concerning investments being considered.
In particular, investments have been certified and approved without conditions even though our prior reports have identified program weaknesses that were unresolved at the time of certification and approval. As a result, these certification and approval decisions may not be sufficiently justified. To facilitate congressional oversight and promote departmental accountability, we recommend that the Secretary of Defense direct the Deputy Secretary of Defense, as the chair of the DBSMC, to ensure that the scope and content of future DOD annual reports to Congress on compliance with section 332 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, as amended, be expanded to include:

Cost, capability, and benefit performance measures for each business system modernization investment and actual performance against these measures.

All certification actions, as defined in DOD guidance, that were taken in the previous year by the department on its business system modernization investments.

To ensure that IRB certification actions are better informed and justified, we further recommend that the Secretary direct the Deputy Secretary to ensure that DOD guidance be revised to include provisions requiring that IRB certification submissions disclose program weaknesses raised by GAO and the status of actions to address our recommendations to correct the weaknesses. In written comments on a draft of this report, signed by the Assistant Deputy Chief Management Officer and reprinted in appendix II, the department agreed with our recommendations. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; and the Secretary of Defense. This report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. As agreed with congressional defense committees, our objective was to assess the actions by the Department of Defense (DOD) to comply with the requirements of key aspects of section 332 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (the act). To address this, we focused on the extent to which DOD's annual report to Congress addressed the following provisions of the act: (1) describe milestones and actual performance against specified measures and any revisions, (2) discuss specific improvements in business operations and cost savings resulting from successful business system modernization efforts, (3) describe specific actions on each business system investment submitted for certification, and (4) identify any business system investment with an obligation in excess of $1 million that was not certified during the preceding fiscal year and reasons for the waiver. Our methodology relative to each of the four requirements is as follows: To determine whether the DOD annual report described milestones and actual performance against specified measures and any revisions, we compared information contained in the annual report to what the act required.
Further, we compared the types of measures included in the annual report to those commonly associated with program performance, such as those described in prior GAO work related to performance measures. In addition, we interviewed officials from the Business Transformation Agency (BTA) and each of DOD’s investment review boards (IRB) to understand the process used to identify and track milestones and other performance measures. We did not independently validate the accuracy of the milestone dates included in the report. To determine the extent to which DOD’s annual report discussed specific improvements in business operations and cost savings, we reviewed each of the 18 case-in-point narratives included in the annual report that described examples of business improvements and other benefits. We compared this information with the act’s reporting requirements to identify any variances. We did not validate the accuracy of the improvements or benefits discussed in the case-in-point narratives. To determine the extent to which DOD’s annual report identified specific actions on each business systems investment submitted for certification, we reviewed and analyzed all Defense Business Systems Management Committee (DBSMC) certification approval memoranda as well as IRB certification memoranda and IRB meeting minutes issued prior to the DBSMC’s final approval decisions for fiscal year 2009 and compared the results to those certification actions described in the annual report to identify differences. We also reviewed DOD IRB guidance to understand the types of actions related to certification of business system modernizations. For certification actions included in the DBSMC and IRB memoranda but not described in the annual report, we interviewed officials from the BTA, IRBs, the Office of the Assistant Secretary of Defense (Networks and Information Integration)/DOD Chief Information Officer (ASD(NII)/DOD CIO), and the Office of the DOD Deputy Chief Management Officer (DCMO) as to the reason for the differences. For certification actions included in the report and described in fiscal year 2009 DBSMC and IRB memoranda, we compared information about specific DOD programs from recent GAO reports to the conditions associated with certification actions described in the annual report and the DBSMC and IRB memoranda to determine whether IRBs placed certification conditions related to program weaknesses identified by GAO and whether those conditions addressed those weaknesses. In addition, we interviewed DCMO, BTA, and IRB staff to discuss conditions that were reported as part of certification actions and what is submitted to the IRBs when individual systems request certification. To determine if DOD’s annual report identified any business system investment with an obligation in excess of $1 million that was not certified during the preceding fiscal year and the reasons for any waivers granted, we reviewed DBSMC and IRB certification memoranda and compared actions taken during fiscal year 2009 to the actions described in DOD’s annual report. We also interviewed DCMO and BTA officials, as well as IRB support staff, to determine if any waivers were issued during fiscal year 2009. Finally, we reviewed DOD’s annual reports from 2005 to present to determine the extent to which these reports identify any waivers issued prior to fiscal year 2009. 
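The core of the certification-action comparison described above is a straightforward set reconciliation. The following Python sketch illustrates that step; the data structures and example names are hypothetical assumptions, and this is not the tooling GAO used.

def reconcile(memoranda_actions, reported_actions):
    """Compare certification actions drawn from DBSMC and IRB memoranda
    with those described in the annual report; each action is a
    (system_name, action_type) tuple."""
    memoranda = set(memoranda_actions)
    reported = set(reported_actions)
    return {
        "in memoranda but omitted from report": sorted(memoranda - reported),
        "in report but not in memoranda": sorted(reported - memoranda),
    }

# Hypothetical example: a recertification recorded in the memoranda but
# absent from the annual report would be flagged as omitted.
memoranda = [("System A", "certify"), ("System B", "recertify")]
reported = [("System A", "certify")]
print(reconcile(memoranda, reported))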
We conducted this performance audit at DOD offices in Arlington, Virginia, from January 2010 to May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, key contributors to this report were Carl Barden, Justin Booth, Nancy Glover, Michael Holland, Neelaxi Lakhmani (Assistant Director), Kate Nielsen, Constantine Papanastasiou, Christine San, Sylvia Shanks, Jennifer Stavros-Turner, and Adam Vodraska.
Since 1995, GAO has designated the Department of Defense's (DOD) multibillion-dollar business systems modernization program as high risk, and it continues to do so today. To assist in addressing DOD's modernization challenges, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (the act) requires the department to, among other things, report specific information about business system investments, including (1) milestones and actual performance against specified measures and any revisions and (2) actions taken to certify that a modernization investment involving more than $1 million meets defined conditions before obligating funds. The act also directs GAO to review each report. As agreed, GAO focused on the fiscal year 2010 report's compliance with, among other things, these provisions of the act. To do so, GAO compared DOD's report to the act's reporting requirements, interviewed DOD officials, analyzed relevant documentation, and leveraged prior GAO reports. DOD's fiscal year 2010 report to Congress on its business systems modernization program complies with key provisions in the act, but its scope and content are nevertheless limited. Specifically, (1) The report includes milestones, performance against milestones, and milestone revisions for specific investments. However, other important performance measures, such as measures of progress against program cost, capability, and benefit commitments, are not included in the report. DOD officials attributed the missing performance-related information to various factors, including that most of the investments addressed in the report have not progressed far enough in their life cycles to measure cost, capability, and benefit performance. However, the report also cites a number of investments that have produced business improvements and cost savings, and thus it follows that performance-related information about investment costs incurred, capabilities delivered, and benefits realized is available and can be reported relative to program expectations. Moreover, programs that have not yet delivered capabilities or realized benefits have nevertheless incurred costs, which DOD can report relative to expected costs. (2) The report identifies certification actions associated with 116 business system modernization investments. However, the report omits certification actions for 40 other investments. According to DOD officials, the omitted actions are not new certifications, but rather are recertifications that were intentionally excluded from the report. However, certification memoranda show this is not the case for four of the actions, and DOD guidance defines a recertification as a type of certification action. Further, the underlying bases for a number of reported actions are limited because program weaknesses that GAO's prior work has raised, such as the reliability of the systems' economic analyses and the sufficiency of business enterprise architecture compliance determinations, are not reflected in the reported certification actions. DOD's guidance does not require that certification submissions disclose program weaknesses that GAO has raised, and DOD officials stated that reviews are limited to the information that is submitted. As a result, DOD's annual report does not provide the full range of information that is needed to permit meaningful and informed congressional oversight of the department's business systems modernization efforts.
Moreover, the bases for some certification actions exclude relevant information about known investment weaknesses, and thus these actions may not be sufficiently justified.
The USFHP is a statutorily required component of the MHS that offers the TRICARE Prime option to eligible military beneficiaries through six designated providers in certain locations across the country. Over the years, the number of designated providers has dropped from 10 to 6. The six designated providers that currently administer the USFHP are (1) Johns Hopkins Medical Services Co., (2) Brighton Marine Health Center, (3) Martin's Point Health Care, (4) CHRISTUS Health, (5) Pacific Medical Centers, and (6) St. Vincent's Catholic Medical Centers. (See appendix I for a comparison of USFHP designated providers in 1994 and 2014.) The reduction in designated providers has largely been due to provider consolidations. In one case, a designated provider opted to no longer participate in the program. According to DOD officials, USFHP enrollees for that designated provider were successfully transitioned to other health care programs for which they were eligible, including the TRICARE options administered by the TRICARE MCSC in that region. Some of the designated providers provide care exclusively to beneficiaries enrolled in the USFHP, while others also provide health care services to beneficiaries of other health plans and may have additional lines of business. (See appendix II for more information on the characteristics of each of the designated providers.) Each of the designated providers, along with its respective service area, is located within one of the three TRICARE regions in the United States. Figure 1 below illustrates the location of the USFHP designated providers relative to the locations of the three TRICARE regions. The six designated providers offer TRICARE Prime to eligible beneficiaries through civilian provider networks in their service areas. To receive care through the USFHP, eligible beneficiaries must enroll in the program. All beneficiaries who are eligible for DOD health care and under the age of 65, except active duty servicemembers, are eligible for USFHP enrollment. Eligible beneficiaries who live in the service areas of the designated providers may elect to enroll in TRICARE Prime with the USFHP instead of enrolling with the MCSC. As of October 2013, approximately 134,000 beneficiaries were enrolled in the USFHP—about 3 percent of all TRICARE Prime enrollees. See figure 2 for the total number of USFHP enrollees as a percent of total TRICARE Prime enrollees, as also delineated by designated provider. The USFHP's role within the current MHS is duplicative because it offers military beneficiaries the same TRICARE Prime benefit that is offered by the MCSCs across much of the same geographic service areas and through many of the same providers. Furthermore, the USFHP is not integrated with the rest of the MHS and does not support DOD's efforts to increase efficiency because it potentially diverts enrollees away from DOD's direct care system. The USFHP's role in offering TRICARE Prime within the context of the current MHS duplicates the role of the MCSCs in several important ways. For example, the USFHP designated providers and the MCSCs both offer the same TRICARE Prime benefit as required by law. DOD implements this uniform benefit requirement by incorporating the TRICARE Policy Manual, which includes requirements for administering the TRICARE Prime option, into its contracts with both the USFHP designated providers and the TRICARE MCSCs. As a result, beneficiaries enrolled in TRICARE Prime through either the designated providers or the MCSCs receive the same benefit and have the same cost sharing responsibilities.
There is also significant overlap in the geographic service areas in which the USFHP designated providers and the MCSCs offer TRICARE Prime. As of October 1, 2013, four of the six USFHP designated providers had more than 80 percent of their service area zip codes overlapping with the MCSCs' Prime Service Areas. (See table 1 for the percent of USFHP zip codes included in areas where the MCSCs also offer TRICARE Prime.) Enrollees in the USFHP must live in specific zip codes that are near one of the six designated providers. Given this overlap, most USFHP enrollees would likely have access to TRICARE Prime benefits through the MCSCs if the USFHP did not exist. The two designated providers with a lower degree of overlap—CHRISTUS Health and Martin's Point Health Care—are located in areas that have fewer or no MTFs or Base Realignment and Closure sites. Therefore, while USFHP enrollees in these two specific areas would not have the same level of access to TRICARE Prime through the MCSCs, they could still use other TRICARE options that are available nationwide through the MCSCs, such as TRICARE Standard, or other programs for which they may be eligible. The MCSCs currently serve over 4.5 million Prime enrollees—therefore, adding 134,000 USFHP enrollees would not appear to be a significant burden, according to MCSC officials. All three MCSCs told us that they would likely have the capacity and capability to provide TRICARE coverage to all of the current USFHP enrollees, including the TRICARE Prime option or other options, as appropriate, depending on the enrollees' locations. Specifically, if the USFHP did not exist, two of the three MCSCs could each be responsible for fewer than 6,000 additional enrollees, and the third MCSC would potentially be responsible for approximately 79,000 enrollees, if all affected beneficiaries maintained their TRICARE coverage. However, some affected beneficiaries might choose to enroll in other health care programs for which they may be eligible. There is also overlap between the provider networks of the USFHP designated providers and the MCSCs. (In analyzing network overlap, we limited our analysis to include only providers with a Doctor of Medicine or Doctor of Osteopathic Medicine degree; the inclusion of other types of providers, including nurse practitioners or physician's assistants, could potentially affect the degree of overlap.) Officials from one designated provider told us that they encourage all of their providers to participate in both the USFHP and MCSC networks, while officials from another told us that they encourage their providers not to participate in the MCSC's network. In general, providers are allowed to contract with any organization they choose; therefore, those currently serving only the USFHP might also choose to participate in the applicable MCSC's network. The duplication and related overlap between the USFHP and the MCSCs' TRICARE Prime option has been longstanding, in part because the program's role has not been reassessed since TRICARE was implemented in the 1990s. Many key features of the USFHP have remained in place since the enactment of the NDAA for Fiscal Year 1997, notwithstanding TRICARE's growth during the intervening years. DOD officials told us that there is not a function that the USFHP designated providers serve that the MCSCs could not perform. However, because the USFHP is statutorily required, DOD does not have the authority to eliminate it and transition the USFHP enrollees into the regional TRICARE program managed by the MCSCs or into other programs for which they may be eligible. One of the goals of the MHS is to maximize use of the direct care system's MTFs, a goal most recently articulated in DOD's budget request for fiscal year 2015.
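The service-area overlap figures discussed above lend themselves to a simple calculation: for each designated provider, the share of its service-area zip codes that also fall within an MCSC Prime Service Area. The sketch below illustrates the general form of such a calculation; the zip codes shown are hypothetical placeholders, not data from our analysis, and our actual methodology involved additional detail.

```python
# Illustrative sketch of a service-area overlap calculation like the one
# described above: the percent of a designated provider's service-area
# zip codes that also fall within an MCSC Prime Service Area. The zip
# codes below are hypothetical placeholders, not actual program data.

def overlap_percent(usfhp_zips, prime_service_area_zips):
    """Percent of USFHP service-area zip codes also covered by the MCSC."""
    usfhp = set(usfhp_zips)
    shared = usfhp & set(prime_service_area_zips)
    return 100.0 * len(shared) / len(usfhp)

# Hypothetical example: 4 of 5 zip codes overlap, so 80 percent.
usfhp_area = ["21201", "21202", "21203", "21204", "21205"]
mcsc_area = ["21201", "21202", "21203", "21204", "21299"]
print(f"Overlap: {overlap_percent(usfhp_area, mcsc_area):.1f}%")
```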
TRICARE’s managed care support contracts are designed to support this goal by requiring the MCSCs to optimize the use of the direct care system as part of an integrated MHS. For example, MCSCs are to first assign Prime enrollees to a Primary Care Manager located at an MTF until its enrollment capacity has been reached, at which point, enrollees are assigned to a Primary Care Manager from the civilian provider network. MCSCs are also required to give their region’s MTFs the right of first refusal for all specialty care referrals for Prime enrollees to ensure that, if it has the capability and capacity to administer the care, the MTF has the opportunity to do so prior to referring them to the civilian network. In these ways, the MCSCs help promote the efficient use of DOD’s direct care system by helping the MTFs operate at full capacity. DOD is unable to promote the efficient use of DOD’s direct care system through the USFHP, which operates independently of the integrated MHS as a distinct statutory program. Unlike Prime beneficiaries enrolled with the MCSCs, USFHP enrollees are generally precluded from receiving care at MTFs due to the program’s fixed-price capitation payment structure that is intended to cover all enrollees’ health care costs. Given the extent of overlap with the MCSCs’ Prime Service Areas, which are generally around MTFs, thousands of USFHP enrollees are precluded from using the direct care system. To limit any additional future impact of the USFHP on MTFs, DOD has denied designated providers’ requests for service area expansions. For example, in 2012, a designated provider submitted a request to DOD to expand its existing service area to a location that also included several MTFs. However, DOD denied this request, citing duplication with the existing MCSC network and a potentially higher health care cost per beneficiary to provide care through the USFHP since its enrollees would not be able to use the local MTFs. As an exception, a USFHP enrollee may access an MTF for emergency services, and the designated provider is then contractually responsible for reimbursing the MTF for that care. In addition, USFHP designated providers may negotiate Memorandums of Agreement with the MTFs for the purpose of referring enrollees to them on the condition that the designated providers would be responsible to the MTFs for the associated health care costs of any referred beneficiaries. During the course of our review, however, we found that no Memorandums of Agreement had been established for this purpose, although one designated provider told us that it was in the process of negotiating one. Certain features of the USFHP give designated providers a competitive advantage over the MCSCs—one that likely results in increased costs instead of the lower costs that typically result from competition. For example, the USFHP designated providers offer certain discounts and services that go beyond TRICARE’s uniform benefits, such as dental and vision care discounts. (See appendix III for a description of these discounts and services.) DOD officials acknowledged that the MCSCs’ Prime enrollees do not receive these discounts and services because MCSCs are paid on a cost reimbursement basis only for allowable TRICARE Prime benefits. 
DOD officials acknowledged that these discounts and services may not be entirely consistent with TRICARE's uniform benefit requirement, but they have not taken issue with this practice because the USFHP designated providers receive a fixed-capitation payment, and therefore, DOD is not billed separately for any discounts and services that the designated providers choose to offer. Although the USFHP designated providers do not bill DOD for these discounts and services, it is not known how the designated providers account for them in developing their proposed capitation payments. The USFHP designated providers promote these discounts and services in their marketing materials, which may lead beneficiaries to choose to enroll with the USFHP designated providers over the MCSCs in areas where both programs are offered. MTF officials expressed concern that the USFHP diverts beneficiaries away from the MCSCs and, correspondingly, from the direct care system. MTFs rely on beneficiaries as a source of education and training opportunities. MTF officials we spoke with expressed concerns that the USFHP program detracts from the volume and complexity of cases needed to maintain a robust graduate medical education and skills training program. For example, a military service medical official noted that medical facilities with Magnetic Resonance Imaging capabilities seek to maximize the use of this equipment and strive to offer their providers opportunities for related specialty care training while treating enrollees needing this type of service. However, if the MTF is located near a USFHP designated provider, this official noted that such training opportunities may be more limited because any USFHP enrollees who need these services would not obtain them from their local MTF. In addition, since the USFHP contracts are full-risk capitated arrangements, the designated providers are not limited to the TRICARE provider reimbursement rates, and thus have the flexibility to offer their providers higher rates or other incentives. In contrast, MCSCs' network providers must accept TRICARE maximum allowable charges as payment in full, and these rates are generally based on Medicare rates. The increased flexibility in setting reimbursement rates may give the USFHP designated providers a competitive advantage over the MCSCs in recruiting providers to their networks. The USFHP designated providers told us that they exceed the TRICARE provider reimbursement rates on occasion, depending on their market areas, although some generally use the TRICARE rates as guidelines. DOD officials told us of an example in which a USFHP designated provider recruited a primary care provider group, which resulted in the group leaving the MCSC network, and these officials were concerned that reimbursement rates may have been a factor in this decision. Consequently, beneficiaries who want to obtain care from this provider group may choose to enroll with the USFHP instead of enrolling with the MCSC. By maintaining exclusive relationships with providers—especially those that are highly desired by beneficiaries—this designated provider has a competitive advantage over the MCSC, potentially drawing beneficiaries away from and limiting the use of the direct care system. The provision of additional discounts and services to USFHP enrollees or exclusive provider arrangements may contribute to the USFHP's high satisfaction rates, which the designated providers point to as a measure of the program's success.
While high satisfaction rates may be viewed as an argument in favor of maintaining the USFHP as a separate program, doing so comes at the expense of an integrated MHS that optimizes the use of MTFs. Because the USFHP's role of offering TRICARE Prime is duplicative of the role played by the MCSCs, DOD has incurred added costs by paying the designated providers to simultaneously administer the same benefit as the MCSCs. Consequently, due to its duplicative role, managing the USFHP is also an inefficient use of DOD's resources. Because the USFHP is a statutorily required component of the MHS, DOD must pay the USFHP designated providers to administer the same benefit to the same population of eligible beneficiaries in many of the same locations as the MCSCs. Although DOD would incur health care costs for the USFHP enrollees regardless of with whom they are enrolled, DOD must also pay administrative costs and profits to two different groups of contractors for providing the same TRICARE Prime benefit. Currently, administrative costs and profit margins are part of the negotiated payments for the USFHP contracts. We obtained a breakdown of the average capitation payments DOD made to the designated providers for USFHP enrollees for fiscal year 2013, and we estimated that the administrative costs and profit margins were approximately $27 million of the total cost of the program for that year (2.4 percent of $1.1 billion). However, our estimate may not represent the total administrative costs and profit realized, since DOD's knowledge of the actual cost components underlying these negotiated payments is limited. Although the designated providers are exempt from any requirement to share certified cost or pricing data, DOD has requested that they share uncertified cost or pricing data. However, according to DOD officials, the designated providers have been unwilling to do so when asked. Thus, DOD does not know how much of the approximately $1.1 billion it pays the USFHP designated providers annually goes toward their actual administrative costs and profit versus the cost of health care services for USFHP enrollees. Furthermore, it is also unknown how the actual administrative costs and profit of the designated providers compare to those of the MCSCs. In 2010, DOD hired a consultant to review its methodology for setting capitation rates for the designated providers. Overall, the consultant found that DOD's process for establishing capitation rates was actuarially sound and that the rates were generally being set in accordance with current legislation. Nonetheless, the consultant did report some concerns with the rates—and thus, the cost-effectiveness of the USFHP—since the rates may not reflect the designated providers' actual costs of delivering health care. More specifically, the consultant stated that the health care costs used by the designated providers in negotiating the capitation payment rates do not accurately represent their true cost of providing benefits under the program, and it offered several recommendations to improve the cost-effectiveness of the capitation rate setting process. Additionally, we found that the program likely provides a significant source of income for the designated providers, as more than half of them rely heavily on the USFHP for their business. Specifically, USFHP enrollees comprise 100 percent of the total beneficiaries enrolled with two of the six designated providers and more than 60 percent for two others.
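The estimate of administrative costs and profit above is a straightforward share-of-total calculation, as the sketch below shows. The 2.4 percent share and the roughly $1.1 billion total come from the figures in the text; a true per-provider breakdown would require the cost data the designated providers have declined to share.

```python
# Back-of-the-envelope check of the administrative cost and profit
# estimate described above: 2.4 percent of roughly $1.1 billion in
# total annual capitation payments for fiscal year 2013.

total_capitation = 1.1e9      # approximate total payments, from the text
admin_profit_share = 0.024    # estimated share for admin costs and profit

admin_profit = total_capitation * admin_profit_share
print(f"Estimated admin costs and profit: ${admin_profit/1e6:.1f} million")
# Roughly $26-27 million, consistent with the estimate in the text.
```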
The costs associated with the USFHP designated provider contracts are not the only costs DOD incurs for this program. DOD also has a data support contract that provides information technology support services to the USFHP designated providers. DOD officials told us that this contract, which is expected to cost $21 million over 5 years, exclusively supports the USFHP designated providers and that there is no comparable contract in place for the MCSCs. Additionally, a portion of DOD's TRICARE Help Desk contract supports the USFHP at a cost of about $272,000 per year, according to DOD officials. If the USFHP did not exist, DOD would potentially save millions of dollars in duplicative administrative costs and profits. In addition to the costs associated with the USFHP designated providers' capitation payments and support contracts, DOD must expend resources managing various aspects of the USFHP. As we have previously reported, expending resources on unnecessarily duplicative programs is inherently inefficient, and in most of these situations there are opportunities for greater efficiency or effectiveness by eliminating unnecessary duplication. For example, DOD officials told us that several officials are responsible for the overall management of the USFHP, which includes modifying the contracts and educating designated providers' staff about program requirements. Additionally, several DOD officials are responsible for conducting the acquisition process necessary to enter into contracts with the USFHP designated providers, which involves acquisition planning, the development of a request for proposals, and the evaluation of proposals. DOD officials said that the most recent process lasted approximately 9 months, and as a result, they extended the length of the USFHP contracts to 10 years in an effort to reduce the associated administrative burden. DOD is also required to annually negotiate capitation payment amounts with each of the six USFHP designated providers. DOD officials told us that the process for negotiating these payments lasts approximately 8 months and includes several steps, such as collecting data from different sources, developing proposed payments, and negotiating the final payment amount. There are several participants in these negotiations from both sides, including a contractor that provides DOD with actuarial assistance. These annual payment negotiations are not necessary for the managed care support contracts, which are structured differently. Eliminating the USFHP would allow DOD to potentially realize savings, as well as better focus its resources on managing other aspects of the TRICARE program. The USFHP's role in providing care to eligible military beneficiaries became duplicative once the nationwide TRICARE program was implemented in the 1990s. Since the NDAA for Fiscal Year 1997 required the USFHP to offer TRICARE Prime, there has been no assessment of the continued relevance of this program despite the apparent overlap and duplication with the MCSCs, which has likely resulted in added costs and inefficiencies for DOD. Compounding this problem is the fact that the USFHP operates at odds, often in competition, with the rest of the MHS, which limits DOD's efforts to maximize the use of its direct care system of military hospitals and clinics.
Moreover, it is not known how much it costs to deliver health care under the program, or the extent of administrative costs and profits that accrue to designated providers, because the designated providers have not been required to, nor have they chosen to, share cost or pricing data with DOD. The existence of a duplicative program that provides the same benefit to the same group of beneficiaries within many of the same service areas runs counter to DOD's current environment of budget constraints and the department's need to control rising health care costs. Eliminating this statutorily required program would not only eliminate unnecessary costs and inefficiencies but would also free up departmental resources that could be better used to manage and oversee the TRICARE program. However, it will be important to transition the 134,000 USFHP enrollees to other health care programs prior to eliminating the program to ensure the continuity of their care. According to DOD and MCSC officials, the MCSCs already have the national coverage, capability, and capacity to absorb these enrollees, assuming they do not obtain coverage elsewhere. Given the extent to which the USFHP's role in offering TRICARE Prime duplicates and overlaps with that of the MCSCs, there is no compelling reason to maintain the USFHP as part of the MHS and incur the added costs and inefficiencies associated with doing so. To eliminate unnecessary program duplication and to achieve increased efficiencies and potential savings within the integrated MHS, Congress should terminate the Secretary of Defense's authority to contract with the USFHP designated providers in a manner consistent with a reasonable transition of affected USFHP enrollees into TRICARE's regional managed care program or other health care programs, as appropriate. We provided a copy of this report to DOD for review and comment. DOD stated that since our recommendation to eliminate the USFHP is addressed to Congress, it defers to Congress to consider it. DOD also reiterated our statement that if the USFHP were to be eliminated, it would be important to make provisions to carefully transition USFHP enrollees to other health care programs. In its response, DOD confirmed that our factual determinations about the USFHP are correct. DOD also provided specific comments on many of the program characteristics we identified, agreeing that they result in unnecessary costs and other inefficiencies for the department. In particular, DOD stated that the USFHP sole source contracts are an exception to congressional and DOD policy favoring competition in contracting. DOD also noted that the unnecessary duplication between the USFHP and MCSCs creates costs for the department, as necessary care is already provided through the MCSCs and the direct care system, and that DOD is required to expend considerable resources in its annual negotiations with the designated providers to set capitation payment rates. DOD also stated that the separate existence of the USFHP is an exception to the general DOD policy of favoring an integrated MHS to support readiness and cost-effectiveness. DOD further acknowledged that the USFHP designated providers are allowed to offer their providers higher reimbursements and other incentives that draw beneficiaries away from the more cost-effective MCSCs, which DOD characterized as an exception to congressional and DOD policy favoring a uniform TRICARE benefit. DOD's comments are reprinted in appendix IV. DOD did not provide any technical comments.
We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense (Health Affairs); and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Past and Current US Family Health Plan (USFHP) Designated Providers

[Table comparing the USFHP designated providers in 1994 and 2014. The providers listed for 2014 include Martin's Point Health Care, Brighton Marine Health Center, St. Vincents Catholic Medical Centers, and Johns Hopkins Medical Services Co.]

Appendix II: Characteristics of the USFHP Designated Providers

Designated provider | Total number of USFHP enrollees | Service area
Johns Hopkins Medical Services Co. | 41,752 (of which 8,602 are 65+) | (not shown)
Brighton Marine Health Center | 14,288 (of which 5,497 are 65+) | Massachusetts, Rhode Island, and northern Connecticut
Martin's Point Health Care | 41,991 (of which 12,573 are 65+) | Maine, New Hampshire, Vermont, upstate and western New York, and the northern tier of Pennsylvania
CHRISTUS Health | 11,803 (of which 6,596 are 65+) | Southeast Texas and southwest Louisiana
Pacific Medical Centers | 13,062 (of which 7,354 are 65+) | Puget Sound area of Washington state
St. Vincents Catholic Medical Centers | 11,164 (of which 3,575 are 65+) | New Jersey, New York City, southern New York, western Connecticut, and southeastern Pennsylvania

Appendix III: Discounts and Services Offered by the USFHP Designated Providers

Vision: Discounted vision services can include a free annual eye exam, discounts on glasses and lenses at select providers, or LASIK vision correction.

Hearing: Designated providers offer a free annual hearing exam, plus discounts on hearing aids.

Transportation: Designated providers offer up to four round-trips (eight one-way trips) to approved covered medical services.

Wellness: Services offered at a discount under this category can include nutritional counseling or gym memberships.

Cosmetic procedures: One designated provider offers enrollees a $500 discount on any procedure, including anti-aging procedures such as face lifts and rhinoplasty, and a 15 percent discount on all injectables, including Botox.

Fertility: Enrollees receive an unspecified discount for all in vitro fertilization and intrauterine insemination procedures.

Appendix V: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Bonnie Anderson, Assistant Director; Kaitlin Coffey; Christine Davis; Sylvia Jones; Drew Long; Samantha Poppe; James Rebbe; and William T. Woods made key contributions to this report.
DOD provides health care to about 9.6 million eligible beneficiaries through its TRICARE program. The department contracts with MCSCs to administer TRICARE's benefit options in three regions across the United States. Separately, DOD contracts with six USFHP designated providers to offer TRICARE Prime—the managed care option—to enrollees in certain locations across the country. Senate Report 112-173, which accompanied a version of the National Defense Authorization Act for Fiscal Year 2013, mandated that GAO review DOD's health care contracts, citing concerns with the growing costs of these contracts, including the USFHP. For this report, GAO examined (1) the role of the USFHP within the MHS, and (2) the extent to which the USFHP affects DOD's health care costs. GAO analyzed information about the USFHP and the MCSCs, reviewed available USFHP cost data, and interviewed officials from DOD, the designated providers, and the MCSCs. The role of the US Family Health Plan (USFHP) within the Department of Defense's (DOD) current military health system (MHS) is duplicative because it offers military beneficiaries the same TRICARE Prime benefit that is offered by the regional TRICARE managed care support contractors (MCSC). The USFHP is an association of six health care providers, referred to as designated providers, which took ownership and control of U.S. Public Health Service hospitals in 1982 when Congress enacted legislation that made these facilities part of DOD's health care system. During the implementation of TRICARE in the 1990s, Congress required the designated providers to offer the TRICARE Prime benefit to their enrollees. While the USFHP is a relatively small program—approximately 134,000 enrollees—there is significant overlap with the MCSCs in several key areas, including benefits, geographic service areas, and provider networks. For example, four of the six USFHP designated providers have more than 80 percent of their service area zip codes included in areas where the MCSCs offer TRICARE Prime. Furthermore, the USFHP remains a distinct statutory program that is not integrated with the rest of the MHS. This limits DOD's ability to increase efficiency by maximizing the use of its direct care system of military treatment facilities (MTF), which USFHP enrollees are generally precluded from using because the USFHP's payment structure is intended to cover all enrollees' health care costs. The role of the USFHP has not been reassessed since TRICARE was implemented in the 1990s. However, DOD officials told GAO that there is not a function that the USFHP designated providers serve that the MCSCs could not perform. Furthermore, officials from all three MCSCs—which together serve over 4.5 million Prime enrollees—said that they would likely have the capacity and capability to provide TRICARE coverage to all current USFHP enrollees, if needed. Because the USFHP's role of offering TRICARE Prime is duplicative of the role of the MCSCs, DOD has incurred added costs and inefficiencies. Although DOD would incur health care costs for the USFHP enrollees regardless of with whom they are enrolled, DOD pays administrative costs and profits to two different groups of contractors for providing the same TRICARE Prime benefit to the same population of eligible beneficiaries in many of the same areas.
However, outside of the negotiated payment amounts, the designated providers' actual costs for administering the program are not known, since the USFHP contracts are characterized in statute as commercial item contracts. This means that the designated providers are exempt from sharing certified cost or pricing data with DOD, and they have been unwilling to share uncertified cost or pricing data when requested. As a result, DOD does not know how much of the approximately $1.1 billion it pays the USFHP designated providers annually actually goes toward their administrative costs and profit versus the cost of health care services. DOD also incurs other expenses for the USFHP through support contracts, including a $21 million data support contract, and through the management of various aspects of the program. Eliminating this statutorily required program would not only eliminate unnecessary costs and inefficiencies, but would also free up departmental resources that could be better used to manage other aspects of the TRICARE program. Congress should terminate DOD's authority to contract with the USFHP designated providers in a manner consistent with a reasonable transition of affected USFHP enrollees into TRICARE's regional managed care program or other health care programs, as appropriate. DOD confirmed that GAO's factual determinations about the USFHP are correct and agreed that this program is duplicative and results in unnecessary costs and other inefficiencies. DOD reiterated the importance of carefully transitioning USFHP enrollees to other health plans if the USFHP were eliminated.
IRS improved its 2006 filing season performance in important areas that affect large numbers of taxpayers. This continues a trend of improvement since at least 2002. Returns processing has gone smoothly and electronic filing continues to grow, although at a slower rate than in previous years. Taxpayer assistance has improved in the two most commonly used services—toll-free telephones and the Internet Web site. Fewer taxpayers visited IRS's walk-in sites, and more sought assistance at volunteer-staffed sites. From January 1 through April 14, 2006, IRS processed 91.8 million individual income tax returns. Of those returns, 63.4 million were filed electronically (up 2.3 percent) and 28.3 million were filed on paper (down 7.1 percent). According to IRS data and officials, returns processing has gone smoothly so far this filing season. IRS issued 78 million refunds, of which 50 million, or 64 percent, were directly deposited, up 5.4 percent over the same period last year. Direct deposit is faster, more convenient for taxpayers, and less expensive for IRS than mailing paper checks. Because of the volume of tax returns, it is normal for IRS to experience some processing disruptions, although this year, disruptions have not been significant. For example, 13 different tax forms were unavailable for electronic filing until February 1 due to the late hurricane relief legislation, which caused a minor processing delay for some returns. Furthermore, IRS officials said that the new Customer Account Data Engine (CADE), which is intended eventually to replace IRS's antiquated Master File system containing taxpayer records, processed over 6 million returns and disbursed 5.3 million refunds as of April 14, 2006, without disruptions. IRS is reporting that direct deposit refunds and paper check refunds are being issued within 4 and 6 business days, respectively, after tax returns are posted to CADE, which is faster than for returns processed by the Master File system. CADE's growth in future years will directly benefit taxpayers. Not only can it speed up refunds, but it also updates taxpayer account information more quickly than the Master File system. Representatives of the tax preparation industry corroborated IRS's view that the filing season is going smoothly. Groups and organizations that we talked to included the National Association of Enrolled Agents, the American Institute of Certified Public Accountants, and others. In addition, the Treasury Inspector General for Tax Administration (TIGTA) recently testified that thus far it has seen no significant problems during the filing season. The growth of electronic filing is important because it generates savings by reducing the staff years needed for labor-intensive paper processing. Between fiscal years 1999 and 2006, IRS reduced the number of staff years devoted to paper and electronic processing by 1,586, or 34 percent, as shown in figure 1. Electronic filing continues to grow but at a slower rate than in previous years. This year's 2.1 percent rate of growth is less than the average annual growth rate of 5.1 percent for each of the preceding 2 years. According to IRS officials, the slower growth in electronic filing this year is due, in part, to changes in the Free File program, which reduced the number of taxpayers eligible to file electronically for free this year and reduced publicity and advertising by companies involved in that program, and to the termination of the TeleFile program, which eliminated the option for taxpayers to file their returns electronically via telephone.
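The filing-mix figures above reduce to simple percentage arithmetic, sketched below using the counts reported for this filing season. The prior-year volumes are backed out from the stated growth rates for illustration, not taken directly from IRS data.

```python
# Sketch of the filing-mix arithmetic above, using the counts reported
# for January 1 through April 14, 2006. Prior-year volumes are implied
# by the stated growth rates, not taken directly from IRS data.

efile = 63.4e6          # returns filed electronically (up 2.3 percent)
paper = 28.3e6          # returns filed on paper (down 7.1 percent)
total = efile + paper

print(f"Electronic share of returns: {100 * efile / total:.1f}%")

# Implied prior-year volumes under the stated growth rates:
efile_prior = efile / 1.023
paper_prior = paper / (1 - 0.071)
print(f"Implied prior-year e-file volume: {efile_prior/1e6:.1f} million")
print(f"Implied prior-year paper volume:  {paper_prior/1e6:.1f} million")
```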
The Free File program enables taxpayers to file their returns electronically via IRS's Web site. Through IRS's Web site, taxpayers can access the Web sites of the 20 companies that make up the Free File Alliance. The alliance is a consortium of tax preparation companies that agreed to offer free return preparation and electronic filing for taxpayers who meet certain criteria (see app. I for further detail). In an amended agreement with IRS that took effect this year, the Free File Alliance set a $50,000 income limitation on taxpayer participation. This limit, which was absent last year, reduced the number of taxpayers eligible to participate in the program. As of April 13, 2006, IRS processed about 3.5 million Free File returns, a decrease of 24 percent from the same period last year. This decline is inconsistent with IRS's projection that it would receive 6 million tax returns through the Free File program, almost a million more than last year. For 2006, IRS terminated the TeleFile program. IRS expected that eliminating TeleFile would reduce electronic filing, but justified the decision because of declining usage and relatively high costs. The number of taxpayers using the program had been decreasing—from approximately 5.7 million in 1999 to 3.8 million in 2004. IRS estimated the cost per tax return submitted through TeleFile, typically Form 1040EZ, to have been $2.63 versus $1.51 for a return filed on paper, largely due to contractor, telecommunications, and other costs. Given the limitations of IRS's cost accounting system, the validity of these figures is unknown. IRS officials stated that this year's increase in the number of 1040EZ returns filed on paper is due, in part, to the elimination of TeleFile. Through April 14, 2006, the number of 1040EZ returns filed on paper has increased 20 percent from last year. Options for increasing electronic filing, in particular mandated electronic filing, will be discussed in the budget section of this statement. Taxpayers' ability to access IRS's telephone assistors and the accuracy of the answers provided improved compared to previous years. From January 1 through April 15, 2006, IRS answered approximately 31 million phone calls, about a 7 percent decline from the same period last year. The call volume has been less than projected by IRS and less than was assumed when IRS set staffing levels for telephone assistors for the filing season. IRS officials offered several explanations for the unexpected decline in call volume. One explanation is that more taxpayers are using improved tax preparation software, which reduces their need to call IRS. Another explanation is that more taxpayers are getting through to a telephone assistor the first time they call, thus reducing the need for taxpayers to call again. As shown in table 1, the percentage of taxpayers who attempted to reach an assistor and actually got through and received service—referred to as the level of service—is 83.3 percent so far this filing season, compared to 81.7 percent over the same period last year, and greater than IRS's 2006 fiscal year goal of 82 percent. According to IRS officials, one possible explanation for the improvement in access is the decline in overall call volume. When call volume decreases, taxpayers are likely to wait less time to speak with an IRS telephone assistor. As a result, fewer taxpayers would likely hang up, increasing the percentage of taxpayers who get through to an assistor.
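The telephone measures discussed above are ratio metrics computed from call counts. The sketch below shows a simplified version of such calculations; the counts are hypothetical, and IRS's actual level-of-service definition involves additional call-routing detail.

```python
# Simplified illustration of the toll-free telephone measures discussed
# above. The call counts are hypothetical; IRS's actual level-of-service
# definition involves additional call-routing detail.

calls_seeking_assistor = 1_000_000   # hypothetical attempts to reach an assistor
calls_answered = 833_000             # hypothetical calls that received service
calls_abandoned = 97_000             # hypothetical calls abandoned in queue

level_of_service = 100 * calls_answered / calls_seeking_assistor
abandoned_rate = 100 * calls_abandoned / calls_seeking_assistor

print(f"Level of service: {level_of_service:.1f}%")   # about 83.3% in 2006
print(f"Abandoned rate:   {abandoned_rate:.1f}%")     # about 9.7% in 2006
```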
IRS also reported that, so far this filing season, the average speed of answer (the length of time taxpayers wait to get their calls answered) is down 51 seconds from the same time last year to 205 seconds, a decrease of about 20 percent and significantly better than IRS's 2006 fiscal year goal of 300 seconds. IRS also reported that the rate at which taxpayers abandoned their calls to IRS decreased from 12 percent to 9.7 percent compared to the same period last year. Using a statistical sampling process, IRS estimates that the accuracy of telephone assistors' responses to taxpayers' tax law and account questions improved compared to last year. IRS estimates its tax law accuracy rate to be 90 percent, an increase of 1.9 percentage points over the same time period last year, continuing an improvement since 2004. Additionally, IRS estimates the accuracy rate for responses to taxpayers' inquiries about their accounts to be 92.9 percent this year, compared to 91.6 percent over the same period last year, continuing an improvement since 2003. IRS officials attribute these improvements in performance to several factors, including better and more timely performance feedback for telephone assistors, increased assistor experience, better training, and increased use of the Probe and Response Guide, a script used by telephone assistors to understand and respond to tax law questions. Use of IRS's Web site has increased so far this filing season compared to prior years, based on the number of visits and downloads. From January 1 through March 31, IRS's Web site was visited 95 million times by visitors who downloaded 90 million forms and publications. The number of visits reflects a 7.5 percent increase over the same period last year, while the number of forms and publications downloaded has increased by 34 percent. Further, IRS's Web site is performing well. For example, we found IRS's Web site to be readily accessible and easy to navigate. An independent weekly study by Keynote, a company that evaluates Web sites, reported that IRS's Web site has repeatedly ranked second out of 40 government agencies evaluated in terms of average download time. The same study also reported that IRS has repeatedly ranked first among the most commonly accessed government-related Web sites for response time and success rate. In addition, the American Customer Satisfaction Index score for overall customer satisfaction with IRS's Web site increased from 68 to 72 after IRS reconfigured the site. IRS reconfigured its Web site for the 2006 filing season. According to IRS officials, the goal for reconfiguring the Web site was to improve overall customer service through easier navigation and a more effective search function. As a result, the number of Web site searches has decreased by 51 percent, from 108 million during the same period last year to 52.5 million this year. Typically, search functions are used when users fail to find information through links. According to IRS officials, the decrease in the number of searches indicates that users are finding the information that they need faster.
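Because the accuracy rates above are estimated from statistical samples of monitored calls, each estimate carries sampling error. The sketch below shows a standard normal-approximation confidence interval for such a proportion; the sample size is a hypothetical value for illustration, as IRS's sampling design is not described here.

```python
# Illustration of estimating an accuracy rate from a statistical sample
# of monitored calls, as described above. The sample size is hypothetical;
# IRS's actual sampling design is more involved.
import math

sample_size = 2_000        # hypothetical number of monitored calls
correct = 1_800            # hypothetical calls answered accurately

p_hat = correct / sample_size
std_err = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin = 1.96 * std_err    # 95 percent normal-approximation interval

print(f"Estimated accuracy: {100*p_hat:.1f}% "
      f"+/- {100*margin:.1f} percentage points")
```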
IRS also added the following new features to its Web site this year:

Electronic IRS: The Electronic IRS brand reconfigured IRS's Web site and made it easier to locate items, as evidenced by the decline in searches.

Alternative Minimum Tax (AMT) Assistant: Helps taxpayers determine whether they owe the AMT.

Help for Hurricane Victims: A special link that provides victims of the recent hurricanes information on special tax relief and assistance and on how to get help with tax matters.

IRS's Web site continues to include several important features in addition to the Free File program:

Where's My Refund, which allows taxpayers to check on the status of their refunds. As of April 15, 2006, 24 million taxpayers had accessed the Where's My Refund feature to check on the status of their tax refunds, a 17 percent increase from the same period last year; and

Electronic Tax Law Assistance, through which taxpayers can ask IRS general tax law questions via its Web site. From January 1 through April 14, 2006, IRS received 10,160 emails requesting tax law assistance (down over 43 percent compared to last year). As of March 31, 2006, IRS estimated the accuracy rate of its responses to tax law questions submitted via the Web site to be 85 percent, similar to 2005. However, the average time it took IRS to respond to tax law questions submitted via the Web site was 2.2 days, compared to 1.2 days in 2005.

Fewer taxpayers have used IRS's 400 walk-in sites so far in the 2006 filing season compared to the same period in prior years. Staff at walk-in sites provide taxpayers with information about their tax accounts and answer taxpayers' questions within a limited scope of designated tax law topics, such as those related to income, filing status, exemptions, deductions, and related credits. Walk-in site staff also provide need-based tax return preparation assistance, limited to taxpayers meeting certain requirements. As of April 1, 2006, the total number of contacts at IRS's walk-in sites declined by approximately 12 percent compared to last year. The decline thus far this year is consistent with the annual trends in walk-in use shown in figure 2, including IRS's projection for 2006. The declines in the number of taxpayers using IRS's walk-in sites, including for tax return preparation, are also consistent with IRS's strategy to reduce its costly face-to-face assistance by providing taxpayers with additional options, such as IRS's toll-free telephone service, Web site, and numerous volunteer sites. It is unclear, however, whether the declining volume is an indicator of how well IRS is meeting taxpayers' demand for face-to-face assistance. For example, IRS keeps track of the number of taxpayers who enter a walk-in site and take a number to queue for service but then leave the site without receiving service. However, if a taxpayer does not take a number, IRS has no way of counting that taxpayer. IRS officials said the types of services offered at walk-in sites remained constant for most sites from 2005 to 2006. For sites in areas with a high number of natural disaster victims, IRS expanded the types of assistance provided. For example, IRS adjusted the type of tax law questions that it would answer at walk-in sites to include casualty loss and removed income limitations for disaster victims seeking return preparation assistance at walk-in sites.
In contrast to IRS walk-in sites, the number of taxpayers seeking return preparation assistance at approximately 14,000 volunteer sites has increased this year by 8.7 percent, continuing the trend since 2001 (see fig. 2). These sites, often run by community-based organizations and staffed by volunteers who are trained and certified by IRS, do not offer the range of services IRS provides at walk-in sites, but instead focus on preparing tax returns primarily for low-income and elderly taxpayers and operate chiefly during the filing season. As we have previously reported, the shift of taxpayers from walk-in to volunteer sites is important because it has allowed IRS to transfer time-consuming services, such as return preparation, to less costly alternatives that can be more convenient for taxpayers. IRS has used both walk-in and volunteer sites to provide relief efforts for federally designated disaster zones, such as in hurricane-affected areas. IRS developed a Disaster Referral Services Guide and new training materials for employees to better equip them to address disaster-related issues. In addition to the expanded services for disaster victims at IRS walk-in sites noted above, volunteer sites performed outreach within their network of partners by creating training material for tax preparers and by agreeing with two organizations to accept referrals from IRS of disaster victims needing tax return preparation assistance. IRS continues to lack reliable data on the quality of the services provided at walk-in and volunteer sites. As in previous years, TIGTA is conducting an audit of the accuracy of some services provided at walk-in sites, although the results will not be available until after the filing season. However, TIGTA has noted problems with the quality of services provided at IRS walk-in sites in prior reports. We have made recommendations for IRS to improve its quality measurement at walk-in sites. At volunteer sites, IRS is conducting different types of reviews to monitor tax return preparation assistance. According to IRS officials, the results to date show that the quality of service has improved at volunteer sites compared to previous years, but they acknowledge that challenges remain in terms of volunteers' adherence to IRS's procedures and use of IRS materials. As in previous years, TIGTA will conduct limited quality reviews at volunteer sites. Although the results of those reviews are based on a judgmental sample, TIGTA has concluded in the past that, while significant improvements have been made in the oversight of volunteer sites, continued effort is needed to ensure the accuracy of tax return assistance provided. In addition to service from IRS, millions of taxpayers receive services, such as tax return preparation, from paid preparers. About 56 percent of the approximately 130 million individual tax returns filed for tax year 2002 were prepared by a paid preparer, with higher paid preparer usage among taxpayers with more complicated returns. We recently reported, however, that some preparers make serious errors when completing returns. Based on our very limited sample of 19 paid tax preparers, taxpayers who rely on tax preparers to provide them with accurate, complete, and fully compliant tax returns may not get what they pay for. For example, during visits to paid preparers, tax returns prepared for GAO often varied widely from what we determined the returns should and should not include, sometimes with significant consequences.
Their work resulted in unwarranted extra refunds of up to almost $2,000 in 5 instances, while in 2 cases they cost the taxpayer over $1,500. Some of the most serious problems involved paid preparers:

not reporting business income in 10 of 19 cases;

not asking about where a child lived, or ignoring GAO's answer to the question, thereby allowing an ineligible child to be claimed for the Earned Income Tax Credit in 5 out of the 10 applicable cases;

failing to take the most advantageous postsecondary education tax benefit in 3 out of the 9 applicable cases; and

failing to itemize deductions at all or failing to claim all available deductions in 7 out of the 9 applicable cases.

Many of the problems we identified put paid preparers, taxpayers, or both at risk of IRS enforcement actions. According to IRS officials, paid preparers and taxpayers risk enforcement action by filing a tax return that includes the types of misstatements or omissions we reported. According to the officials, if IRS were to uncover problems with the preparation of real tax returns similar to several that we found, the preparers would be subject to civil sanctions. If an erroneous return was prepared, in addition to paying the correct tax due and any related late payment interest, the taxpayer may also be assessed a penalty, depending on the facts and circumstances of each situation, according to IRS officials. For example, if taxpayers substantially understate income, overstate deductions, or provide other incorrect information resulting in decreased tax or improperly high refunds, they may be assessed an accuracy-related penalty. The penalty could be assessed for any failure to comply with the tax laws, including the failure to report self-employment income (further discussion of the consequences of the errors of paid tax preparers is provided in the discussion of the tax gap). IRS's fiscal year 2007 budget request is a small decrease compared to 2006 enacted levels after adjusting for expected inflation. It proposes to reduce overall staffing levels, as well as staffing levels for taxpayer service and enforcement activities, while maintaining or improving taxpayer service and enforcement. As it has in prior years, IRS has identified some savings, but additional opportunities exist for enhancing savings. IRS's proposed fiscal year 2007 budget is $11 billion (a 1.6 percent increase), but after adjusting for expected inflation, it reflects a slight decrease from last year's enacted budget. The $11 billion includes $417 million from new and existing user fees and reimbursable agreements with other federal agencies. The 2007 budget request for IRS's appropriation accounts is shown in table 2 (see app. II for more details). The real decrease in the proposed budget can be seen in staffing. IRS proposes to fund 95,476 full-time equivalents (FTE) in fiscal year 2007, down over 2 percent from the 97,754 FTEs enacted for fiscal year 2006 (see table 5 in app. II for comparisons of enacted FTE levels for fiscal years 2002 through 2007). Actual FTEs tend to be lower than enacted FTEs, in part because of how IRS absorbs unbudgeted costs (see table 6 in app. II for actual FTEs). The decrease in FTEs may be greater than shown in IRS's fiscal year 2007 budget request. Every year, agencies, including IRS, are expected to absorb some costs that are not included in their budget requests.
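The distinction drawn above between the nominal 1.6 percent increase and a real (inflation-adjusted) decrease depends on the assumed inflation rate. The sketch below makes the arithmetic explicit; the 2.2 percent inflation figure is an illustrative assumption, not the rate used in the budget request.

```python
# Nominal versus inflation-adjusted (real) budget change, as discussed
# above. The inflation rate here is an illustrative assumption, not the
# figure used in the budget request.

request_2007 = 11.0e9
enacted_2006 = request_2007 / 1.016   # implied by the stated 1.6 percent increase
assumed_inflation = 0.022             # hypothetical expected inflation

nominal_change = request_2007 / enacted_2006 - 1
real_change = (request_2007 / (1 + assumed_inflation)) / enacted_2006 - 1

print(f"Nominal change: {100*nominal_change:+.1f}%")   # +1.6%
print(f"Real change:    {100*real_change:+.1f}%")      # slightly negative
```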
For fiscal year 2007, IRS officials currently anticipate having to absorb over $117 million in costs, including about $41 million for homeland security-related controls over physical access to government facilities. Absorbing such costs reduces the actual number of FTEs that IRS can support. For example, for fiscal year 2005, the enacted level of FTEs was 96,435 but the actual level was 94,282. IRS is requesting $4.2 billion for Processing, Assistance, and Management (PAM), including some user fees, which is funding primarily spent on providing service to taxpayers. The amount requested is about a 1.6 percent increase over fiscal year 2006 enacted levels, but is a slight decrease after adjusting for expected inflation. This funding level translates into reduced staffing, down over 4 percent from an enacted level of 38,796 FTEs in fiscal year 2006 to 37,126 proposed FTEs in fiscal year 2007. Since fiscal year 2002, FTEs devoted to PAM have declined over 15 percent from an enacted level of 43,866 FTEs. Despite the proposed inflation-adjusted decrease in funding in 2007, IRS is planning to maintain or improve taxpayer services. For every one of the major taxpayer services listed in the budget, 2007 planned performance goals are higher than or equal to 2006 performance goals. These services include telephone assistance and refund issuance. IRS is requesting $4.8 billion for Tax Law Enforcement (TLE). The 2007 budget request proposes an overall decrease in enforcement FTEs, down over 2 percent to a proposed 49,479 FTEs from last year's enacted level of 50,559 FTEs. For its three main categories of skilled enforcement staff, IRS is proposing a marginal increase in staffing of 0.2 percent (see fig. 3). For special agents (those who perform criminal investigations), the increase is 1.7 percent. For the other two categories—revenue agents (those who examine complex returns) and revenue officers (those who perform field collection work)—IRS is proposing to keep the number of staff the same as in 2006. Despite keeping skilled enforcement staff virtually unchanged, IRS is proposing to maintain or increase its major enforcement activities. For all the major enforcement activities listed in the budget, IRS is establishing goals in 2007 that are higher than or equal to 2006 planned performance goals. Major enforcement activities include individual taxpayer examinations, collection coverage, and criminal investigations completed. IRS officials anticipate increased revenue collected and other performance improvements as a result of using data from IRS's most current compliance research effort, known as the National Research Program (NRP). IRS is requesting about $1.6 billion for Information Systems (IS) in fiscal year 2007, which is intended to fund information technology (IT) staff and related costs for activities such as information security and the maintenance and operations of its current tax administration systems. Although the number of FTEs proposed in 2007 is up when enacted FTEs are considered, it is virtually the same as the operating level currently assumed in 2006 (see app. II for more details). In 2002, we reported that the agency did not develop its fiscal year 2003 IS operations and maintenance budget request in accordance with the investment management approach used by leading organizations. We recommended that IRS prepare its future budget requests in accordance with these best practices. To address our recommendation, IRS agreed to take a variety of actions, which it has made progress in implementing.
For example, IRS planned to develop a capital planning guide to implement processes for capital planning and investment control, budget formulation and execution, business case development, and project prioritization. In August 2005, IRS issued the initial version of its IT Capital Planning and Investment Control (CPIC) Process Guide, which (1) provides executives with the framework within which to select, control, evaluate, and maintain the portfolio of IT investments to best meet IRS business goals and (2) defines the governance process that integrates the agency's IT investments with the strategic planning, budgeting, and procurement processes. According to IRS officials and documentation, the agency formulated its prioritized fiscal year 2007 IT portfolio and associated budget request, including operations and maintenance requirements, in accordance with this CPIC Process Guide. We will continue to monitor the implementation of IRS's CPIC process as its IT investment management process matures. In addition, IRS stated that it planned to develop an activity-based cost module to plan, project, and report costs for business tasks/activities funded by the IS budget. During fiscal year 2005, as part of the first release of the Integrated Financial System (IFS), IRS implemented a cost module that is potentially capable of allocating costs by activity. However, agency officials stated that they needed to accumulate 3 years of actual costs to have the historical cost data necessary to provide a basis for meaningful future budget estimates. Since then, according to the Office of the Chief Financial Officer, IRS has (1) populated the cost module with all actual fiscal year 2005 expenses; (2) identified the data needed from IFS to support its budget requests; and (3) developed a system to capture, test, and analyze the cost data to devise a standard methodology to provide the necessary data from the cost module. IRS still expects to have the requisite 3 years of historical cost data available in time to support development of the fiscal year 2010 budget request. Although IRS has made progress in implementing best practices in developing its IS operations and maintenance budget, until IRS completes the actions necessary to fully implement the activity-based cost module, the agency will not be able to ensure that its request is adequately supported. The Business Systems Modernization (BSM) program is a high-risk, highly complex effort that involves developing and delivering a new set of information systems intended to replace the agency's aging tax processing and business systems. The program is critical to supporting IRS's taxpayer service and enforcement goals. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to provide technology solutions to help reduce the backlog of collections cases. It also helps IRS considerably in providing the reliable and timely financial management information needed to account for the nation's largest revenue stream and to better enable the agency to determine and justify its resource allocation decisions and budget requests. IRS's fiscal year 2007 budget request of $167.3 million for the BSM program reflects a reduction of about 15 percent (and even greater when adjusted for expected inflation), or about $30 million, from the enacted fiscal year 2006 budget of $197 million. Over the past year, IRS has made further progress in implementing BSM, although some key projects did not meet short-term cost and schedule commitments.
During 2005 and the beginning of 2006, IRS deployed additional releases of several modernized systems that have delivered benefits to taxpayers and the agency, including CADE, e-Services (a new Web portal and electronic services for paid tax preparers), and Modernized e-File (a new electronic filing system). While three BSM project releases were delivered within the cost and/or schedule commitments presented in the fiscal year 2005 expenditure plan, others experienced cost increases or schedule delays. For example, two IFS and Modernized e-File project release segments experienced cost increases of 93 percent and 29 percent, respectively. As we have previously reported, the BSM program has had a history of cost increases and schedule delays that have been due, at least in part, to deficiencies in various management controls and capabilities that have not yet been fully corrected. IRS is in the process of implementing our prior recommendations to correct these deficiencies. IRS has identified significant risks and issues that confront future planned system deliveries. For example, according to IRS, schedule delays and contention for key resources between multiple releases of CADE necessitated the deferral of some functionality. This deferral, in conjunction with additional recently reported risks and issues, may negatively impact the cost, schedule, and functionality for future CADE releases. The agency recognizes the potential impact of these project risks and has developed mitigation strategies to address them. We will, however, continue to monitor the various risks IRS identifies and the agency’s strategies to address them and will report any concerns. IRS also has made additional progress in addressing high-priority BSM program improvement initiatives during the past year, including initiatives related to shifting the role of systems integrator from the prime contractor to IRS and establishing requirements management standards—an initiative on which we recently issued a report to you, Mr. Chairman, and made a number of recommendations for improvement. IRS’s program improvement process appears to be an effective means of assessing, prioritizing, and addressing BSM issues and challenges. However, much more work remains for the agency to fully address these issues and challenges. In addition, in response to our prior recommendation, IRS is developing a new Modernization Vision and Strategy to address BSM program changes and provide a modernization roadmap. According to the Associate Chief Information Officer for BSM, the agency’s new strategy focuses on promoting investments that provide value in smaller, incremental releases that are delivered more frequently, with the goal of increasing business value. IRS is currently finalizing a high-level vision and strategy as well as a more detailed 5-year plan for the BSM program. We believe these actions represent sound steps toward addressing our prior recommendation to fully revisit the vision and strategy and develop a new set of long-term goals, strategies, and plans consistent with the budgetary outlook and with IRS’s management capabilities. Further, this strategy is important because it will describe how and when IRS will receive the full benefits from its modernization efforts, such as when CADE will be able to replace the Individual Master File. 
While the requested fiscal year 2007 BSM budget will allow IRS to continue the development and deployment of the CADE, Modernized e-File, and Filing and Payment Compliance (F&PC) projects, the proposed reduced funding level would likely affect the agency’s ability to deliver the functionality planned for the fiscal year and could result in project delays and/or scope reductions. This could, in turn, impact the long-term pace and cost of modernizing tax systems and of ultimately improving taxpayer service and strengthening enforcement. For example, according to IRS documents, the agency had planned to spend $85 million in fiscal year 2007 to develop and deploy additional CADE releases that would enable the system to process up to 50 million individual tax returns by the 2008 filing season and issue associated refunds faster. However, with a proposed budget of $58.5 million—over 30 percent less than anticipated—IRS would likely have to scale back its planned near-term work on this project. In addition, the reductions to the planned budgets for the Modernized e-File and F&PC projects may also result in IRS having to redefine the scope and/or reassess schedule commitments for future project releases. The proposed BSM budget reduction would also significantly reduce the amount allotted to program management reserve by about 82 percent (from $13 million in fiscal year 2006 to $2.3 million in fiscal year 2007). If BSM projects have future cost overruns that cannot be covered by the depleted reserve, this reduction could result in increased budget requests in future years or delays in planned future activities. While the BSM program still faces challenges, IRS has recently made progress in delivering benefits and addressing project and program-level risks and issues. Reducing BSM funds at a time when benefits to taxpayers and the agency are being delivered could adversely impact the momentum gained from recent progress and result in delays in the delivery of future benefits. However, until IRS addresses our prior recommendation by clearly defining its future goals for the BSM program as well as the impact of various funding scenarios on meeting these goals in its new Modernization Vision and Strategy, the long-term impact of the proposed budget reduction is unclear. In its 2007 budget request, IRS identified savings as it has done in prior years and plans to redirect some of those savings to front-line taxpayer service and enforcement activities. IRS is proposing to save over $121 million and 1,424 FTEs by, for example, automating the process of providing an individual taxpayer identification number to those taxpayers ineligible for a Social Security number and improving data collection techniques and work processes for enforcement activities through increased financial reporting requirements and scanning and imaging techniques. IRS’s history of realizing savings proposed in past budget requests provides some confidence that the agency will be able to achieve savings in fiscal year 2007. For example, IRS reported it realized 88 percent of the anticipated dollar savings and 86 percent of the anticipated staff savings identified in the fiscal year 2004 budget request. IRS also reported exceeding the savings targets in the fiscal year 2005 budget request (see app. III). In addition to the areas identified by IRS in its budget request, there may be additional opportunities for efficiency gains.
Increasing electronic filing: In an era of tight budgets, continued growth in electronic filing may be necessary to help fund future performance improvements. One proposal for continuing to increase electronic filing is additional use of electronic filing mandates. Currently, IRS mandates electronic filing for large corporations. The 2007 budget request proposes a legislative change that would expand its authority to require electronic filing for businesses. Moreover, 12 states now mandate electronic filing for certain classes of tax preparers (see app. IV for more information on state mandates). As we have reported, although there are costs and burdens likely to be associated with electronic filing mandates for paid tax preparers and taxpayers, state mandates have generated significant increases in electronic filing. IRS has an electronic filing strategy, which the agency is updating.

Changing the menu of taxpayer services: IRS currently lacks a comprehensive strategy explaining how its various taxpayer services (including its telephone, walk-in, volunteer, and Web site assistance) will collectively meet taxpayer needs. In response to a congressional directive, IRS is developing such a strategy. The strategy is important because some taxpayers may not be well served by the current service offerings. IRS’s attempts to reduce some taxpayer services, namely reducing the hours of telephone operations and closing some walk-in sites, have met with resistance from the Congress. Although congressional directives to study the impact of IRS’s actions exist, we still believe there may be opportunities to adjust IRS’s menu of services to reduce costs without affecting IRS’s ability to meet taxpayers’ needs.

Consolidating telephone call sites: IRS operates 25 call sites throughout the country. Consistent with earlier plans, IRS closed two of its smallest call sites—Chicago and Houston—in March 2006 to realize savings in its toll-free telephone operations. Also, IRS has gained efficiencies from using a centralized call router located in Atlanta. As a result, there are currently more than 850 workstations that are not being used; consequently, IRS may have the potential to close several additional call sites. Consolidations would not affect telephone service and would be invisible from the taxpayer’s perspective.

Managing a federal agency as large and complex as IRS requires managers to constantly weigh the relative costs and benefits of different approaches to achieving the goals mandated by the Congress. Management is called upon to make important long-term strategic as well as daily operational decisions about how to make the most effective use of the limited resources at its disposal. As constraints on available resources increase, these decisions become correspondingly more challenging and important. In order to rise to this challenge, management needs to have current and accurate information upon which to base its decisions, and to enable it to monitor the effectiveness of actions taken over time so that appropriate adjustments can be made as conditions change. In its ongoing effort to make such increasingly difficult resource allocation decisions and defend those decisions before the Congress, IRS has long been hampered by a lack of current and accurate information concerning the costs of the various options being considered.
Instead, management often has relied on a combination of the limited existing cost information; the results of special analyses initiated to establish the full cost of a specific, narrowly defined task or item; and estimates based on the best judgment of experienced staff. This has impaired IRS’s ability to properly decide which, if any, of the options at hand are worth the cost relative to the expected benefits. For example, accurate and timely cost information may help IRS consider changes in the menu of taxpayer services that it provides by identifying and assessing the relative costs, benefits, and risks of specific projects. Without reliable cost information, IRS’s ability to make such difficult choices in an informed manner is seriously impaired and IRS cannot prepare cost-based performance measures to assist in measuring the effectiveness of its programs over time. Further, IRS does not have the capability to develop reliable information on the return on investment for each category of taxpayer service and enforcement. IRS lacks reliable information on both the return from services (the additional revenue collected by helping taxpayers understand their tax obligations) and the investment or cost of the services. While developing return on investment information is difficult, the cost component of that equation may be the least complex to develop. Having such cost information is a building block for developing return on investment estimates. For its enforcement programs, IRS has developed a rough measure of return on investment in terms of tax revenue that is directly assessed from uncovering noncompliance. Continuing to develop return on investment measures could help officials make more informed decisions about allocating resources. Even without return on investment information, cost information can help IRS determine if, for example, IRS should change the menu of services provided. As discussed in the BSM section, in fiscal year 2005, IRS implemented a cost accounting module as part of IFS. However, while this module has much potential and has begun accumulating cost information, IRS has not yet determined the full range of its cost information needs or how best to tailor the capabilities of this module to serve those needs. Also, IRS does not have an integrated workload management system that would provide the cost module with detailed allocation of personnel cost information. In addition, as noted in developing its IS budget, because it generally takes several years of historical cost information to support meaningful estimates and projections, IRS cannot yet rely on IFS as a significant planning tool. It will likely require several years, implementation of additional components of IFS, and integration of IFS with IRS’s tax administration activities before the full potential of IFS’s cost accounting module will be realized. Furthermore, IRS’s fiscal year 2007 BSM budget request does not include funding for additional releases of IFS. In the interim, IRS decision making will continue to be hampered by inadequate underlying cost information. For the first time, IRS’s budget request sets long-term goals aimed at reducing the tax gap, although IRS does not have a data-based plan for achieving the goals. However, because the tax gap has persisted, reducing it requires solutions that go beyond funding and staffing for IRS. IRS established two agencywide, long-term performance goals, as shown in table 3.
IRS plans to improve voluntary compliance from 83 percent in 2005 to 85 percent by 2009 and reduce the number of taxpayers who think it is acceptable to cheat on their taxes from 10 percent in 2005 to less than 9 percent in 2010. According to IRS, these are the first in a series of quantitative goals that will link to its three strategic goals—improve taxpayer service, enhance tax law enforcement, and modernize IRS through technology and processes. These goals will be challenging to meet because, for three decades, IRS has consistently reported a persistent, relatively stable tax gap. Although IRS has made a number of changes in its methodologies for measuring the tax gap, which makes comparisons difficult, the voluntary compliance rate that underpins the gap has tended to range from around 81 percent to around 84 percent regardless of the methodology used. Because of a lack of quantitative estimates of how changes to its service and enforcement programs affect compliance, IRS is unable to show in a data-based plan how it will use those programs to reach the two long-term goals shown in table 3. If IRS could quantify the impact of its service and enforcement programs on the compliance rate or attitudes towards cheating, it could use the information to show the kinds of changes to the programs needed to achieve the long-term goals and how best to direct resources towards achieving those goals. Unfortunately, quantifying the impact of IRS’s service and enforcement programs on compliance or cheating is very challenging. The type of data needed to make such a link does not currently exist and may not be easy to collect. Lacking such quantitative estimates, IRS must take a more qualitative approach in its plans for increasing compliance, which would likely also involve changing attitudes towards cheating. IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. We recently reported that IRS has taken a number of steps that may improve its ability to reduce the tax gap. Favorable trends in staffing of IRS enforcement personnel; examinations performed through correspondence, as opposed to more complex face-to-face examinations; and the use of some enforcement sanctions such as liens and levies are encouraging. Also, IRS has made progress with respect to abusive tax shelters through a number of initiatives and recent settlement offers that have resulted in billions of dollars in collected taxes, interest, and penalties. Finally, IRS has continually improved taxpayer service by increasing, for example, the accuracy of responses to tax law questions. IRS has not quantified the effect that this overall approach and the 2007 budget proposal will have on voluntary compliance. Therefore, the Congress will have to rely on the IRS Commissioner for qualitative explanations of why, in his judgment, IRS’s mix of taxpayer service and enforcement and overall approach for reducing the tax gap, including the 2007 budget proposal, will be sufficient to start IRS on a path towards achieving its long-term goals. More specifically, such explanations could include a clear statement of which service and enforcement programs have priorities for expansion because they are expected to contribute the most to increasing the compliance rate and the evidence that supports that judgment. In addition, IRS lacks a plan for measuring progress towards one goal—improving voluntary compliance.
IRS plans to measure progress towards the second goal—reducing the percentage of taxpayers who think it is acceptable to cheat—via the IRS Oversight Board’s annual Taxpayer Attitude Survey. However, IRS recently estimated voluntary compliance as part of the NRP study, which reviewed the compliance of a random sample of individual taxpayers and used those results to estimate compliance for the population of all taxpayers. The study took several years to plan and execute. In addition to providing an estimate of the compliance rate, the study’s results will be used to better target IRS’s audits of potentially noncompliant taxpayers. Better targeting reduces the burden on taxpayers because IRS is better able to avoid auditing compliant taxpayers. At this time, however, IRS has not made plans to repeat the study in time to measure compliance by 2009. Furthermore, doing compliance studies once every few years does not give IRS or others information about what is happening in the intervening years. Estimating the compliance rate annually could provide information that would enable IRS management to adjust plans as necessary to help achieve the goal in 2009. One option that would not increase the cost of estimating compliance would be to use a rolling sample. IRS Oversight Board officials and we agree that instead of sampling, for example, once every 5 years, one-fifth of the sample could be collected every year. The total sample could include 5 years’ worth of data—with each passing year, the oldest year would be dropped from the sample and the latest year added. The availability of current research data would allow IRS to more effectively focus its service and compliance efforts. For years, we have reported that tax law enforcement is a high-risk area, in part because of the size of the gross estimated tax gap, which IRS most recently estimated to be $345 billion for tax year 2001. IRS estimated it would recover around $55 billion through late payments and enforcement revenue, resulting in a net tax gap of around $290 billion. Reducing the tax gap would yield significant revenue; even modest progress, such as a 1 percent reduction, would likely yield nearly $3 billion annually. In recent years, IRS reported increases in enforcement revenue—revenue brought in as a result of IRS taking enforcement action. Between fiscal years 2003 and 2005, IRS reported that enforcement revenue grew from $37.6 billion to $47.3 billion, with a level of $48.1 billion estimated for 2006. However, the voluntary compliance rate has persisted at a relatively stable level. Further, GAO recently reported that tax returns prepared by paid tax preparers often contained errors such as unwarranted extra refunds and underreported income. These findings are consistent with NRP data, which indicate that tax returns prepared by paid preparers contained a significant level of errors. These errors, whether they are the fault of the preparer or the result of taxpayers providing incomplete or inaccurate information, contribute to the tax gap. We have reported that significant reductions in the tax gap will likely require exploring new and innovative solutions.
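A minimal sketch, in Python, of the rolling-sample design described above. The annual subsample size is a hypothetical placeholder; actual NRP samples differ in size and stratification:

```python
from collections import deque

# Rolling 5-year compliance sample: one-fifth of the full sample is
# collected each year, so once 5 years accumulate, each new year's
# subsample displaces the oldest one.
ANNUAL_SUBSAMPLE = 9_000   # hypothetical returns examined per year
window = deque(maxlen=5)   # a full deque drops its oldest entry automatically

for year in range(2006, 2013):
    window.append((year, ANNUAL_SUBSAMPLE))
    total = sum(n for _, n in window)
    print(f"{year}: estimate based on {window[0][0]}-{window[-1][0]}, {total:,} returns")
```

Under this design, a current compliance estimate is available every year at roughly the same total sample size as a once-every-5-years study.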
Such new and innovative solutions, which may not require significant additional IRS resources but are nonetheless difficult to achieve, include simplifying the tax code to make it easier for individuals and businesses to understand and comply with their tax obligations; increasing tax withholding for income currently not subject to withholding; improving information reporting; and leveraging technology to improve IRS’s capacity to receive and process tax returns. IRS’s 2007 budget request includes five new legislative proposals to address some of these solutions to reduce the tax gap, along with a proposal to study independent contractor compliance that would not require additional resources. In recent testimony, the IRS Commissioner stated that the amount of enforcement revenue IRS expects from the legislative proposals will be $3.6 billion over the next 10 years (about 0.1 percent of the tax gap). However, the proposals should also increase revenue voluntarily paid without any IRS enforcement actions. The amount of that revenue is uncertain. The IRS Commissioner recognizes the implications of the tax gap and states in the budget that addressing it is a top priority. Although IRS’s 2007 budget request does not propose allocating IRS resources to new initiatives to reduce the tax gap, according to IRS officials, they plan to continue initiatives identified in prior budgets. For example, IRS has two ongoing BSM projects—F&PC and Modernized e-File—which, according to IRS’s Associate Chief Information Officer for BSM, could help reduce the tax gap. F&PC is expected to increase IRS’s capacity to resolve the growing backlog of delinquent taxpayer cases and increase collections, while Modernized e-File is expected to help make it easier for IRS to process tax returns, look for irregularities, and track down unpaid taxes. The budget request states that the administration will study the standards used to distinguish between employees and independent contractors for purposes of paying and withholding income taxes. We have long supported efforts aimed at improving independent contractor compliance. Past IRS data have shown that independent contractors report 97 percent of the income that is reported on information returns to IRS, while contractors that do not receive these information returns report only 83 percent of income. We have also identified other options for improving information reporting by independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to separately report on their tax returns the total amount of payments to independent contractors. We previously reported that clarifying the definition of independent contractors and extending reporting requirements for those contractors could possibly increase tax revenue by billions of dollars. Two of the legislative proposals call for more information reporting on payment card transactions from certain businesses and on payments by federal, state, and local governments to businesses. Information reporting has been shown to significantly reduce noncompliance. Although such reporting imposes costs and burdens on the businesses that implement it, it offers a way to significantly increase voluntary compliance without increasing IRS’s budget. Mr. Chairman, this completes my prepared statement.
I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or [email protected], or David A. Powner, Director, Information Technology Management Issues, at (202) 512-9296 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Joanna Stamatiades, Assistant Director; Amanda Arhontas; Paula Braun; Terry Draver; Paul Foderaro; Chuck Fox; Tim Hopkins; Kathryn Horan; Hillary Loeffler; Sabine Paul; Cheryl Peterson; Neil Pinney; Steve Sebastian; and Tina Younger. In 2002, the Internal Revenue Service (IRS) entered into a 3-year agreement with the Free File Alliance, a consortium of 20 tax preparation companies, to provide free electronic filing to taxpayers who access any of the companies via a link on IRS’s Web site. The 2002 Free File Agreement stated that, as part of the agreement, IRS would not compete with the Consortium in providing free, online tax return preparation and filing services to taxpayers. IRS and the Consortium amended the agreement in 2005. Key differences between the two agreements are a new income limitation of $50,000 and new language in the amendment stating that Alliance members must disclose early on whether state tax return services are available and, if so, whether a fee will be charged for such services, and must provide the necessary support to accomplish a customer satisfaction survey. The amendment also added language pertaining to the marketing and offering of Refund Anticipation Loans (RALs), whereby no offer of free return preparation and filing of an electronic return in the free file program shall be conditioned on the purchase of a RAL, and RALs will be offered with clear language indicating, for example, that RALs are loans, not a faster way of receiving an IRS refund; must be repaid even if the IRS does not issue a full refund; and are short-term loans whose interest rates may be higher than those of other forms of credit that customers may wish to consider using. RALs may be offered but not promoted. IRS tests each Consortium member’s software to ensure it is in accordance with the Free File provisions, including those cited previously, before allowing a link to IRS’s Web site. In addition, IRS officials monitor complaints about the Free File program received via IRS.gov, including allegations regarding false, deceptive, or misleading information or advertising. While IRS does not track the number of complaints it receives, according to IRS officials, most of the complaints received thus far were a result of the taxpayer either not carefully reading or following instructions, or incorrectly entering information. GAO conducted limited testing of the Free File program and found that the Consortium members were complying with the terms outlined in the amended Free File agreement pertaining to RALs. The amended Free File agreement contains provisions that enable IRS to monitor taxpayer participation beginning in the 2006 filing season, unlike prior years, when Free File Alliance members self-reported filing figures. IRS also tracks the number of free file users who accept any financial products, such as RALs. As of April 17, IRS reported that the filers of 207,814 free file returns had accepted financial products. This represents about 5 percent of all returns filed through the Free File program.
The number of taxpayers using free file to electronically file their individual income tax returns has increased steadily from 2.8 million in 2003, to 3.5 million in 2004, to 5.1 million in 2005. The substantial growth between 2004 and 2005 was due, in part, to several Consortium members offering free filing to all taxpayers through the free file program regardless of their income in 2005. However, according to IRS officials, the lack of an income limitation created conflict among Consortium members because it put pressure on all Alliance members to offer free service, which may not have been economically feasible for some, threatening competition if members were to drop out of the Alliance. IRS projected that 6.1 million taxpayers would use free file in 2006. However, this projection may be optimistic, because between January 1 and April 13, IRS reported receiving only 3.5 million free file returns compared to 4.6 million during the same period last year, a decline of 24 percent. According to IRS officials, this decline is due in part to decreased press attention, decreased advertising by the participating companies, and the income limitation. The income limitation provides coverage to 70 percent of the nation’s taxpayers, or more than 92 million people. This coverage includes taxpayers with an adjusted gross income of $50,000 or less. For fiscal year 2007, the Internal Revenue Service (IRS) has requested $10.7 billion in its appropriation accounts. This request consists of $10.6 billion in direct appropriations and $135 million in revenue from new user fees, which IRS will commit to taxpayer service activities in its Processing, Assistance, and Management (PAM), Tax Law Enforcement (TLE), and Information System (IS) accounts. In addition, IRS is projecting to collect and use $282 million from existing user fees and reimbursable agreements with states and other federal agencies. This brings IRS’s proposed fiscal year 2007 budget to approximately $11 billion (a 1.6 percent increase over fiscal year 2006). After adjusting for expected inflation, IRS’s $11 billion budget request reflects a slight decrease from last year’s enacted budget. IRS’s enacted budgets for its appropriation accounts from fiscal years 2002 through 2007 are shown in table 4. IRS’s enacted budget has increased almost 8 percent since fiscal year 2002. By far, the biggest percentage increase has been in TLE—almost 21 percent—and is reflective of the shift in resources devoted to TLE from PAM during this period. The biggest percentage decrease was in the Business Systems Modernization (BSM) program, down almost 58 percent. Tables 5 and 6 show IRS’s enacted and actual full-time equivalents (FTEs) for fiscal years 2002 through 2007. Overall, actual FTEs tend to be lower than enacted FTEs due in part to the way IRS funds its unbudgeted costs. When both enacted and actual FTEs are considered, FTEs for PAM have steadily decreased and, for the most part, FTEs for TLE have increased since fiscal year 2002. However, steady trends are not apparent when comparing enacted and actual FTEs in IRS’s IS account. For example, when enacted FTEs are considered, IS staffing appears to fluctuate up and down between fiscal years 2002 through 2007; yet, when actual FTEs are considered, IS staffing decreased from fiscal year 2002 through 2005 and increased from fiscal years 2005 to 2006. IRS officials attribute these fluctuations in FTEs to reorganizations and other factors.
Tables 5 and 6 also show significant differences in percentage changes between enacted and actual FTEs in some of IRS’s appropriations accounts from fiscal years 2006 to 2007. The enacted level of FTEs is the number IRS projected it could support given the level of funding the Congress enacted. According to IRS officials, enacted levels tend to be overstated compared to actual FTEs for several reasons. First, IRS, like most federal agencies, does not receive its budgets when expected and cannot fill all positions. Also, as the costs of maintaining current FTE levels increase annually, IRS is not able to realize all of the FTEs it projects to fund with the appropriations the Congress enacts. In its fiscal year 2006 budget request, IRS showed its budget distributed by taxpayer services and enforcement, including IS funding for those areas, because the agency’s current appropriation accounts are not divided clearly between taxpayer service and enforcement. As table 7 shows, funding for enforcement increased 15 percent between fiscal years 2004 and 2007 to $6.96 billion, while funding for taxpayer service declined over 3 percent to almost $3.6 billion. In its 2007 budget request, the Internal Revenue Service (IRS) is proposing to save over $121 million and 1,424 full-time equivalents (FTEs) and reinvest over $12 million and 11 FTEs. IRS’s record of achieving prior year savings and reinvestments, as shown in table 8, gives us a basis to believe that IRS will achieve most, if not all, of these savings. For example, IRS reported it realized 88 percent of its anticipated budget savings and 86 percent of its anticipated staff savings for savings identified in its fiscal year 2004 budget request, and IRS reported exceeding savings targets in fiscal year 2005. Of the 50 states, 12 have electronic filing mandates for tax preparers in effect for the 2006 filing season (see fig. 4). The mandates differ in their implementation dates and schedules, thresholds for filing, and penalties. The differences between mandates may affect the magnitude of electronic filing increases in each state. We recently reported that state mandates encourage electronic filing of federal tax returns and recommended that IRS develop better information about the costs to paid tax preparers and taxpayers of mandatory electronic filing of tax returns for certain categories of tax preparers. These mandates require tax practitioners who meet certain criteria, such as filing 100 individual state tax returns or more, to file individual state returns electronically. Between tax years 2001 and 2004, electronic filing grew in the 9 states with mandates from an average of 36.7 percent to 56.8 percent, an increase of over 20 percentage points, compared to an increase of 14 percentage points for the 41 states without mandates over the same time period. We expect this trend to continue as 3 additional states—New York, Utah, and Connecticut—implemented mandates in time for the 2006 filing season. Of these 3 states, New York may have the most to gain because it currently has the lowest electronic filing rate, with fewer than 38 percent of its nearly 9 million federal individual income tax returns electronically filed last year.
Tax Administration: IRS Improved Some Filing Season Services, but Long-Term Goals Would Help Manage Strategic Trade-offs, GAO-06-51. Washington, D.C.: November 14, 2005.
Tax Administration: IRS Improved Performance in the 2004 Filing Season, but Better Data on the Quality of Some Services Are Needed, GAO-05-67. Washington, D.C.: November 10, 2004.
Tax Administration: IRS’s 2003 Filing Season Performance Showed Improvements, GAO-04-84. Washington, D.C.: October 31, 2003.
IRS’s 2002 Tax Filing Season: Returns and Refunds Processed Smoothly; Quality of Assistance Improved, GAO-03-314. Washington, D.C.: December 20, 2002.
Internal Revenue Service: Assessment of Fiscal Year 2006 Budget Request, GAO-05-566. Washington, D.C.: April 27, 2005.
Internal Revenue Service: Assessment of Fiscal Year 2006 Budget Request and Interim Results of the 2005 Filing Season, GAO-05-416T. Washington, D.C.: April 14, 2005.
Internal Revenue Service: Assessment of Fiscal Year 2005 Budget Request and 2004 Filing Season Performance, GAO-04-560T. Washington, D.C.: March 30, 2004.
Tax Gap: Making Significant Progress in Improving Tax Compliance Rests on Enhancing Current IRS Techniques and Adopting New Legislative Actions, GAO-06-453T. Washington, D.C.: February 15, 2006.
Tax Gap: Multiple Strategies, Better Compliance Data, and Long-Term Goals Are Needed to Improve Taxpayer Compliance, GAO-06-208T. Washington, D.C.: October 26, 2005.
Tax Compliance: Reducing the Tax Gap Can Contribute to Fiscal Sustainability but Will Require a Variety of Strategies, GAO-05-527T. Washington, D.C.: April 14, 2005.
Taxpayer Information: Data Sharing and Analysis May Enhance Tax Compliance and Improve Immigration Eligibility Decisions, GAO-04-972T. Washington, D.C.: July 21, 2004.
Compliance and Collection: Challenges for IRS in Reversing Trends and Implementing New Initiatives, GAO-03-732T. Washington, D.C.: May 7, 2003.
Financial Audit: IRS’s Fiscal Years 2005 and 2004 Financial Statements, GAO-06-137. Washington, D.C.: November 10, 2005.
Internal Revenue Service: Status of Recommendations from Financial Audits and Related Financial Management Reports, GAO-05-393. Washington, D.C.: April 29, 2005.
Financial Audit: IRS’s Fiscal Years 2004 and 2003 Financial Statements, GAO-05-103. Washington, D.C.: November 10, 2004.
Internal Revenue Service: Status of Recommendations from Financial Audits and Related Financial Management Reports, GAO-04-523. Washington, D.C.: April 28, 2004.
Financial Audit: IRS’s Fiscal Years 2003 and 2002 Financial Statements, GAO-04-126. Washington, D.C.: November 13, 2003.
Business Systems Modernization: IRS Needs to Complete Recent Efforts to Develop Policies and Procedures to Guide Requirements Development and Management, GAO-06-310. Washington, D.C.: March 20, 2006.
Business Systems Modernization: Internal Revenue Service’s Fiscal Year 2006 Expenditure Plan, GAO-06-360. Washington, D.C.: February 21, 2006.
Business Systems Modernization: Internal Revenue Service’s Fiscal Year 2005 Expenditure Plan, GAO-05-774. Washington, D.C.: July 22, 2005.
IRS Modernization: Continued Progress Requires Addressing Resource Management Challenges, GAO-05-707T. Washington, D.C.: May 19, 2005.
Business Systems Modernization: IRS’s Fiscal Year 2004 Expenditure Plan, GAO-05-46. Washington, D.C.: November 17, 2004.
Business Systems Modernization: Internal Revenue Service Needs to Further Strengthen Program Management, GAO-04-438T. Washington, D.C.: February 12, 2004.
IRS Modernization: Continued Progress Necessary for Improving Service to Taxpayers and Ensuring Compliance, GAO-03-796T. Washington, D.C.: May 20, 2003.
Paid Tax Return Preparers: In a Limited Study, Chain Preparers Made Serious Errors, GAO-06-563T. Washington, D.C.: April 4, 2006.
Tax Administration: IRS Can Improve Its Productivity Measures by Using Alternative Methods, GAO-05-671. Washington, D.C.: July 7, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government, GAO-05-325SP. Washington, D.C.: February 2005.
High Risk Series: An Update, GAO-05-207. Washington, D.C.: January 21, 2005.
Internal Revenue Service: Challenges Remain in Combating Abusive Tax Schemes, GAO-04-50. Washington, D.C.: November 19, 2003.
Tax Administration: IRS Is Implementing the National Research Program as Planned, GAO-03-614. Washington, D.C.: June 16, 2003.
Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures, GAO-03-143. Washington, D.C.: November 22, 2002.
The Internal Revenue Service's (IRS) filing season performance affects tens of millions of taxpayers who expect timely refunds and accurate answers to their tax questions. IRS's budget request is a planning tool showing how it intends to provide taxpayer service and enforce the tax laws in 2007. It is also the first in a series of annual steps that will determine whether IRS meets its new long-term goals of increasing tax compliance and reducing taxpayers' acceptance of cheating on their taxes. Tax law enforcement remains on GAO's list of high-risk federal programs, in part, because of the persistence of a large tax gap. IRS recently estimated the gross tax gap, the difference between what taxpayers owe and what they voluntarily pay, to be $345 billion for 2001. GAO assessed (1) IRS's interim 2006 filing season performance; (2) the budget request; and (3) how the budget helps IRS achieve its long-term goals. GAO compared performance and the requested budget to previous years. IRS has improved its filing season performance so far in 2006, continuing a trend. More refunds were directly deposited, which is faster and more convenient. Electronic filing continued to grow, but at a slower rate than in previous years. IRS's two most commonly used services--telephone and Web site assistance--continued to improve. IRS estimates that the accuracy rate for its telephone answers is now at 90 percent or more. Taxpayers continued the recent pattern of using IRS's walk-in sites less and community-based volunteer sites more. While millions of taxpayers use chain paid tax preparers, taxpayers may not be receiving accurate and complete assistance, putting them at risk of owing back taxes, interest, and penalties. The 2007 budget request of $11 billion (a small decrease after adjusting for inflation) sets performance goals for service and enforcement that are all equal to or higher than the 2006 goals. The budget reduces funding by 15 percent for Business Systems Modernization, the ongoing effort to replace IRS's aging information systems. The reduction could impede progress in delivering improvements to taxpayers. The budget request identified over $121 million in savings; however, opportunities exist for further savings. For example, IRS officials told us IRS's 25 call centers have underutilized space. Those centers could be consolidated without affecting service to taxpayers. Achieving IRS's long-term compliance goals will be challenging because the tax gap has persisted for many years at about its current level. In addition, because the effect of taxpayer service and enforcement on compliance has never been quantified, IRS does not have a data-based plan demonstrating how it will achieve its goals. Nor does IRS have a plan for measuring compliance by 2009, the date for achieving the goals. Reducing the tax gap will likely require new and innovative solutions such as simplifying the tax code, increasing income subject to withholding, and increasing information reporting about income.
Employers can sponsor two broad types of retirement plans, referred to in the Employee Retirement Income Security Act of 1974 (ERISA) as pension plans: (1) DB plans, which promise to provide benefits generally based on an employee’s years of service and frequently are based on salary, regardless of the performance of the plans’ investments, and (2) DC plans, in which benefits are based on contributions and the performance of the investments in participants’ individual accounts. Since the inception of ERISA, employers have shifted away from sponsoring DB plans and moved toward sponsoring DC plans. The dominant type of DC plan is a 401(k) plan. In 2011, U.S. employers sponsored over 510,000 401(k) plans covering more than 61 million workers with more than $3.1 trillion in plan assets. Unlike DB plans, in which plan participants are eligible for a specific payment for life, 401(k) plan participants typically must make their own, sometimes complex, choices about their account balance both before and during retirement. For example, participants need to decide how much to contribute, how to invest, and how to spend down savings in retirement. In the United States, 401(k) plans are subject to provisions of ERISA, which are generally enforced by DOL’s Employee Benefits Security Administration (EBSA) and Treasury’s Internal Revenue Service (IRS). To carry out its responsibilities under ERISA, EBSA issues regulations and other guidance. The IRS, under Title II of ERISA, and subsequent amendments to the Internal Revenue Code, generally is responsible for ensuring that plans meet certain requirements for tax qualification, among other things. Tax advantages are intended to encourage employers to establish and maintain pension plans and encourage employees to participate in the plans. For example, employer contributions to qualified plans are, within limits, tax deductible and, in general, contributions and investment earnings are not taxed as income until the employee withdraws them from the plan. In recent years, EBSA and IRS have increased their focus on the decumulation phase of 401(k) plans, exploring ways to facilitate access to lifetime retirement income products, such as annuities. For example, in 2008, EBSA issued a new safe harbor regulation for selecting annuity providers as an optional way for sponsors and other plan fiduciaries of DC plans to satisfy their responsibilities under ERISA to act prudently and in the interest of the plan’s participants and their beneficiaries. Subsequently, in 2010, EBSA and IRS sought public comments to aid the agencies in thinking about how they might facilitate access to and the use of lifetime income streams after retirement. More recently, IRS proposed a new regulation in February 2012 to encourage retirement plans to offer a longevity annuity option, also referred to by some as deeply deferred annuities, in which payments initiate at an advanced age, such as age 80. This regulation has not been finalized. In addition, other entities play a role in overseeing and monitoring insurance companies that sell annuities—guaranteed income payments for life or a specified term—either through 401(k) plans or in the retail market to individual investors. For example, state insurance regulators are responsible for enforcing state insurance laws and regulations, including those covering the licensing of agents, reviewing insurance products (including annuities) and their rates, and examining insurers’ financial solvency and market conduct. 
In addition to state insurance regulators, the National Association of Insurance Commissioners (NAIC)—a voluntary association of the heads of insurance departments from the 50 states, the District of Columbia, and five U.S. territories—plays a role in insurance regulation. Although NAIC is not a regulator, it provides guidance and services designed to more efficiently coordinate interactions between insurers and state regulators. Additionally, although the Federal Insurance Office (FIO) is not directly involved in the supervision of private and public pension plans, it has a role in coordinating and developing insurance policies. FIO advises the Secretary of the Treasury on domestic and international insurance policy issues and consults with states regarding insurance matters of national and international importance. FIO also coordinates federal efforts and develops federal policy on prudential aspects of international insurance matters. For example, FIO has been working with international regulators to develop a methodology and indicators for identifying global systemically important insurers, that is, insurers whose failure might threaten global financial stability. In addition to the United States, the selected countries for this review—Australia, Canada, Chile, Singapore, Switzerland, and the United Kingdom—each have extensive or growing DC retirement systems. As shown in table 1, the key features of each country’s predominant DC plan vary. At retirement, participants in DC plans enter the distribution, or spend-down, phase, during which they use their savings to meet their retirement needs. DC participants are typically offered one or more of three main types of spend-down options at retirement, although the availability of these options and the rules related to their use vary by country:

Lump sum payments are a single distribution of some or all of a participant’s retirement savings. For example, in the United States, participants generally only have the option to withdraw all or part of their account balance, which can, at the participant’s discretion, then be used to invest in retail financial markets, among other things.

Programmed withdrawals are a series of fixed or variable payments from a participant’s account and may be administered either within a plan or in retail financial markets. Programmed withdrawals attempt to produce relatively stable annual income for the lifetime of the retiree. For example, a participant could set up their own programmed withdrawal strategy and decide to draw down a certain percentage of their assets each month or year to meet their retirement needs. Governments could also set minimum and maximum drawdown limits.

Annuities are guaranteed payments that are normally secured through a contract with an insurance company for either a set period or for the participant’s lifetime. Annuities come in a variety of forms. For example, deferred annuities allow participants to delay the start date of payments to a later point in retirement. Variable annuity payments are not guaranteed and vary based on the performance of underlying investments selected by the participant.

Factors that affect annuity payment calculations can vary by country. Annuity payment amounts may depend on a number of factors beyond simply the amount of retirement savings a participant can use to purchase the annuity, less fees charged by insurers.
These factors vary by country and may include the following:

Age: Participants can generally expect to receive higher payments the higher the age at which payments commence.

Gender: Men typically receive higher payments compared to women of the same age, since studies have shown that women tend to live longer than men. This may not apply, however, in countries and market sectors that require annuities to be offered on a gender-neutral basis—such as for an annuity offered through a plan in the United States.

Health: Participants with certain health conditions or lifestyle habits such as smoking can in some cases obtain higher payments.

Interest rates: Participants who purchase annuities during periods of low interest rates may lock in lower payments than if they had purchased during a period of higher interest rates.

Participants face trade-offs in choosing among spend-down options because no single option protects against all the various risks participants may encounter in retirement, such as longevity and inflation risk. To choose suitable options, participants approaching retirement must weigh the strengths and shortcomings of each against their own unique retirement needs. They also need to consider their other sources of retirement income, including government benefits and personal savings. Table 2 illustrates some of the trade-offs participants face in choosing among spend-down options. To effectively weigh the trade-offs between spend-down options, participants have to understand how interest rates, investment returns, and inflation, among other key factors, can affect their retirement income. Figure 1 below illustrates how investment returns can affect the retirement income a programmed withdrawal provides. In contrast, an annuity, once purchased, protects against investment risk over the course of retirement. Purchasing an annuity, however, may be irreversible, thereby potentially limiting the ability to leave an inheritance or cover unforeseen costs in retirement. And interest rates at the date of purchase can affect the size of payments from an annuity. In all cases, even modest inflation can erode the purchasing power of retirement income over the course of an extended retirement. In the last decade, some experts have emphasized an aspect of behavioral economics to help improve retirement outcomes for participants in 401(k) plans. In these plans, individuals face greater responsibility and risks in managing their retirement income needs both during the accumulation and spend-down phases, such as having insufficient savings to support a comfortable retirement and not having the right tools to make sound financial decisions about how to use what they have accumulated. Our prior work found literature arguing that information or financial education, including improving financial literacy, is not necessarily the best or only approach to ensuring good retirement outcomes. The authors of this literature have proposed alternative strategies, sometimes in combination with information and financial education, that aim to help participants reach financial goals by making use of insights from behavioral economics—which blends economics and psychology—that indicate people are often prone to inertia and procrastination, and have difficulty processing complex information. For example, participants in 401(k) plans do not always receive timely, clear information to help them understand their options, as well as the risks and trade-offs of managing income throughout retirement.
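A minimal sketch, in Python, of the annuity and programmed withdrawal trade-offs discussed above: the level annuity payment uses the standard amortization formula, and the programmed withdrawal is simulated under two return sequences. The balance, rates, and horizons are our own illustrative assumptions, not figures from any country's system:

```python
def annuity_payment(balance, annual_rate, years):
    """Level annual payment that exhausts `balance` over `years`
    at interest rate `annual_rate` (standard amortization formula)."""
    if annual_rate == 0:
        return balance / years
    return balance * annual_rate / (1 - (1 + annual_rate) ** -years)

def programmed_withdrawal(balance, withdrawal, returns):
    """Draw a fixed amount each year while the remaining balance
    earns that year's (possibly negative) investment return."""
    path = []
    for r in returns:
        balance = max(0.0, (balance - withdrawal) * (1 + r))
        path.append(balance)
    return path

balance = 300_000  # hypothetical account balance at retirement

# Interest rates at purchase lock in the annuity payment level.
print(f"Annuity at 3%: {annuity_payment(balance, 0.03, 25):,.0f} per year")
print(f"Annuity at 5%: {annuity_payment(balance, 0.05, 25):,.0f} per year")

# The same withdrawal lasts very different lengths of time
# depending on the sequence of investment returns.
steady = programmed_withdrawal(balance, 18_000, [0.06] * 25)
bad_start = programmed_withdrawal(balance, 18_000, [-0.10] * 3 + [0.06] * 22)
print(f"Balance after 25 steady years: {steady[-1]:,.0f}")
depleted = next(i for i, b in enumerate(bad_start) if b == 0) + 1
print(f"Years until depletion after a bad start: {depleted}")
```

With identical withdrawals, three negative years at the start of retirement exhaust this hypothetical account well before the 25-year horizon, while steady returns leave a positive balance; an annuity avoids that investment risk but, as noted above, locks in the interest rate prevailing at purchase and may be irreversible.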
Even when clear information is available, participants do not always make choices that are in their best interest. Insights from behavioral economics have led to strategies that use inertia to bolster or facilitate positive policy outcomes, such as default enrollment in 401(k) plans to increase retirement savings. Researchers and practitioners have proposed similar interventions for the spend-down phase, such as encouraging or “nudging” some participants toward annuitizing a portion of their retirement wealth. Although DC systems in the countries we reviewed often varied in their features, structure, and regulatory oversight, all six countries ensure that participants have institutionally facilitated access to a mix of spend-down options through their plans to help them manage retirement risks. Five of the six countries make the three main spend-down options—lump sum payments that offer participants flexibility to access and use their retirement savings, programmed withdrawals that allow participants to keep their account balances invested and draw on savings as needed, and annuities that offer guaranteed retirement income and protection against investment risk—explicitly available to DC participants. The countries we reviewed also allow participants to combine spend-down options to meet changing needs over the course of an extended retirement. For example, under certain conditions, participants in Singapore can take a partial lump sum after age 55 and purchase an annuity that guarantees them some amount of retirement income for life. Other sources of retirement income, such as those from national pension systems, can address some retirement risks, but all the countries we reviewed recognize the importance of their DC system in improving retirement outcomes and have taken steps to increase participant access to multiple spend-down options. For example, Switzerland has a national pension system that provides some level of guaranteed retirement income, similar to the annuity option available to participants through their plan. Even with this system in place, in 2005 Switzerland required plans to offer participants at least 25 percent of their account balance as a lump sum payment, an option that plans had previously offered only at their discretion. One financial advisor we spoke with told us this policy change benefits participants who decide they already have sufficient retirement income and would accept lower annuity payments through their plan in exchange for more access to their savings at retirement.

National pension system in Switzerland: Switzerland’s national pension system has two components: an earnings-related public pension funded mainly by payroll contributions and an income-tested supplementary benefit funded by general revenues. Payment amounts for the earnings-related benefit are based on lifetime earnings. Payments in retirement are indexed to prices and earnings. For information about the national pension systems in the other countries we reviewed, see appendix II.

Historically, five of the six countries tended toward one spend-down option, but all have more recently expanded access to additional options, in some cases substantially altering the retirement decisions of participants, as shown in figure 2. For example, Canada had traditionally “locked in” participants’ assets for the purpose of securing a stream of retirement income, which participants could previously only do by purchasing an annuity.
In the 1990s, however, Canada expanded the spend-down options that would meet this locking-in requirement to include programmed withdrawals. Government officials and a financial professional in Canada told us this was done in response to participants’ reluctance to lose access to their assets and their desire to leave an inheritance. Programmed withdrawals have since become the most widely used spend-down option in Canada. While each of the six countries we reviewed has multiple spend-down options to help DC plan participants tailor a retirement income strategy to fit their unique circumstances, in the United States, 401(k) plans generally only offer participants one option—a lump sum payment—leaving participants on their own to identify and develop strategies to manage retirement risks. Although some 401(k) plan sponsors may offer all three of these options to participants, most do not offer annuities or programmed withdrawals similar to the systematic or formalized options offered in other countries. As a result, U.S. plan participants typically take a lump sum payment, and then have to make difficult choices in order for their financial assets to last throughout retirement. We also found that three of the six countries conducted reviews of participants’ retirement needs and concerns in the spend-down phase, which resulted in governments taking actions that led to additional spend-down options to meet those specific needs. For example, in 2007 and 2008, Singapore conducted a review of its retirement phase and determined that a programmed withdrawal option designed to last approximately 20 years was not adequate to meet the needs of many participants, given their increasing longevity (see fig. 3 for the key steps taken). Singapore consulted with a broad spectrum of stakeholders, including working adults, community and social sector leaders, and insurance companies and industry associations, in designing a new spend-down option to better meet participants’ needs. Singapore concluded from this consultation that Singaporeans wanted an annuity scheme that provided an income for life and that would replace the programmed withdrawal option, where payouts stop completely after about 20 years. As a result, in 2009 Singapore began offering government-managed annuities, known as the Central Provident Fund Lifelong Income Scheme for the Elderly (CPF LIFE), on an opt-in basis. In 2013, Singapore set one of these CPF LIFE annuities as the default option for certain participants. Similarly, the United Kingdom undertook a review to address concerns with the spend-down phase, which resulted in policy changes that provided increased flexibility for participants to choose how to spend down their account assets. Specifically, the United Kingdom determined that its long-standing rules that effectively required individuals to purchase an annuity by the age of 75 were too restrictive for an increasing number of participants—who were both living longer and working longer—and for some were acting as a barrier to saving in DC plans. As part of the review, the United Kingdom consulted with a range of stakeholders, including individuals, service providers, and industry and consumer representatives, on how to ensure that participants make appropriate spend-down choices in the absence of the annuity requirement.
Based on this review, the United Kingdom removed its annuitization requirement in 2011, making this option voluntary, and expanded access to programmed withdrawals in order to give participants more flexibility to choose spend-down options that best suit them. As a result, participants can remain invested in their plans and manage the timing and amount of withdrawals from their plans, within certain limitations, or delay the purchase of an annuity until interest rates are more favorable. Government officials and experts we spoke to agreed that most participants at present have not accumulated sufficient retirement savings to be eligible for and make full use of programmed withdrawals as a spend-down option. According to government officials and one financial service provider, annuities remain the most widespread form of distribution from DC plans in the United Kingdom. Despite current low rates of saving, the structure is in place to give workers more flexibility in planning for retirement as the DC system in the United Kingdom matures and workers accumulate larger account balances. In the United States, as shown in figure 4, similar to the steps taken by some of the other countries, DOL and Treasury have recently begun to explore the possibility of expanding 401(k) participants’ access to spend-down options through their plans that offer lifetime income, but they have not yet made a concerted effort to help plan sponsors offer a mix of options to their participants. Most 401(k) plans typically only offer participants the option to take their benefits as lump sum payments, which leaves participants on their own, or to the retail market, to ensure their savings last throughout retirement. In recognition of the potential risk workers face under current conditions, DOL and Treasury undertook efforts in 2010 to collect detailed information from the public to determine what, if any, steps they could take to enhance the retirement security of 401(k) plan participants by facilitating access to, and use of, options designed to provide a stream of lifetime income after retirement. Through this process, DOL and Treasury received feedback from financial service providers and other stakeholders on the challenges plan sponsors face in offering a mix of spend-down options to participants through their respective plans. But the feedback also highlighted some existing flexibilities 401(k) plan sponsors have to offer multiple spend-down options. One large service provider described a range of spend-down options it offers participants, including programmed withdrawals and investment strategies that generate retirement income, such as mutual funds designed with built-in payment streams. Another service provider discussed how technology enables sponsors to offer participants access to competitive online annuity markets that can provide the benefit of competition and improved pricing at a reasonable cost. Since 2010, DOL and Treasury have begun taking steps to improve the information participants receive about the spend-down phase and expand access to spend-down options. For example, in May 2013, DOL proposed rules that would require 401(k) participants’ benefits statements to show how their account balances would translate into a stream of lifetime income payments. And in February 2012, Treasury proposed rules that would make it easier to purchase deeply deferred annuities.
However, in contrast to the steps taken by other countries, DOL and Treasury have not yet used the information they collected to develop and implement a strategy that would help 401(k) plan sponsors leverage existing flexibilities and address challenges in offering their participants a mix of spend-down options that meet varying retirement needs. Without additional steps by U.S. regulators to help ensure sponsors are providing participants with access to a mix of options, most 401(k) plan participants are likely to continue taking a lump sum payment at retirement, regardless of whether this option is the most appropriate to meet their particular needs. As a result, participants are at risk of either spending their assets too quickly and outliving their savings, or being too conservative with their assets and accepting a lower standard of living than necessary. Low rates of saving for retirement in the United States highlight the need to make the most of participants' 401(k) account balances. Simulations conducted by the Employee Benefit Research Institute in 2010 indicate that nearly half of workers in the United States above the age of 35 may be at risk of not having enough retirement income to meet their needs. Moreover, this problem may be more acute for those with lower incomes: when ranked by pre-retirement income, households in the lowest one-third may be at risk more than 70 percent of the time. Although all of the countries we reviewed are similarly coping with the challenge of small account balances, we found that offering a mix of options allows participants to select options that can best meet their unique circumstances. Most of the countries we reviewed used various mechanisms, such as communicating simple and consistent information in a timely manner, to help participants make more informed spend-down decisions. For example, in Australia, the government developed a booklet that lays out the various spend-down options for participants as they approach retirement, with simple illustrative examples and the benefits and drawbacks of each, as shown in figure 5. As a result, participants have access to information to plan for retirement by weighing the pros and cons of different ways to manage and make the most of their retirement savings. Additionally, in the United Kingdom, plans are required to send a "wake-up" information pack to participants 4 to 6 months before their retirement date that explains the key spend-down options, which has helped participants better understand their choices. The pack is also used to help plans meet the requirement to inform participants of their right to buy an annuity from a provider other than the one that holds their pension savings. In consultation with the government, the Association of British Insurers (ABI) developed a code of conduct, which requires its members—insurance companies—to communicate certain information to participants. According to ABI's code of conduct, as shown in figure 6, the information in the pack should be in plain, clear language and include a cover letter detailing the participant's options and additional literature, such as a brochure on retirement facts produced by the U.K. government. The pack points participants to sources of advice and support, such as regulated advisers and independent advice organizations. In addition, the U.K. financial regulator requires that plans send participants a follow-up pack 6 to 10 weeks prior to retirement, which emphasizes, among other things, the need to make a decision.
ABI expects its members, who are plan service providers, to send participants communications that are in line with these practices. U.K. pension regulators told us that they also expect trust-based plan communications to meet the standards developed by ABI, so participants in plans administered by trusts and by insurance companies receive comparable information. According to a 2009 ABI study, the wake-up pack increased the proportion of participants feeling quite or very comfortable about understanding their retirement options from 70 to 80 percent. The study also found that participants generally thought the information in the pack was relevant to their situations. In addition, a 2013 ABI study found that 85 percent of people who read pre-retirement information agreed that information from their provider made them more aware of their options at retirement. Countries we reviewed did not rely solely on the disclosure of information but also found ways to show how the information provided to participants applied to their particular circumstances. In particular, two of the six countries—Chile and the United Kingdom—help participants see how their savings would translate into a stream of retirement income by requiring plans to include projections of monthly or yearly retirement income in benefit statements, a comparison that can be useful for making retirement decisions. Since 2005, Chile has required pension plans to estimate retirement income in annual statements using multiple assumptions about the participant's expected retirement date, as shown in figure 7. For older participants about 10 years from retirement, one estimate assumes the participant stops working at the normal retirement age, while a second estimate assumes that the participant postpones retirement and continues to make contributions for 3 more years. According to Chilean pension officials, the aim of these scenarios is to help participants compare their current income to expected future retirement income and to encourage them to make additional voluntary contributions if their retirement income looks low. In fact, a 2011 study of the impact of these personalized pension projections suggests that receiving this information led Chilean participants to make additional contributions to improve their retirement prospects. Finally, enabling participants to compare across various spend-down options, and across products within an option, also helps them to plan for retirement. For example, we found that two of the six countries—Chile and the United Kingdom—provide electronic quotations to help near-retirees make informed decisions. The Chilean government introduced its central electronic quotation system (SCOMP) in 2004, which lowers participants' search costs and gives them easy access to transparent, reliable information by showing different spend-down options in a comparable format. SCOMP offers individuals who are ready to retire comparable quotations for the programmed withdrawal and annuity options in a single document, replacing the traditional practice of shopping for and buying retirement products through brokers. SCOMP has enabled participants to more easily receive a larger amount of quality information by making all possible offers available to them without the need to hire financial intermediaries such as sales agents and brokers.
A similar benefit is available in the United Kingdom through the Money Advice Service set up by the government, which allows participants to compare a range of illustrative annuity quotes on a voluntary basis. This service offers free and unbiased advice on financial matters. Specifically, the Money Advice Service's website provides comparable annuity quotes from participating insurance companies once an individual answers a set of standard questions. The individual is then presented with a table listing insurance companies and his or her annuity quotes in monthly, quarterly, and yearly amounts. The U.K. developed the service to help those who do not know how to obtain, or cannot afford, advice about annuities from other sources, particularly those with relatively small account balances. U.K. regulators told us that although participants may benefit from seeing quotes, most participants' plan balances are too small for the service to be useful to them. However, this service may become more important as more workers are automatically enrolled in plans and assets grow. In the United States, plans must provide participants certain disclosures related to their benefits and options under the plan. For example, benefit statements showing the individual's account balance must be provided to 401(k) plan participants at least quarterly. However, the disclosures do not require that participants receive information about their spend-down options before they make a decision. As a result, 401(k) plan participants may lack key information needed to make an informed decision, such as the benefits and drawbacks of various spend-down options, which could affect their retirement outcomes. In addition, although plan sponsors are permitted to provide information and education to participants, there is no specific guidance for sponsors to provide participants approaching retirement with information on spend-down options. Consequently, some plan sponsors may not provide participants with sufficient information, which may result in participants making decisions that fail to sustain their incomes throughout retirement. DOL has begun to consider including lifetime income projections in benefit statements, which may yield positive outcomes for participants. Currently, 401(k) plan benefit statements generally show only a participant's account balance, and according to DOL officials, participants may have difficulty predicting how long their savings will last. An April 2013 study funded by DOL found that more than half of a sample of plan statements did not include projections of the participant's balance at retirement, estimates of the retirement income they might be able to expect at retirement, or the "relationship between future outcomes and current savings or investment behavior." In May 2013, DOL announced that it was considering developing proposed regulations regarding DC plan benefit statements and solicited public comments to inform its thinking as it considers possible regulatory proposals. One idea is to show a participant's current account balance and an estimated lifetime stream of payments based on that balance, assuming the participant is at normal retirement age as of the date of the statement, even if he or she is younger. The second idea, aimed at participants who have not yet reached normal retirement age, is to show a projected account balance at normal retirement age and then convert that balance into an estimated lifetime stream of payments.
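To illustrate how such projections work, the following sketch converts an account balance into an estimated monthly income stream along the lines of the two ideas described above. It is a simplified illustration only: the discount rate, investment return, payout period, and dollar amounts are hypothetical placeholders rather than assumptions DOL has proposed, and a true lifetime illustration would use mortality tables instead of a fixed payout period.

    # Simplified sketch of a lifetime income illustration. All rates,
    # periods, and dollar amounts are hypothetical, not DOL assumptions.
    def annuity_factor(monthly_rate, months):
        # Present value of 1 per month over a fixed number of months.
        return (1 - (1 + monthly_rate) ** -months) / monthly_rate

    def estimated_monthly_income(balance, annual_rate=0.04, payout_years=20):
        # Convert a balance into a level monthly payment over the payout period.
        return balance / annuity_factor(annual_rate / 12, payout_years * 12)

    def projected_balance(balance, annual_contribution, years, annual_return=0.05):
        # Grow the current balance, with contributions, to normal retirement age.
        for _ in range(years):
            balance = balance * (1 + annual_return) + annual_contribution
        return balance

    # Idea 1: show the current balance as income, as if the participant
    # were already at normal retirement age.
    print(f"${estimated_monthly_income(100_000):,.0f} per month")

    # Idea 2: project the balance to normal retirement age, then convert it.
    future = projected_balance(100_000, 5_000, years=25)
    print(f"${estimated_monthly_income(future):,.0f} per month")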
As Chile's experience shows, including lifetime income projections on plan statements, similar to those being considered by DOL, can help participants envision and understand the lifetime monthly income that can be generated from an account balance. Recognizing that disclosure of information alone is not always sufficient to ensure good retirement outcomes, several countries also use subtle incentives, or nudges, to promote particular kinds of behavior. In particular, two countries we reviewed—Singapore and Switzerland—use defaults to exploit inertia and promote a specific option. In Switzerland, for example, some plans use annuities as the default spend-down option while allowing participants to opt out. One study suggests the take-up rate of annuities is higher in such plans relative to those where the lump sum option is the default. In contrast, the United Kingdom is attempting to address participants' inertia in shopping for annuities. Until recently, most participants purchased an annuity from their existing service provider, partly because the wake-up pack they received shortly before retiring included an application for an annuity with that provider, although it might not have offered the best rate. Working together, the U.K. government and ABI agreed that, to promote the search for better annuity rates, providers should not include an annuity application in the pack. In addition, tax incentives are also used to encourage or deter the use of certain spend-down options. For example, in 2007, the Australian government removed the favorable tax treatment enjoyed by annuities, which, according to researchers, caused the relatively small market for life annuities to virtually disappear and negatively affected the market for term annuities as well. In addition, researchers found that a series of reforms between 2005 and 2007 contributed to the rapid increase in the demand for programmed withdrawals. For example, the reforms exempted from taxes all earnings or investment returns on assets kept in superannuation funds to support programmed withdrawals, provided that minimum age-based annual drawdowns are satisfied. A report by the Australian pension regulators showed an increase in the proportion of retirement income taken as programmed withdrawals from one-third in 2005 to one-half by 2012. Over the same period, the proportion of superannuation funds offering programmed withdrawals increased from 35 percent to 80 percent. Three of the countries we reviewed implemented policies to strengthen safeguards regarding financial advisors, so that DC plan participants seeking professional advice have the opportunity to receive fair and objective information that enables them to better manage their retirement income. These countries recognized that spend-down decisions can be complex for participants, who may choose to rely on the assistance of a financial advisor to help make appropriate choices. As a result, countries such as the United Kingdom have taken steps to protect DC participants against inappropriate fees and unscrupulous advice in making spend-down decisions. For example, since January 1, 2013, financial advisors in the United Kingdom have been prohibited from receiving commissions on products they sell, a change intended to reduce the sale of costly and unfit products and minimize conflicts of interest. Instead, they now have to explain to participants how much advice will cost and reach agreement with the participant on a price and method of payment, such as an upfront fee or installments.
Since 2008, Chile has required that those who previously worked as annuity brokers broaden their knowledge to all aspects of pensions before being registered as advisors. This knowledge enables them to provide participants and their beneficiaries with the comprehensive information needed to make informed decisions about their benefits based on their needs and interests. Most of the countries we reviewed imposed income requirements or withdrawal limits on lump sum payments to help those who take this option mitigate retirement risks. For example, in three of the six countries, participants must meet retirement income requirements if they wish to withdraw all or part of their DC plan assets as a lump sum. Specifically, in order for participants in Chile to be eligible to access part of their account, they would need to accumulate enough income to finance 100 percent of the maximum pension with a government supplement and a 70 percent or higher replacement rate based on the last 10 years of their salaries. According to the Chilean pensions regulator, participants who meet these conditions can withdraw any amount in excess of the amount needed to meet the minimum. Under these rules, relatively few participants have taken partial lump sum payments and, of those who did, the amounts were generally quite small. In contrast, Australia does not place an upper limit on the amount participants can withdraw as a lump sum. According to researchers and government officials, participants in Australia who choose this option may run the risk of outliving their savings and becoming solely reliant on government benefits for retirement income. However, some researchers and government officials we spoke with said retirees in Australia may actually be overly cautious in spending down their assets and face the opposite risk of accepting a lower standard of living than is necessary to meet their retirement needs. In the United States, lump sum distributions from 401(k) plans are subject to few restrictions. During our 2011 review of the choices retirees make in managing their assets, we found that some U.S. plan participants prefer to have access to their full 401(k) plan retirement savings, such as by rolling their assets over into an individual retirement account (IRA) or withdrawing them when they retire. At retirement, 401(k) plan participants may withdraw their full account balance as a lump sum and may be required to pay federal and state income taxes on that amount. Of the five countries that offer programmed withdrawals, four regulate them to help prevent participants from spending their savings too quickly. For example, Chile's 2008 pension reforms introduced an additional actuarial factor in the government calculation of the maximum withdrawal amount to account for longevity risks. The maximum withdrawal amount is calculated so that payments do not drop below a certain floor (30 percent of the first payment) by the time a participant reaches the age of 98. As a result of these reforms, a retiree's benefits can vary year to year based on life expectancy estimates and projected returns on investments, but will not fall below a certain amount. Canada also limits the annual amount a participant can withdraw from programmed withdrawal products. Regulators in some Canadian jurisdictions impose an annual maximum withdrawal, which helps to ensure participants have a regular income up to age 90.
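To illustrate how an annual withdrawal cap of this kind operates, the sketch below simulates a programmed withdrawal in which each year's maximum is the remaining balance spread over the years left until age 90. The cap rule, investment return, and dollar amounts are simplified stand-ins for illustration, not the actual formulas used by Canadian or Chilean regulators.

    # Simplified sketch of a capped programmed withdrawal. The cap rule
    # (remaining balance spread over the years left until age 90) is an
    # illustrative stand-in for jurisdiction-specific regulatory formulas.
    def capped_withdrawals(balance, start_age, desired, annual_return=0.04,
                           horizon_age=90):
        history = []
        for age in range(start_age, horizon_age):
            cap = balance / (horizon_age - age)   # this year's maximum withdrawal
            paid = min(desired, cap, balance)     # the cap binds if desired is too high
            balance = (balance - paid) * (1 + annual_return)
            history.append((age, paid, balance))
        return history

    # A 65-year-old seeking $30,000 a year from a $300,000 balance is held
    # to the cap at first, which stretches the income out to age 90.
    for age, paid, left in capped_withdrawals(300_000, 65, 30_000)[:3]:
        print(f"age {age}: ${paid:,.0f} withdrawn, ${left:,.0f} remaining")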
Most 401(k) plan participants in the United States do not have access through their plans to programmed withdrawal products similar to the ones we found in other countries. The primary programmed withdrawal that U.S. plan participants may be aware of is the required minimum distribution from tax-deferred plans, such as 401(k) plans, once a participant reaches age 70½. If a plan offers systematic withdrawals and a participant designs and manages a programmed withdrawal, the extent to which that strategy succeeds at providing income throughout retirement depends on a number of factors, such as the rate of drawdown, future investment returns, inflation, and longevity. For an illustration of how some of these factors may affect retirement income, see the additional materials at http://www.gao.gov/products/GAO-14-9. DOL is in the process of reviewing regulatory barriers that may prevent or discourage 401(k) plan sponsors from offering annuities to participants. In the United States, 401(k) plan sponsors who offer an annuity to their participants through their plan are subject to fiduciary standards in their selection of an annuity provider, which require them to act solely in the interest of the plan's participants and their beneficiaries. In 2008, DOL issued final regulations, in the form of a safe harbor, to help plans prudently select annuity providers and, to some extent, to encourage plans to offer annuities. In proposing the safe harbor, DOL suggested that an annuity option would give more participants the opportunity to annuitize their savings, while not impeding them from choosing other options. Even with this safe harbor, according to some U.S. retirement experts and plan service providers, sponsors of 401(k) plans may be hesitant to offer an annuity to their participants because of the additional burden and the potential liability sponsors may incur. In 2010, some respondents to DOL's request for information on lifetime income options, which included some questions related to the existing safe harbor provision, suggested that DOL revise one condition of the safe harbor that requires sponsors to assess the ability of an insurance company to make all future payments under an annuity contract. Respondents in favor of DOL revising the condition noted that doing so could help ease some DC plan sponsors' concerns about offering an annuity as a spend-down option. Additionally, this could help participants who would otherwise purchase an annuity in the retail market gain access to group prices, which are typically lower than individual prices. In December 2012, the ERISA Advisory Council reported on the importance of DOL developing guidance that would provide meaningful assistance to plan sponsors and participants by modifying the existing safe harbor provision that governs the selection of an annuity provider. In its report, the ERISA Advisory Council also encouraged DOL to partner with the National Association of Insurance Commissioners (NAIC) to help with its effort to develop a more workable safe harbor for the selection of annuities and other lifetime income products. In 2012, DOL began working with the NAIC ERISA Retirement Income Working Group as it considers possible options for easing plan sponsor concerns with the financial soundness of annuity providers as related to the DOL safe harbor for the selection of an annuity provider and fiduciary responsibilities.
In the countries we reviewed, employers are not expected to assess an annuity provider's ability to make future payments before contracting with them to offer an annuity through their plans. According to regulators in each of the countries we reviewed, there are no requirements for employers to assess the financial stability of an insurer, and countries generally rely on insurance regulations and industry standards to oversee and monitor insurance companies, including those that sell annuities. For example, in Canada, regulators told us that insurers must meet solvency requirements and financial market conduct standards, and all insurers are members of an association that promotes consistent insurance practices and standards by issuing guidelines to its members. Moreover, in four of the six countries, regulators and industry practitioners told us that the main responsibility of employers is to contribute to the plan during accumulation, and their obligations typically cease at retirement. In the United Kingdom, where plans are required to inform participants of their right to shop around for an annuity, there are no requirements to assess the potential annuity provider the participant may select through the plan. According to U.K. government officials, plan trustees and participants do not have the expertise to assess the solvency of an annuity provider and are not expected to do so. Furthermore, they noted that this responsibility lies with the bank and insurance company regulators, so plans offering an annuity and participants purchasing one are assured of the provider's solvency. Additionally, because plans do not need to take steps to assess annuity providers, there are no additional administrative costs associated with offering an annuity through the plan. As some of the countries we reviewed illustrate, the absence of a requirement for plan sponsors to assess annuity providers allows plans to more easily provide participants a way to annuitize their retirement savings, if they so choose. The DC systems in the countries we reviewed differ from that of the United States in a number of ways, and not all the lessons they have to offer would apply here, but their experiences illustrate additional steps DOL and Treasury could consider to help 401(k) plan sponsors make the most of existing flexibilities to offer multiple spend-down options through their plans and broaden participants' retirement options. By 2015, participants whose retirement savings are primarily made up of what they accumulated in their 401(k) plans will either be eligible to retire or be approaching retirement, and many will face the challenge of ensuring they do not outlive their retirement savings. Yet most of these participants' 401(k) plans will offer lump sum payments as the primary spend-down option, leaving participants to their own devices to figure out how to make their savings meet their needs and last throughout retirement. Some participants may prefer having complete access to their retirement savings upon retirement. Others, however, could benefit from knowing that there are options offered through the plan—other than a lump sum payment—that might be optimal for them. The experiences of other countries have shown that participants benefit from being able to select among multiple, competitively priced, and easily accessible options to address their diverse and changing retirement needs.
Although DOL and Treasury have taken some initial steps to explore how the United States might expand the options available to 401(k) participants at retirement, they could do more to ensure that plan sponsors offer flexible options to participants, enabling them to choose the spend-down option that best fits their particular circumstances. Our work also shows that information provided in an easy-to-understand format at appropriate times before a participant retires can help participants think about and plan for the amount they will need on a monthly basis during retirement. Receiving information on projected income with their account statements helps workers develop and focus on retirement income targets, which in turn can lead to positive changes in participant behavior, such as working longer to improve their prospects for a successful retirement. Comparing the available spend-down options, and how each addresses longevity risk, can also help participants understand the advantages and disadvantages of each option available to them. In addition, as illustrated in our interactive retirement model, learning to understand the effects of interest rates, inflation, and returns on each type of spend-down option can help participants begin to interpret financial information that can affect their retirement income. Given that DOL is considering various approaches to improve 401(k) plan participants' understanding of retirement income, it is clear that including information about spend-down options and estimated income can further help participants make more sound decisions about their retirement income. Finally, unlike within 401(k) plans in the United States, in the countries we reviewed annuities were generally an option available to participants through their plans. Because few 401(k) plans in the United States choose to offer annuities to participants, participants may be missing a beneficial spend-down option that can provide lifetime income at discounted group prices. As DOL continues to look at the barriers to 401(k) plans offering annuities, the lack of requirements in other countries for plan sponsors to assess an annuity provider's ability to make future payments—relying instead on the adequacy of insurance regulation—may be something to explore. Our work shows that plans can offer annuities in an easy, cost-effective manner, enabling participants who prefer this spend-down option to consider it rather than dismiss it as too expensive to obtain. As DOL and Treasury continue their efforts to determine the actions needed to enhance the retirement security of 401(k) plan participants, we recommend that they consider the approaches taken by other countries to formalize access to multiple spend-down options that address varying retirement risks and needs for U.S. plan participants. To the extent possible, lessons from other countries should be used to help DOL and Treasury ensure plan sponsors have information about their flexibilities and the ability to facilitate access to a mix of appropriate options for 401(k) plan participants. In addition, as DOL considers changes to participant benefit statements and other disclosures, we recommend that the Secretary of Labor consider strategies other countries have employed to help participants make sound decisions, such as providing timely information at or before retirement about available spend-down options and projections of future retirement income.
We also recommend that, as DOL continues to review regulatory barriers to lifetime income options for 401(k) plan participants, it consider other countries' approaches to plans offering annuities, such as their reliance on existing solvency requirements and insurance industry standards to provide assurances, rather than placing responsibility on plan sponsors to assess an annuity provider's financial stability. As DOL considers the approaches of other countries and continues to work with NAIC, which facilitates interactions between insurance companies and state insurance regulators, DOL may wish to consult with the Federal Insurance Office (FIO), which coordinates federal efforts on prudential aspects of international insurance matters. We provided a draft of this report to DOL and Treasury for comment. Both agencies provided technical comments, which were incorporated as appropriate. DOL also provided written comments, which are reproduced in appendix III. In its written response, DOL generally agreed with our recommendations and stated that it will take steps to address them in its ongoing efforts. Moreover, DOL will be focusing on two areas for further consideration that we highlighted in our report: participant education and regulatory assistance to plan sponsors considering an annuity option in defined contribution plans. Regarding our recommendation that DOL and Treasury consider the approaches of other countries to formalize access to multiple spend-down options for 401(k) plan participants, DOL commented that expanding access to additional options involves plan design features, which historically have not been subject to DOL's regulatory authority under Title I of ERISA. DOL noted that it will evaluate whether there are available regulatory approaches to address this recommendation. As we noted in the report, having access to a mix of options allows participants to select those that best meet their unique circumstances. Therefore, we support DOL's efforts to look into available regulatory approaches to address our recommendation and would encourage the agency to fully consider the steps it could take within the scope of its current authority. With respect to our recommendation to consider the strategies of other countries to help participants make sound retirement decisions, DOL agreed that participants should have timely information at or before retirement about available spend-down options and projections of future retirement income, and stated that it will explore its options for action. As we illustrated in our report, having timely information and projections of future income can help plan participants make more informed retirement saving decisions. We would again encourage DOL to fully consider what steps it could take within the scope of its current authority. Finally, with respect to our third recommendation that DOL consider other countries' approaches as it reviews regulatory barriers to lifetime income options for 401(k) plan participants, DOL stated that it will evaluate the available regulatory approaches to address our recommendation. We also suggested that DOL consult with NAIC and FIO in its efforts. DOL commented that it began working with a NAIC working group to consider possible options for easing plan sponsor concerns with the financial soundness of annuity providers in connection with DOL's safe harbor for the selection of annuity providers and fiduciary responsibilities.
DOL noted that the NAIC working group plans to provide DOL's Employee Benefits Security Administration (EBSA) with a list of "best practices" for fiduciaries when asking states for information about insurers, and that it will continue to work with NAIC, Treasury, FIO, and others as it evaluates regulatory approaches to address our recommendation. In response to DOL's comment, we revised our draft to reflect the status of DOL's efforts to collaborate with NAIC on the safe harbor for the selection of annuity providers. We commend DOL for its initiative to develop a more workable safe harbor, but continue to encourage DOL to review alternative approaches taken by other countries, such as their reliance on existing solvency requirements and insurance standards. As shown in our report, these approaches can ease the burden on plan sponsors. As a result of DOL's review and potential action, 401(k) plans could more easily offer annuities to plan participants, giving participants greater flexibility to choose spend-down options that meet their needs and preferences. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives for this review were to examine selected countries' (1) approaches to offering retirement spend-down options; (2) key strategies to help participants make sound decisions; and (3) approaches to regulating and overseeing options. To address these objectives, we conducted in-depth reviews of retirement spend-down phase strategies in countries with extensive or growing defined contribution (DC) systems to determine lessons for the United States based on best practices used in other countries. We selected the following countries for this review: Australia, Canada, Chile, Singapore, Switzerland, and the United Kingdom (U.K.). In selecting the six countries, we considered whether the country had (1) developed innovative spend-down phase policies or options; (2) a clearly defined and well-established oversight structure for the DC plan spend-down phase and providers, such as insurance companies; and (3) an approach to the spend-down phase that was uniquely different from those of the other countries in our scope. To determine which countries to include, we conducted an initial review of the universe of countries with well-developed account-based retirement systems. We reviewed comparative studies of DC systems published by GAO, academics, the Organisation for Economic Co-operation and Development (OECD), The World Bank, and other industry experts, such as international benefits consulting firms, to determine which countries have extensive or growing DC systems and whether those countries had developed innovative spend-down policies or options. We solicited recommendations on countries with innovative spend-down phase approaches or options from representatives of the OECD, The World Bank, U.S.
government officials, academics, and industry practitioners who work with multinational companies operating in other countries with DC plans. We then examined the characteristics of each country's DC system for policies, practices, and requirements related to retirement spend-down options. Based on our initial review, we excluded countries in which the DC system is structured in a substantially different manner from that of U.S. 401(k) plans or is not well developed relative to other pension options, or in which the spend-down phase was not well defined. We also determined that the six countries selected could potentially provide lessons for the United States given their experience and unique approaches to the spend-down phase. For each of the six countries we selected, we reviewed publicly available research and reports about the country's pension system—particularly information on spend-down options, the oversight framework, and initiatives to educate participants about options. We also interviewed pension experts, service providers, regulatory agencies, and other government officials from the countries selected. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the countries selected for this study. Instead, we relied on appropriate secondary sources, interviews with relevant officials, and other sources to support our work. We submitted key report excerpts to agency officials in each country for their review and verification, and we incorporated their technical corrections as necessary. In addition to addressing these objectives, we also developed a retirement model to help provide contextual information on spend-down options and certain factors that may affect retirement income from these options. The retirement model allows users to view retirement income, under a range of circumstances, provided by three main spend-down options: (1) a lump sum payment of the account balance; (2) a programmed withdrawal with payments that are either fixed or set as a proportion of the remaining account balance; and (3) an immediate annuity that makes level payments with no joint or survivor benefit. In consultation with GAO's Chief Actuary and an external actuary with expertise in annuity pricing, we developed a formula that was calibrated to approximate annuity payments similar to those found in U.S. retail annuity markets in July 2013. We simulated retail annuity prices because participants in most 401(k) plans do not have access to institutionally priced annuities within plans; to purchase an annuity, these participants would instead have to roll over a lump sum payment of their account balance into an IRA and purchase an annuity as an individual investor in U.S. retail markets. To set the ranges within which users can adjust assumptions about interest rates, investment returns, and inflation, we analyzed historical data on economic indicators from the Federal Reserve Economic Database, including 10-year Treasury constant maturity rates, and investment returns from Thrift Savings Plan funds that track broad stock, fixed income, and government securities indices. To ensure the annuity rates generated from our model reflected U.S. retail market rates as of July 31, 2013, we calibrated rates using quotes from ImmediateAnnuities.com.
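Although the calibrated formula itself is not published here, the basic actuarial approach behind such an approximation can be sketched simply: the monthly payment is the account balance divided by the expected discounted value of a lifetime of monthly payments. In the sketch below, the constant mortality hazard, discount rate, and maximum age are deliberately crude placeholders for the actuarial assumptions and July 2013 retail-market calibration described above.

    # Minimal sketch of a level-payment immediate annuity approximation.
    # The flat mortality hazard and rates below are illustrative placeholders,
    # not the calibrated assumptions used in the report's retirement model.
    def monthly_annuity_payment(balance, age, annual_rate=0.035,
                                annual_hazard=0.02, max_age=110):
        i = annual_rate / 12         # monthly discount rate
        q = annual_hazard / 12       # monthly probability of death (flat hazard)
        survival, factor = 1.0, 0.0
        for month in range(1, (max_age - age) * 12 + 1):
            survival *= 1 - q        # probability of being alive at this payment date
            factor += survival / (1 + i) ** month
        return balance / factor      # payment whose expected present value equals the balance

    print(f"${monthly_annuity_payment(100_000, age=65):,.2f} per month")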
We did not assess the reliability of data used to set ranges or calibrate rates, mainly because the information generated from the interactive tool provides illustrative context for the report and is not material to findings, conclusions, or recommendations. In addition, retirement income depicted in the interactive tool does not reflect any federal income taxes on distributions from tax-deferred 401(k) accounts. Dollar amounts in the interactive tool are for illustrative purposes only and should not be considered as quotes for spend-down products or taken as financial advice. We conducted this performance audit from June 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Australia

At a glance
Total system assets: AUD 1.4 trillion ($1.3 trillion) as of 2012; workers with DC accounts: not available during our review.

DC spend-down options
Lump sums: Participants may take some or all of their account balance as a lump sum payment.
Programmed withdrawals: Participants have the flexibility to structure the timing and amount of withdrawals, subject only to annual required minimum withdrawals. These options are known as account-based income streams or account-based pensions.
Annuities: Only a small number of annuities are purchased annually, and most are term annuities that provide payments over a fixed period rather than lifetime annuities that guarantee payments for the participant's life.

Providers
Spend-down options may be provided by employers or groups of employers, banks, insurance companies, or financial service providers.

Regulators
The Australian Prudential Regulation Authority (APRA) is responsible for prudential regulation of all superannuation funds except self-managed superannuation funds and some public sector schemes. Reforms implemented in July 2013 will enhance APRA's ability to set and enforce prudential standards; according to officials, APRA's authority prior to this had largely been limited to issuing, setting conditions on, and revoking licenses to operate superannuation funds. APRA is also responsible for the prudential regulation of other financial services entities, such as banks and insurance companies.
The Australian Securities and Investments Commission licenses and monitors financial services businesses, including superannuation trustees who hold a license with the commission, to ensure they operate efficiently, honestly, and fairly. It also operates a website, www.moneysmart.gov.au, that provides consumers with information to help them make smart choices about their personal finances, including spend-down decisions.

Types of plans
Industry funds are generally non-profit entities established by a single employer or group of employers. Industry funds generally offer 5 to 15 investment options designed to meet most participants' needs, and in recent years many have become available to the public rather than being restricted to the employees of particular employers.
Regulators (cont.)
The Australian Taxation Office is responsible for regulating self-managed superannuation funds, which it does by providing information to trustees on how to set up and manage their fund, checking that the fund complies with standards and laws, and taking enforcement actions to correct breaches of the law.

Australia (cont.)

National pension system
Australia's Age Pension is a pay-as-you-go public pension funded out of general revenues. The Age Pension is designed to provide biweekly retirement income equal to about 28 percent of average wages, as well as other benefits such as assistance for health care and transportation, to residents of Australia age 65 and older. The Age Pension is means tested for both income and assets, so accumulations in superannuation will reduce biweekly payments from the Age Pension. According to APRA officials, the Age Pension is essentially a "safety net" provided for those unable to provide for themselves.

Types of plans (cont.)
Retail funds are for-profit entities usually run by banks or investment companies. They often have a large number of mid- to higher-priced investment options and are open to anyone.

Canada

At a glance
Although most private voluntary occupational pension plans have been DB plans, almost all new plans are DC plans; a typical DC plan requires employee contributions of 5 percent of earnings and a matching employer contribution.

Types of plans
Voluntary private plans are known as registered private pension plans and can be either DC or DB plans. In general, money payable to a member of a plan can only be used to provide retirement income, even if a member leaves the plan. Once benefits are vested, they are usually "locked in," which helps to ensure that a member will have regular income at retirement, and creditors cannot seize locked-in pension benefits.

DC spend-down options
Annuities: Annuities are purchased in the retail market. The most common form is a single-life pension with a minimum guarantee of 5 or 10 years of payments. A plan administrator must purchase an annuity for a DC plan participant if the member does not elect to exercise one of their options. According to regulators, the take-up rate for annuities is relatively low.

Providers
Programmed withdrawal products are generally offered by financial institutions, such as banks and trustee companies. Private insurance companies offer annuities.

Regulators
Plans are generally regulated at the provincial level, so policies vary by province, as well as at the federal level. At the federal level, the Office of the Superintendent of Financial Institutions regulates and supervises private pension plans in federally regulated areas of employment, such as banking, telecommunications, and inter-provincial transportation. It is also the regulator for pension plans established with respect to employment in the Yukon, the Northwest Territories, and Nunavut.

Canada (cont.)

National pension system
Canada has a two-tier social security system that provides the majority of retirement income for the average worker. The system includes the Old Age Security program, which provides monthly payments to residents who are 65 years of age and over as well as a guaranteed income supplement—a family income-tested benefit—to low-income pensioners; and the Canada Pension Plan, which pays a monthly retirement pension to people who have worked and contributed to the Canada Pension Plan. The amount of the Canada Pension Plan benefit a person is entitled to depends on how much and for how long that person contributed to the plan.

Chile

At a glance
Chile instituted a DC pension system in 1981.
Workers who entered the labor market after that date are mandated to contribute to an individual DC account.

Types of plans
Workers are free to choose a licensed for-profit service provider, known as an AFP (Administradoras de Fondos de Pensiones), to manage their individual accounts. Currently, there are six AFPs. New entrants to the labor force are defaulted to the AFP with the lowest fee as determined by a bidding process, which takes place every 2 years. They are required to remain in this AFP for 2 years, after which they can freely choose a service provider.

DC spend-down options
Programmed withdrawals: The withdrawal calculation is based on age- and gender-specific life expectancy. Participants can combine options, such as temporary programmed withdrawals with a deferred annuity, or purchase an immediate annuity with a portion of their account balance and programmed withdrawals with the rest.
Annuities: Annuities are inflation-indexed. Annuities are not allowed if the pension they generate is smaller than the Basic Solidarity Pension, which as of December 2012 was 80,528 Chilean pesos (about $160) monthly.

Providers
AFPs administer programmed withdrawals. Insurance companies offer annuities to participants.

Regulators
The Superintendent of Pensions closely supervises and regulates AFPs, including issuing licenses for them to participate in the system. On an ongoing basis, the Superintendent of Pensions monitors plan providers for compliance with investment option guidelines, reserve requirements, fee structure, and other requirements. The Superintendent of Securities and Insurance supervises insurance companies, which includes setting rules and regulations about solvency principles and monitoring insurance company offers through the SCOMP system. Together with the Superintendent of Pensions, it maintains a list of approved pension advisors.

Chile (cont.)

National pension system
Chile has a public pension system with two components targeted at the poorest 60 percent of the population age 65 and over who meet residency requirements. There is a Basic Solidarity Pension for individuals who have not contributed to individual accounts and pass the means test. Participants who contributed to individual accounts and pass the means test receive a Pension Solidarity Complement if their monthly pension is below a threshold amount of 261,758 Chilean pesos (about $520) as of December 2012.

Singapore

At a glance
Singapore's mandatory DC system, the Central Provident Fund (CPF), was established in 1955 to provide financial security for employees in their retirement. The system is comprised of individualized accounts fully funded by both employers and employees. Contributions are distributed among three types of individual accounts.
Total system assets: 230,158 million Singaporean dollars (SGD) (about $181,227 million) as of 2012; residents with DC accounts: over 80 percent of the resident workforce.

Highlights of the spend-down phase

DC spend-down options
Lump sums: At age 55, participants may withdraw their CPF savings in excess of the minimum sum of 148,000 SGD (about $116,535) in their retirement account and a minimum of 40,500 SGD (about $31,900) in their Medisave account. Participants who do not have savings in excess of the minimums are allowed to withdraw 5,000 SGD (approximately $3,900).
Programmed withdrawals: Prior to 2013, the primary spend-down option was a programmed withdrawal administered by the CPF Board, designed to last for approximately 20 years from the participant's "drawdown age," which is currently 65.
Annuities: In 2009, Singapore introduced a new government-managed life annuity system, known as the CPF Lifelong Income Scheme for the Elderly (CPF LIFE), to address the challenges of increasing life expectancy and an aging population. From 2013, participants with retirement account balances of at least 40,000 SGD (about $31,500) are automatically placed on CPF LIFE. The government offers two types of annuities, a basic and a standard plan. The standard plan is the default plan for participants; it provides higher monthly payments than the basic plan, but leaves a smaller bequest for the participant's family and other beneficiaries.

Providers
The CPF Board administers the CPF Fund and CPF LIFE. Private insurance companies offer a variety of annuity products in the retail market.

Regulators
The CPF Board was established by law to be the trustee and administrator of the CPF. The CPF Board is also responsible for investing participants' savings in special securities issued by the government. The CPF Board works closely with the Ministry of Manpower and other relevant ministries on policies that affect the CPF, even though the power to initiate changes to provisions on CPF schemes rests with the Minister of Manpower.

Singapore (cont.)

National pension system
The CPF is the main source of retirement income for workers. Singapore does not have a multi-pillar retirement system similar to that of the United States. There is no state pay-as-you-go social security. However, there is other government social spending, which indirectly helps retirees with costs such as health care and housing.

Types of accounts
Ordinary account: savings can be used for housing, investment, and other approved purposes.
Special account: savings are set aside for retirement and can be used for investment in retirement-related financial products.
Medisave account: savings help CPF members meet their own or their immediate family's hospitalization expenses.
Retirement account: created for CPF members when they reach the age of 55. Savings from the ordinary and special accounts up to the minimum sum amount (148,000 SGD as of 2013) are transferred to this account. This amount is set aside for the purpose of providing members with steady post-retirement income.

Switzerland

At a glance
Since 1985, Switzerland has had a mandatory system of occupational pension plans for workers with incomes above a certain threshold. Participants do not have a choice of plan, but are placed in their employer's plan. The government regulates minimum contributions and minimum returns on contributions during accumulation. In 2005, Switzerland extended coverage of the mandatory system, to some extent, to low-income and part-time workers.

Highlights of the spend-down phase

DC spend-down options
Lump sums: Since 2005, plans have been required to offer participants at least 25 percent of their mandatory retirement savings as a lump sum payment.
Programmed withdrawals: Programmed withdrawals are not available to plan participants.
Annuities: Switzerland requires that annuities have disability and survivor benefits. Pension funds may voluntarily index annuity payments for inflation if they are financially able to do so. In the mandatory portion of the DC system, annuities must be the default option. Although official data are not available, researchers estimate that participants annuitize about 80 percent of their account balance.
Providers
Spend-down options may be administered by independent foundations that manage plans.

Regulators
The Oberaufsichtskommission (OAK Commission) was established in 2012 to harmonize regional oversight and regulation of trust-based pension plans. The Commission also oversees the Guarantee Fund, which protects participants in the event that a pension plan becomes insolvent, as well as the Substitute Occupational Benefit Institution, which extends coverage to employers who do not meet their pension obligations and to those who are not otherwise covered and wish to be covered voluntarily, such as the self-employed, among others. The Swiss Financial Market Supervisory Authority is responsible for overseeing insurance companies. It ensures that insurance companies comply with laws and regulations and fulfill their licensing requirements. The Federal Social Insurance Office plans, manages, and monitors Switzerland's social insurance systems to ensure they function effectively.

Switzerland (cont.)

Types of plans
Mandatory occupational plans are administered by independent foundations and may be sponsored by a single employer or a group of employers. Plans are managed by boards of trustees with equal employer and employee representation. The organization of plans also varies by the degree of risk coverage with respect to longevity, premature death, or disability: autonomous plans bear all the risk; autonomous plans with reinsurance hand over some of the risk to a reinsurance company; semiautonomous plans include two cases, (1) a plan hands over only the risk of death or disability to an insurance company and consequently still bears the risk of longevity, and (2) the plan buys the old-age pensions from the insurer too; fully insured plans are plans in which all the risks are covered by an insurance company; and savings associations by construction bear no actuarial risks, because only old-age savings are accumulated.

National pension system
Switzerland's old age and survivors' insurance (AHV/AVS) is a pay-as-you-go public pension that is funded by payroll contributions and general revenues. The system has two components: an earnings-related public pension funded mainly by payroll contributions and an income-tested supplementary benefit funded by general revenues. Payment amounts for the earnings-related benefit are based on lifetime earnings. Payments in retirement are indexed to prices and earnings. The government coordinates benefits between the AHV/AVS and occupational pension plans so that, combined, they replace about 60 percent of preretirement income for workers with lower incomes.

United Kingdom

At a glance
Voluntary DC plans make up most of the private pension system in the U.K. An individual may be a member of a number of different pension plans simultaneously. From October 2012, in a staged process, employers were required to automatically enroll their eligible employees into a pension plan—including a new low-cost DC pension saving plan called the National Employment Savings Trust—and provide a minimum contribution.
Total system assets: £385.9 billion (about $584.7 billion) as of 2010.

Highlights of the spend-down phase

DC spend-down options
Lump sums: Participants may take up to 25 percent of plan assets as a tax-free lump sum, subject overall to 25 percent of the individual's lifetime tax-free pension allowance and individual plan rules.
Programmed withdrawals: Under a capped drawdown, participants may withdraw each year up to 120 percent of the amount they would receive from a level-payment single-life annuity.
Plan administrators are responsible for reviewing these withdrawals every 3 years and resetting maximum withdrawal limits as needed. Participants who can demonstrate secure income of at least £20,000 (about $30,300) per year may take withdrawals without limit while leaving the balance of the fund invested. This option is known as flexible drawdown.
Annuities: Participants can choose from a number of different options for single-life and joint-life annuities, including level or escalating payments, a guarantee period, and impaired or enhanced rates.

Providers
Plan administrators, which may be trustees or for-profit providers, are responsible for administering programmed withdrawals. Private insurance companies offer a variety of annuity products in the retail market.

Regulators
The Pensions Regulator is solely responsible for regulating trust-based plans and coordinates with other entities on regulating contract-based plans. It has the authority to oversee the administration of these plans and the contributions made to them. For DC plans, its statutory objectives include protecting the benefits of plan participants and promoting and improving understanding of good administration of plans. The Financial Conduct Authority and the Prudential Regulation Authority share oversight of contract-based plans and financial service providers. The Financial Conduct Authority is one of two successors to the former regulator, the Financial Services Authority. It is responsible for ensuring fairness in the conduct of firms toward their employers and participants, and for overseeing smaller insurers as well as asset managers and advisory firms. The Prudential Regulation Authority, the smaller of the two new entities, is a subsidiary of the Bank of England and is tasked with monitoring the safety and soundness of larger insurers, banks, and asset managers.

United Kingdom (cont.)

Types of plans
Trust-based plans (41 percent of DC participants): These are employer-sponsored plans and are usually non-profit entities managed by a board of trustees that typically hires one or more service providers for recordkeeping and investment management services. The trustees are responsible for selecting and vetting the investment options.

National pension system
The basic state pension is a regular payment from the government once an individual reaches a certain age. The amount depends on the number of qualifying years of National Insurance contributions an individual has, at least 30 years for the full amount. As of August 2013, the maximum full basic state pension is £110.15 per week (about $167). There is also an earnings-related element, called the state second pension, which gives individuals who earn lower wages or cannot work as much as others the chance to receive a better state pension. Under current rules, there is a cap on annual earnings of £40,040 (about $60,700).

In addition to the contact named above, Tamara Cross (Assistant Director), Tom Moscovitch, Lacy Vong, and Seyda Wentworth made key contributions to this report. Also contributing to this report were James Bennett, Alicia Cackley, Grace Cho, Katie Delgado, Holly Dye, Kathy Leslie, Sheila McCoy, Ernest Powell, Jr., Stephen Sanford, MaryLynn Sergent, Jessica Smith, Roger Thomas, Frank Todisco, Kathleen van Gelder, and Walter Vance.

Insurance Markets: Impacts of and Regulatory Response to the 2007-2009 Financial Crisis, GAO-13-583. Washington, D.C.: June 27, 2013.
401(k) Plans: Labor and IRS Could Improve the Rollover Process for Participants, GAO-13-30. Washington, D.C.: March 7, 2013.
Retirement Security: Annuities with Guaranteed Lifetime Withdrawals Have Both Benefits and Risks, but Regulation Varies across States, GAO-13-75. Washington, D.C.: December 10, 2012.
401(k) Plans: Increased Educational Outreach and Broader Oversight May Help Reduce Plan Fees, GAO-12-325. Washington, D.C.: April 24, 2012.
Defined Contribution Plans: Approaches in Other Countries Offer Beneficial Strategies in Several Areas, GAO-12-328. Washington, D.C.: March 22, 2012.
Retirement Income: Ensuring Income throughout Retirement Requires Difficult Choices, GAO-11-400. Washington, D.C.: June 7, 2011.
401(k) Plans: Improved Regulation Could Better Protect Participants from Conflicts of Interest, GAO-11-119. Washington, D.C.: January 28, 2011.
Private Pensions: Changes Needed to Better Protect Multiemployer Pension Benefits, GAO-11-79. Washington, D.C.: October 18, 2010.
Retirement Savings: Automatic Enrollment Shows Promise for Some Workers, but Proposals to Broaden Retirement Savings for Other Workers Could Face Challenges, GAO-10-31. Washington, D.C.: October 23, 2009.
401(k) Plans: Policy Changes Could Reduce the Long-term Effects of Leakage on Workers' Retirement Savings, GAO-09-715. Washington, D.C.: August 28, 2009.
Private Pensions: Alternative Approaches Could Address Retirement Risks Faced by Workers but Pose Trade-offs, GAO-09-642. Washington, D.C.: July 24, 2009.
American workers are primarily saving for retirement through their 401(k) plans and will likely need assistance making complicated decisions about how to spend their money throughout retirement. Other countries with defined contribution (DC) systems are also dealing with this spend-down challenge. To identify lessons for the U.S. from the experiences of other countries, GAO examined selected countries' (1) approaches to offering retirement spend-down options; (2) strategies to help participants make sound decisions; and (3) approaches to regulating and overseeing options. An initial review of countries with established DC systems indicated that some countries, including the six GAO selected--Australia, Canada, Chile, Singapore, Switzerland, and the United Kingdom--have developed innovative spend-down policies with the potential to yield useful lessons for the U.S. GAO reviewed reports on DC plans and interviewed experts and government officials in the U.S. and the selected countries. The six countries GAO reviewed can offer U.S. regulators lessons on how to expand access to a mix of spend-down options for 401(k) participants that meet various retirement needs. Five of the six countries generally ensure that participants can choose among three main plan options: a lump sum payment, a programmed withdrawal of participants' savings, or an annuity. In the last several decades, all the countries took steps to increase participant access to multiple spend-down options, with some first conducting reviews of participants' retirement needs that resulted in policy changes. In the United States, 401(k) plans typically offer only lump sums, leaving some participants at risk of outliving their savings. The U.S. Departments of Labor (DOL) and the Treasury (Treasury) have begun to explore the possibility of expanding options for participants, but have not yet helped plan sponsors address key challenges to offering a mix of options through their plans. Countries reviewed used various strategies to increase participants' knowledge and understanding of spend-down options, which may be useful to DOL in its ongoing efforts. Strategies used by other countries include (1) communicating spend-down options to participants in an understandable and timely manner, and (2) helping participants see how their savings would translate into a stream of income in retirement by providing them with projections of retirement income in their annual benefit statements. Currently, 401(k) participants have difficulty predicting how long their savings will last because most benefit statements do not focus on the stream of income those savings can generate. DOL is currently considering including income projections in statements, which may help participants better understand what their balance could provide on a monthly basis once they retire. Regulators in the countries GAO reviewed employed several approaches to overseeing the spend-down phase aimed at helping participants sustain an income throughout retirement. For example, most of the countries used withdrawal rules and restrictions for lump sums and programmed withdrawals to help protect participants from outliving their savings. With respect to annuities, DOL continues to consider current regulatory barriers that may prevent 401(k) plan sponsors from offering annuities; such barriers do not exist in the other countries. Looking at what other countries require may help DOL in its efforts.
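To make the income-projection idea concrete, the sketch below translates a hypothetical 401(k) balance into a level monthly payment using the standard annuity-payment formula. The balance, return, and horizon are illustrative assumptions, not parameters prescribed by DOL or any statement rule.

```python
# Rough illustration of a retirement income projection: the level monthly
# payment that would exhaust a balance over an assumed horizon.
# Assumptions: the $300,000 balance, 4 percent return, and 25-year horizon
# are hypothetical inputs chosen only for illustration.

def monthly_income(balance, annual_rate, years):
    """Level monthly payment (standard annuity-payment formula)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # number of monthly payments
    if r == 0:
        return balance / n
    return balance * r / (1 - (1 + r) ** -n)

print(f"${monthly_income(300_000, 0.04, 25):,.2f} per month")  # about $1,583
```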
GAO recommends that DOL and Treasury, as part of their ongoing efforts, consider other countries' approaches in helping 401(k) plan sponsors expand access to a mix of spend-down options for participants. GAO also recommends that DOL consider other countries' approaches in providing information about options and regulating the selection of annuities within DC plans. In response, DOL generally agreed with GAO's recommendations and said it will evaluate these approaches.
The problems in the D.C. public school system have persisted for years despite numerous efforts at reform. In 1989, a report by the D.C. Committee on Public Education noted declining achievement levels as students moved through the grades, the poor condition of the school system's physical facilities, and the lack of accountability among D.C. agencies for the schools. Recent reports have continued to cite these problems. In 2004, the Council of the Great City Schools reviewed the D.C. school system and cited the continued failure to improve students' academic performance. In 2006, an analysis of DCPS reform efforts by a consulting firm found no progress and recommended a change in governance to improve student achievement and systemwide accountability. In response to these problems, the D.C. Council (the legislative branch of the D.C. government) approved the 2007 Reform Act, which significantly altered the governance of the D.C. public schools. The Reform Act transferred the day-to-day management of the public schools from the Board of Education to the Mayor and placed DCPS under the Mayor's office as a cabinet-level agency. Prior to the Reform Act, the head of DCPS reported to the Board of Education. The Reform Act also moved the state functions into a new state superintendent's office, moved the facilities office out of DCPS, and created a D.C. Department of Education headed by the Deputy Mayor for Education. (See fig. 1.) DCPS: DCPS functions as a traditional local educational agency, or school district. The head of DCPS, the Chancellor, is appointed by the Mayor, confirmed by the D.C. Council, and serves at the Mayor's discretion. The Chancellor sets the academic priorities and the curriculum for public schools, and works with schools in need of improvement under the No Child Left Behind Act (NCLBA). School districts have the primary responsibility for ensuring that underperforming schools receive technical assistance, as required by NCLBA. Department of Education: The new D.C. Department of Education is headed by the Deputy Mayor for Education and oversees the state superintendent's office, facilities office, and the ombudsman's office. The department is responsible for planning, coordinating, and supervising all public education and education-related activities that are under the purview of these three offices. It also acts as chief advisor to the Mayor for broad, high-level education strategies that involve more than one District education office and has responsibility for bringing together key players to determine who should take the lead on specific initiatives. In addition, the Deputy Mayor coordinates the work, direction, and agenda of the Interagency Collaboration and Services Integration Commission (Interagency Commission), which serves as a high-level policymaking body that coordinates meetings with directors from children- and youth-serving agencies. According to the Deputy Mayor, the purpose of the Interagency Commission is to build consensus and set priorities for how to best address the needs of District children and youth. Office of the State Superintendent of Education: The state superintendent's office is responsible for functions traditionally handled by a state educational agency. It develops academic standards, helps develop teacher licensing requirements, and administers funds for federal and District education programs.
The State Superintendent is also responsible for developing comprehensive assessments, or tests, and ensuring that DCPS meets federal requirements for elementary and secondary education under NCLBA. The office also oversees, among other functions, those related to early childhood education programs and adult education and literacy. State Board of Education: While the Board of Education—renamed the State Board of Education—no longer has responsibility for day-to-day operations of the public schools, it is responsible for approving the District's academic standards, high-school graduation requirements, and other educational standards. It is required to advise the State Superintendent on policies related to the governing of vocational and charter schools and proposed education regulations. Five of the nine State Board of Education members are elected, and four are appointed by the Mayor and confirmed by the D.C. Council. Office of Public Education Facilities Modernization (facilities office): The Reform Act not only moved the facilities office out of DCPS but also gave the new office independent procurement and personnel authority. These functions were formerly performed by separate divisions within DCPS not directly accountable to or managed by the DCPS facilities office. The new facilities office is responsible for modernization and maintenance of D.C. public schools. DCPS retains oversight of the janitorial services of individual schools. The Reform Act also gave the D.C. Council an expanded role in overseeing some aspects of D.C. public school management. For example, the Mayor is required to submit proposed DCPS rules and regulations to the Council for review. In addition, the Council has gained new powers over the DCPS budget. The Mayor submits the budget for Council review, and the Council may modify the funding allocated to individual schools. Previously, the Council only had authority to approve or disapprove the budget. The early efforts to improve D.C. public schools have focused largely on broad management reforms and other activities that lay the foundation for long-term improvements, such as developing new data systems, a school consolidation plan, and academic priorities, and improving school facilities. Management reforms included the transfer of many functions from DCPS to the new offices of state superintendent and facilities. According to District officials, moving state-level education and facility functions out of DCPS should give the Chancellor more time to focus on issues that directly affect student achievement. Furthermore, moving state functions out of DCPS is intended to allow more effective oversight of the District's education programs. The management reforms also included specific human capital initiatives, such as new central office personnel rules and new systems for evaluating central office and state employee performance that are designed to improve office efficiency. District education offices also have begun to lay a foundation for long-term improvements to student and personnel data systems and management of building maintenance. As required by the Reform Act, state-level education functions previously performed by DCPS were transferred to the new office of the state superintendent. This office developed a transition plan, as required by the Reform Act, which detailed the transfer of authority and restructuring of key staff functions and budgets.
On October 1, 2007, over 100 staff, functions, and associated funds were transferred to the office of the state superintendent. Staff who spent at least half their time working on state-level functions, such as administering funds for federal and state education programs, became employees of the state superintendent's office. The Reform Act moved state functions out of DCPS, in large part, to provide for independent oversight. Prior to the Reform Act, there was no clear separation of funding, reporting, and staffing between local and state functions within DCPS. For example, staff who monitored federal grant programs reported to the same person as staff who implemented those programs. As a result of the Reform Act, staff who perform state-related functions, such as monitoring federal programs, report to the State Superintendent, whereas staff who implement the programs report to the DCPS Chancellor. The transition plan also laid out immediate and long-term priorities, such as federal grants management reform and improved teacher quality. To improve federal grants management, the State Superintendent has established priorities and begun to address long-term deficiencies identified by the U.S. Department of Education (Education) related to federal program administration, including compliance with NCLBA. Specifically, the State Superintendent has established a direct line of accountability by having the director of federal grants report directly to her and serve on her leadership team. In addition, to meet NCLBA requirements, the State Superintendent is in the process of establishing a statewide system of support that will provide technical assistance to underperforming schools. The State Superintendent has stated that establishing this process is challenging, given that 75 percent of D.C. schools have been identified as needing improvement under NCLBA. The District also ranks as one of the lowest school districts in teacher qualifications, with only 55 percent of core classes taught by teachers who meet NCLBA requirements for being highly qualified. The transition plan identified teacher quality as a priority area but does not outline measurable goals for increasing the number of highly qualified teachers. According to the State Superintendent, the office has started to develop a strategic plan that will provide more specifics on its goals and objectives. Specifically, this plan would include measurable goals, such as increasing the number of highly qualified teachers. According to the state superintendent's office, this strategic planning effort will be completed in mid-summer 2008. The state superintendent's office also plans to revise the District's "highly qualified teacher" definition under NCLBA and is considering revisions to how the District certifies teachers to align with the revised definition. The Reform Act also created a new facilities office to improve the conditions of DCPS school facilities. Unlike state-level functions, DCPS facilities staff and functions have not yet formally transferred to the new facilities office. Although the new office took over responsibility for modernization of school facilities (i.e., major renovations or new construction) and facility maintenance in the summer of 2007, functions and staff will not be formally transferred until the facility budget is "reprogrammed" and moved. In addition, the office will oversee general contractors who are hired for major construction projects, such as the building of new schools.
The director of the facilities office told us that about 400 staff (building engineers, painters, and general maintenance workers) will transfer to his office. The District's broad management reforms also included an emphasis on human capital initiatives, particularly efforts to hold employees accountable for their work. Both the State Superintendent and the DCPS Chancellor include new individual performance evaluations as part of their efforts to develop high-performing organizations. Previously, performance evaluations were not conducted for most DCPS staff, including those who moved to the state superintendent's office. DCPS officials told us that all staff had received performance evaluations as of January 2008. These evaluation forms were based on District government-wide competencies, such as maintaining and demonstrating high-quality and timely customer service and using resources effectively. DCPS officials told us that these evaluations do not yet link to their offices' performance goals because they had limited time to implement the new performance system. However, they stated that they plan to develop the linkages over the next year. Officials at the state superintendent's office told us that performance measurement plans have been developed for all staff and that performance evaluations based on those plans will begin in late March 2008. The State Superintendent has required each staff member to develop an individual plan that includes specific goals that are linked to the office's overall goals as outlined in the office performance plan. The facilities office intends to create and sustain a culture of high performance and accountability by implementing a performance management system that will hold employees accountable for their work and establish a performance feedback process that ensures "a dialogue between supervisors, managers, and employees throughout the year." Linking individual performance evaluations to organizational goals is an important step in building a high-performing organization. As we noted in a previous report, organizations use their performance management systems to support their strategic goals by helping individuals see the connection between their daily activities and organizational goals. Other human capital initiatives included the Chancellor's effort to improve the capacity of the central office by terminating central office employees who were assessed as not meeting expectations on their performance evaluations and replacing them with staff who have the requisite skills. Specifically, the Chancellor told us she needs staff who are capable of providing critical central office services, so that, for example, teachers are paid and textbooks delivered on time. Several principals we spoke with told us that school staff have spent considerable time repeatedly calling the central office for support or supplies, time that could otherwise be spent on instruction. In January 2008, the D.C. Council passed the Public Education Personnel Reform Amendment Act of 2008, submitted by the Chancellor and the Mayor, which gave the Mayor greater authority to terminate certain staff within DCPS' central office, including non-union staff and staff hired after 1980. According to the Chancellor, this legislation ultimately will allow her to begin building a workforce that has the qualifications needed for a high-functioning central office.
Both the state superintendent's office and DCPS are working to improve their data systems to better track and monitor the performance of students, teachers, and schools. The superintendent's office is in the process of selecting a contractor to build a longitudinal database that will store current and historical data on students, teachers, and schools. Currently, there is no one system that tracks the movement of students among District schools. The new database is being designed to standardize how data are collected from DCPS and charter schools and to track student data, such as attendance and test scores, across multiple years. According to the state superintendent's office, this database will help stakeholders identify which schools and teachers are improving student achievement and determine what instructional approaches work best for which types of students. Education awarded the state superintendent's office a 3-year grant totaling nearly $6 million to help fund this effort. The database is expected to be fully operational by 2012. DCPS is also focused on improving the quality of student data, some of which will be entered into the state longitudinal database. Currently, DCPS student data are not consistently reported throughout the numerous data systems. In addition, the multiple systems often have contradictory information. For example, the Chancellor told us that one system showed there were 5,000 special education students in the District while another showed 10,000. To address these problems, DCPS officials told us that they are consolidating the district's data systems, eliminating duplicate information, and verifying data accuracy. DCPS officials told us they expect the new student data management system to be operational by February 2009. In addition to student data systems, DCPS has also taken steps to change and improve its personnel data systems by moving from a paper-based to an electronic system. DCPS scanned millions of personnel files into an electronic data system. According to agency officials, this was necessary because the files that existed were in unorganized stacks in office closets and not securely maintained. DCPS officials told us that they had scanned nearly 5 million documents. The scanning revealed missing personnel records for some staff members and, in other cases, job descriptions that did not match the jobs staff were actually performing. In addition, the D.C. Office of the Inspector General is currently conducting an audit of the DCPS payroll system, to be released in the summer of 2008, to verify that every individual who receives a paycheck from DCPS is currently employed with the school system. In February 2008, DCPS completed its preliminary school consolidation (closing) plan, which identified over 20 schools for closure over the next several years in an effort to provide more resources to the remaining schools. Plans to consolidate D.C. public schools have been underway in recent years, and Congress has raised concerns about the inefficiency of maintaining millions of square feet of underutilized or unused space in DCPS facilities. (DCPS is currently operating at approximately 330 square feet per student, while the national average is 150 square feet.) According to DCPS officials, the cost of administration, staff, and facilities in underutilized schools diverts resources from academic programs for all students. However, it is unclear how much long-term savings, if any, will result from these closings.
DCPS officials told us that they are currently working with the facilities office and the District Office of the Chief Financial Officer (OCFO) to develop long-term cost estimates. In addition, some parents, community groups, and the D.C. Council disagreed with the process the Chancellor and Mayor used to develop the plan. The D.C. Council expressed concern that the Mayor and Chancellor did not present the proposal to the Council before it was made public, and some community members met to express their opposition to the closings. The Chancellor provided a detailed report of the criteria used to select schools for closure and held community meetings. Based on input from parents and the community, the Chancellor revised the list of schools to be closed. The consolidation plan was finalized in March 2008. In the area of academic achievement, DCPS has set academic priorities for the 2007-2008 school year and is in the process of establishing longer-term priorities. The Chancellor told us that the academic priorities will build on DCPS' 2006 Master Education Plan, which established key strategies and goals to direct instruction within DCPS. The Chancellor noted, however, that the 2006 plan cited copious goals and objectives without prioritizing them or establishing explicit time frames or clear strategies for how DCPS would meet the goals. In November 2007, DCPS laid out its 2007-2008 academic priorities, which included key objectives and strategies that focus on improving student achievement, school facilities, parental and community involvement, and central office operations. For example, under its objective to improve student achievement, DCPS identified, as a major initiative, efforts to recruit and hire high-quality principals for roughly one-third of its schools. According to the Chancellor, getting high-quality principals to serve as instructional leaders is a key step to improving the quality of teachers and classroom instruction. DCPS has launched a national recruitment strategy and plans to select candidates by the end of the 2007-2008 school year. The Chancellor is also focusing on longer-term priorities, such as developing a districtwide curriculum aligned to academic standards and assessments, and providing teachers with professional development on instructional strategies for the curriculum. DCPS is currently working on a five-year academic plan that is to be completed by March 2008. (See table 1 for key initiatives and completion dates.) The facilities office has worked since the summer of 2007 to address the backlog of repairs the office inherited from DCPS. The director of the office told us that he found that school heating and plumbing systems were inoperable, roofs leaked, and floors needed replacing. In addition, he told us that many schools were in violation of District fire codes, with exit doors locked from the inside for security. The director of the facilities office also told us that when his office took responsibility for school maintenance, he found thousands of work orders that had been submitted to address these building deficiencies that had not been closed. In some cases, the repairs were completed but the work order was not closed; however, in many cases, the work orders were several years old and the repairs had not been completed. In addition, the facilities director found that most of the work orders did not adequately reflect the scope of the work needed, and the cost of the repairs was underestimated.
For example, he told us that a work order may request repairs related to the symptom rather than the cause of the problem, such as painting over a water stain in the ceiling rather than fixing the more expensive plumbing problem. To address the backlog and ongoing facilities needs, the new office undertook several programs in the summer and early fall of 2007. Repairs were made to over 70 schools that were not slated to undergo modernization for years. According to facilities officials, needed painting, plumbing, electrical, and other work was done at each of the schools. In addition, heating and air-conditioning systems were assessed at all District schools for needed repairs. According to the facilities director, all schools with central air conditioning received upgrades, and about 670 new air conditioning units were installed. The office found, however, that about 1,000 to 1,500 classrooms did not have air conditioning. To ensure classrooms have air conditioning by spring 2008, the facilities office is planning to upgrade electrical systems to allow installation of new cooling units. According to the director, the office has also made repairs to school heating systems, and all schools had heat by October 15, 2007. He noted that many of the heating repairs could have been avoided if the heating systems had received adequate maintenance. The office found many schools where boilers installed only three to four years ago were inoperable due to poor maintenance. The office also started a "stabilization" program in the fall of 2007 to make improvements to the remaining 70 or so schools. About $120 million is budgeted to correct possible fire code violations and make plumbing, roofing, and other repairs. According to the facilities director, the work order backlog should be largely eliminated by these maintenance and modernization efforts. Furthermore, a facilities official told us that the office is prioritizing work order requests by the urgency of the request, that is, whether it is a hazard to students or a routine repair. According to this official, emergency repairs are addressed the same day, or the day after, the work order is submitted. Routine repairs and maintenance, such as plumbing and painting, are addressed by the in-house trades (painters, plumbers), while more complicated repairs are addressed by contractors that have been "pre-qualified" by the facilities office. Contracts for major repairs, such as replacing an entire roof, are put out for competitive bid. Finally, District officials told us that the facilities office is in the process of revising the DCPS 2006 Master Facilities Plan, which outlined how DCPS planned to use and improve school buildings, offices, and other facilities over a 15-year period. According to District officials, the revised plan will align with the Chancellor's academic priorities and school consolidation efforts. The Master Facilities Plan was due on October 1, 2007, but the facilities director was granted an extension until May 31, 2008. The Mayor and education officials have introduced a performance-based process designed to establish accountability for their school reform efforts. This process includes weekly meetings to track progress and accomplishments across education offices and annual performance plans for these offices, including the D.C. Department of Education's plan. According to recent studies of the D.C. school system, little was done in the past to hold offices and education leaders accountable for progress.
Weekly meetings are a key component of the District's performance-based process and, according to the Deputy Mayor for Education, integral to how the Mayor and D.C. education offices monitor the progress of reform efforts. The Mayor's meetings, known as CapStat meetings, are used to track progress and accomplishments across all D.C. government offices. Every 3 months, the City Administrator's office develops a list of topics for possible discussion at CapStat meetings based, in part, on a review of each office's performance plan. According to city officials, issues for CapStat meetings typically concern agencies having difficulty meeting their specific performance targets. These issues are given to the Mayor, who then selects which ones will be discussed. The Mayor may also identify other issues that have emerged as immediate concerns, for example, those related to the safety and health of D.C. residents. At the CapStat meeting, cognizant managers provide status updates using performance data. The Mayor then assigns follow-up tasks to particular managers with agreed-upon time frames. The Mayor reviews whether follow-up tasks have been completed. This tracking provides the basis for the Mayor's office to monitor progress and, if progress is inadequate, determine what further action is needed. For example, during the summer of 2007, a CapStat meeting focused on school facilities. The data indicated that many of the schools' heating systems were not functioning. The Mayor's office asked the director of the facilities office to develop a plan within 2 weeks to ensure that all schools had functional heating systems by mid-October. Officials told us the Mayor's office tracked the submission of the plan and the heating system work. As previously mentioned, District officials reported that all schools had heat by October 15. The Chancellor and the State Superintendent adopted processes similar to CapStat—SchoolStat and EdStat, respectively—to hold managers accountable for their offices' performance (see table 2 for information on the three "Stat" meetings). The Chancellor uses weekly SchoolStat meetings to discuss high-priority issues and what actions DCPS department managers need to take to improve performance. Similarly, the state superintendent's office uses weekly EdStat meetings to monitor progress in administration of federal grants and special education services. At EdStat meetings, managers analyze performance data, collaborate with program managers on remediation strategies, and monitor subsequent performance data to validate the effectiveness of actions taken. The State Superintendent plans to use EdStat meetings to monitor whether the office is meeting time frames for providing assistance to schools identified as in need of improvement under NCLBA. In addition to weekly meetings, the Mayor's office requires education offices to develop and follow annual performance plans as another component of the accountability process. These performance plans include broad objectives, such as increasing student achievement, assessing the effectiveness of educational programs, and coordinating services with city agencies. In addition, the plans detail specific actions to achieve these objectives and key performance indicators designed to measure progress. For example, regarding DCPS' 2007-2008 performance plan objective to increase student achievement, DCPS plans to provide training for teachers to help them make better use of student performance data.
Similarly, regarding the State Superintendent's objective to provide educators with information needed to improve schools and to assess the effectiveness of educational programs, the office plans to provide data from its longitudinal database to educators to help them determine where specialized programs are needed. The first performance plan for the facilities office is scheduled to be in place in November 2008. The D.C. Department of Education has taken some steps to coordinate and integrate the various efforts of the District's education offices. The Deputy Mayor for Education told us that the department reviews the individual annual performance plans of education offices to ensure they are aligned and not working at cross-purposes. The department also uses CapStat meetings to monitor the progress of the education offices. In addition, according to the Deputy Mayor for Education, the department tracks the goals and activities of city youth agencies, such as the Child and Family Services Agency, to ensure they are consistent with the goals of the education offices. D.C. Department of Education officials also told us they will take additional steps in the future. The Deputy Mayor will review each education office's long-term plan, such as the Chancellor's five-year academic plan and the revised Master Facilities Plan, to ensure they are coordinated and implemented. The Deputy Mayor also told us that the department will rely on findings from annual evaluations of DCPS to assess the progress of the reform efforts. Officials with the D.C. Department of Education told us they have not yet developed a documented districtwide education strategic plan. According to department officials, they do not intend to develop a written plan at this time, in part, because they are addressing immediate and urgent issues. They questioned the need for a written document as opposed to a formalized process that would help ensure that the individual District education offices' long-term plans are coordinated and executed. While developing a long-term strategic plan takes time, it is useful for entities undergoing a major transformation, such as the D.C. public school system. The District has a new public school governance structure and newly created education offices. A strategic plan, and the process of developing one, helps organizations look across the goals of multiple offices and determine whether they are aligned and connected or working at cross-purposes. By articulating an overall mission or vision, a strategic plan helps organizations set priorities, implementation strategies, and timelines to measure progress of multiple offices. A long-term strategic plan is also an important communication tool, articulating a consistent set of goals and marking progress for employees and key stakeholders, from legislative bodies to community organizations. The problems in the D.C. public school system are long-standing. Past efforts to reform the system and ultimately raise student achievement have been unsuccessful. The Reform Act made many changes: new divisions of responsibility, improved oversight, and greater opportunity for the Chancellor to focus on academic progress. The Mayor and his education team recognized that before they could take full advantage of these changes, they would have to revamp the school system's basic infrastructure.
Their initial efforts, including those to create a highly functional central office and repair school buildings to make them safe for students, provide some of the basics for successful learning environments. However, the Mayor and his team will need to sustain the momentum created over the last 6 months and focus as quickly as possible on the challenges that lie ahead—improving the reading and math skills of students and the instructional skills of teachers. In addition, the Mayor and his team have taken steps to hold managers and staff accountable for improving the school system, such as holding weekly performance meetings, developing annual performance plans, and coordinating education activities. These changes form the cornerstone of the Mayor's effort to transform the organizational culture of the District's public education system. However, the Mayor's team has not yet developed a long-term districtwide strategic education plan. Given the significant transformation underway, a strategic plan could provide a framework for coordinating the work of the education offices and assessing short-term and long-term progress. Without a plan that sets priorities, implementation goals, and timelines, it may be difficult to measure progress over time and determine if the District is truly achieving success. Additionally, a districtwide strategic education plan would increase the likelihood that the District's education offices work in unison toward common goals and that resources are focused on key priorities, not on noncritical activities. A strategic plan could also help determine when mid-course corrections are needed. Given that leadership changes over time, a strategic education plan would provide a road map for future district leaders by explaining the steps taken, or not taken, and why. To help ensure the long-term success of the District's transformation of its public school system, we recommend that the Mayor direct the D.C. Department of Education to develop a long-term districtwide education strategic plan. The strategic plan should include certain key elements, including a mission or vision statement, long-term goals and priorities, and approaches and time frames for assessing progress and achieving goals. It may also include a description of the relationship between the long-term strategic and annual performance goals. In addition, the strategic plan should describe how coordination is to occur among the District's education offices. As you know, Mr. Chairman, you have requested that we conduct a second, longer-term study of changes in D.C. schools' management and operations, and the results of these changes. We will begin that study this month. We provided a draft of this report to the offices of the Mayor and District education officials for review and comment, and on March 11, 2008, officials from the Mayor's office discussed their comments with us. They told us they support the need for an overarching strategy that integrates the efforts and plans of DCPS, the state superintendent's office, and the facilities office. They said that these offices are in the process of developing long-term strategic plans to serve as the foundation for an overall education strategy, and that the Deputy Mayor for Education is committed to coordinating and sustaining these efforts.
Further, they noted that a districtwide strategy can take many forms, and that the Deputy Mayor's preferred approach is to develop a formal process, rather than a written document, to ensure efforts are coordinated and executed as efficiently as possible. They noted that in the past, plans were written, "put on a shelf," and never used. We agree that the Deputy Mayor is taking steps to coordinate the individual plans of these offices, and that the Mayor's education team recognizes the importance of taking a strategic approach to address the educational needs of District students. However, as we have said in this statement, we see value in developing a documented strategy that could help the District's education leaders coordinate their efforts and goals, and provide future leaders the benefit of understanding what worked, what did not, and why. While past administrations may have developed strategic plans and not used them, what is unknown is whether these plans could have been of value if they had been used. The current administration's development and implementation of an articulated, documented strategy could provide a foundation that would help coordinate future efforts. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Harriet Ganson, Elizabeth Morrison, Sheranda Campbell, Jeff Miller, Bryon Gordon, Susan Aschoff, Sheila McCoy, Sandy Silzer, Sarah Veale, Janice Latimer, and Terry Dorn. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In response to long-standing problems with student academic performance, the condition of school facilities, and the overall management of the D.C. public school system, the D.C. Council approved the Public Education Reform Amendment Act of 2007 (Reform Act). The Reform Act made major changes to the operations and governance of the D.C. public school system, giving the Mayor authority over public schools, including curricula, personnel, and school facilities. While other large urban school districts have transferred governance of schools to their mayors, D.C. is unique because its school system performs both local- and state-level functions for many education responsibilities. GAO's testimony focuses on (1) the status of the District's efforts to reform its public school system, and (2) what the District has done to establish accountability for these efforts. To address these issues, GAO reviewed documents, interviewed District education officials, and interviewed principals from nine D.C. public schools. The early efforts to improve D.C. public schools have focused largely on broad management reforms and other activities that lay the foundation for long-term improvements to the D.C. public school system. The broad management reforms included the transfer of many functions from D.C. public schools (DCPS) into the new office of the state superintendent, which could allow for more effective oversight of the District's education programs. Prior to the Reform Act, there was no clear separation of funding, reporting, and staffing between local and state functions. A new facilities office was also created to improve the conditions of DCPS school facilities. Moving state-level education and facilities functions out of DCPS is intended to give the head of DCPS, called the Chancellor, more time to focus on issues that directly affect student achievement. The management reforms also included specific human capital initiatives, such as new DCPS central office personnel rules and new systems for evaluating central office and state-level employee performance. In addition, both the State Superintendent and the Chancellor are working to improve their data systems to better track and monitor the performance of students, teachers, and schools. DCPS also completed its school consolidation plan, which identified over 20 schools for closure over the next several years. In addition, the school facilities office is working to address the backlog of repairs. The director of the facilities office told us that he found that school heating and plumbing systems were inoperable, roofs leaked, and floors needed replacing. In addition, he said many schools were in violation of District fire codes. To address the backlog and ongoing facilities needs, the new office undertook several repair programs in the summer and early fall of 2007. The D.C. Mayor and education officials have introduced a performance-based process designed to establish accountability for their school reform efforts. This process includes weekly meetings to track progress and accomplishments across education offices. In addition, the Mayor's office required agencies to develop and follow annual performance plans. D.C. Department of Education officials told us that they review the individual performance plans of District education offices, such as DCPS and the state superintendent's office, to ensure they are aligned and not working at cross-purposes.
However, the department has yet to develop a long-term districtwide education strategy that could integrate the work of these offices, even though it included the development of such a strategy in its 2007-2008 performance plan. While developing a strategic plan takes time, it is useful for entities undergoing a major transformation, such as the D.C. public school system. A strategic plan helps organizations look across the goals of multiple offices and identify whether they are aligned and connected or working at cross-purposes. Without a plan that sets priorities, implementation goals, and timelines, it may be difficult to measure progress over time and determine if the District is truly achieving success. In addition, given that leadership changes over time, a strategic plan would provide a road map for future District leaders by explaining the steps taken, or not taken, and why.
MDA’s BMDS is being designed to counter ballistic missiles of all ranges—short, medium, intermediate, and intercontinental.ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing multiple systems that when integrated, provide multiple opportunities to destroy ballistic missiles before they can reach their targets. The system includes space-based sensors as well as ground- and sea-based radars, ground- and sea-based interceptor missiles, and a command and control, battle management, and communications system providing the warfighter with the necessary communication links to the sensors and interceptor missiles. A typical engagement scenario to defend against an intercontinental ballistic missile would occur as follows: Since Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the cue, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, decoys, and debris. When the trajectory of the missile’s warhead has been adequately established, an interceptor—consisting of a kill vehicle mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed of up to 10 kilometers per second (22,000 miles per hour), the warhead is destroyed above the atmosphere through a “hit to kill” collision with the kill vehicle. Inside the atmosphere, interceptors kill the ballistic missile using a range of mechanisms such as direct collision between the interceptor missile and the inbound ballistic missile or killing it with the combined effects of a blast fragmentation warhead (heat, pressure, and grains/shrapnel) in cases where a direct hit does not occur. Table 1 provides a brief description of eight BMDS elements and supporting efforts currently under development by MDA. In 2009, DOD altered its approach to European defense, which originally focused on ground-based interceptors from the GMD element and a large fixed radar as well as transportable X-Band radars, in order to provide defenses against long-range threats to the United States and short-, medium-, and intermediate-range Iranian threats to Europe. This new approach, referred to as the European Phased Adaptive Approach (PAA), consists primarily of Aegis BMD sea-based and land-based systems and interceptors, as well as various sensors to be deployed over time as the various capabilities are matured. The European PAA policy announced by the President articulates a schedule for delivering four phases of capability to defend Europe and augment current protection of the U.S. homeland in the following time frames: Phase 1 in 2011, Phase 2 in 2015, Phase 3 in 2018, and Phase 4 in 2020. DOD’s schedule for the European PAA comprises multiple elements and interceptors to provide an increasingly integrated ballistic missile defense capability. It is projected that each successive phase will deliver additional capability with respect to both threat missile range and raid size. Table 2 outlines the plans and estimated delivery time frames associated with each European PAA phase. 
MDA experienced mixed results in executing its fiscal year 2011 development goals and BMDS tests. For the first time in 5 years, we are able to report that all of the targets used in fiscal year 2011 test events were delivered as planned and performed as expected. Moreover, the Aegis BMD program demonstrated the capability to intercept an intermediate-range target for the first time. Also, the THAAD program successfully conducted its first operational flight test in October 2011. However, none of the programs we assessed were able to fully accomplish their asset delivery and capability goals for the year. At the same time, several critical test failures as well as a test anomaly and delays disrupted MDA's flight test plan and the acquisition strategies of several components. Overall, flight test failures and an anomaly forced MDA to suspend or slow production of three of the four interceptors currently being manufactured. The GMD program, in particular, has been disrupted by two recent failures, which forced MDA to halt flight testing and restructure its multi-year flight test program, halt production of the interceptors, and redirect resources to return-to-flight activities. Production issues forced MDA to slow production of the THAAD interceptors, the fourth missile being manufactured. Table 3 presents a summary of selected MDA goals for fiscal year 2011 and details how well these goals were accomplished. Appendixes IV through XI further detail MDA's progress in each of the major programs. Highlights of progress and challenges this year include the following: Targets: In prior years, we reported that problems with availability and reliability of targets had caused delays in MDA's test program; however, in fiscal year 2011, MDA delivered 11 short- or intermediate-range targets, and all performed successfully. The targets launched during the year supported tests of several different BMDS elements, including Aegis BMD, GMD, and Patriot systems, without causing major delays or failures in flight tests. Among these successful flights was FTX-17, the return-to-flight of MDA's short-range air-launched target in July 2011. This was the target's first launch since an essential mechanism that releases it from the aircraft failed in a December 2009 THAAD flight test. After the failure, the agency identified shortcomings in the contractor's internal processes that had to be fixed before air-launched targets could be used again in BMDS flight tests. Nineteen months later, these deficiencies appeared to be overcome when the target missile was successfully air-launched in FTX-17. To reduce risk, the flight was not planned as an intercept mission but as a target of opportunity for several emerging missile defense technologies, including the Space Tracking Surveillance System. Aegis BMD: In April 2011, the Aegis BMD program demonstrated the capability for the first time to intercept an intermediate-range target, used remote tracking data provided by an Army/Navy Transportable Radar Surveillance Model-2 radar, and demonstrated support for European PAA Phase 1. While the Aegis BMD program successfully conducted this test, there was an anomaly in a critical component of the SM-3 Block IA interceptor. Despite the anomaly, the interceptor was able to successfully intercept the target. In September 2011, the Aegis BMD program failed in its first attempted intercept of its SM-3 Block IB missile. During this test—named FTM-16 Event 2—a problem occurred in the interceptor and it failed to intercept the target.
The Aegis BMD program has had to add a flight test and delay multiple others. Program management officials stated that SM-3 Block IA deliveries were suspended and SM-3 Block IB production was slowed while the failure reviews are conducted. THAAD: The THAAD program also had some noteworthy testing accomplishments in 2011, successfully conducting its first operational flight test in October 2011. This test was a significant event for the program, as it was designed to be representative of the fielded system, with soldiers conducting the engagement. During the test, the THAAD system engaged and nearly simultaneously intercepted two short-range ballistic missile targets. However, THAAD also experienced a delay in its planned flight test schedule for fiscal year 2011. A flight test originally scheduled for the second quarter of fiscal year 2011 was delayed until fiscal year 2012 due to the limited availability of air-launched targets and subsequently was canceled altogether. This cancellation has delayed verification of THAAD's capability against a medium-range target. GMD: As has been the case since 2005, testing failures continued to affect the GMD program in fiscal year 2011. Specifically, as a result of the failed flight test in January 2010, MDA added a retest designated as FTG-06a. However, this retest also failed in December 2010 due to a failure in a key component of the kill vehicle. The GMD program has added two additional flight tests in order to demonstrate the Capability Enhancement II (CE-II) interceptor. However, since fiscal year 2009, MDA had already manufactured and delivered 12 interceptors, 2 of which were used in flight tests, prior to halting further deliveries. The manufacture of components related to the failure and the delivery of interceptors have been halted while the failure review and resolution actions are ongoing. MDA conducted a failure review investigation throughout fiscal year 2011 and concluded that the CE-II interceptor design does not work as intended and therefore requires redesign and additional development. MDA is currently undergoing an extensive effort to overcome the design problem and return to intercept flight tests. According to a GMD program official, the program has already conducted over 50 component and subcomponent tests to develop a fix and verify the design. MDA also realigned resources from planned 2011 testing activities to fund the investigation and return-to-intercept activities, including redesign efforts. For example, the program delayed funding the rotation of older fielded interceptors into flight test assets, delayed funding interceptor manufacturing, and delayed purchasing ground-based interceptor (GBI) upgrade kits. However, the agency did continue its efforts to increase the reliability of the interceptors through upgrades and its repair of five interceptors to help mitigate the effects on the production line. MDA is planning to upgrade 15 interceptors between fiscal years 2013 and 2017. Additionally, MDA plans to refurbish five older interceptors between 2014 and 2017 to support flight tests. SM-3 Block IIA: MDA recognized that the program's schedule included elevated acquisition risks and, as such, took actions in fiscal year 2011 to reduce those risks as well as potential future cost growth.
The program planned to hold its system preliminary design review (PDR)—at which it would demonstrate that the technologies and resources available for the SM-3 Block IIA would result in a product that matched its requirements—but subsystem review problems for key components meant the system review had to be delayed by 1 year. The program appropriately added time and money by revising its schedule to relieve schedule compression between its subsystem and system-level design reviews and incorporated lessons learned from other SM-3 variants into its development to further mitigate production unit costs. The program still expects to meet the 2018 time frame for European PAA Phase 3. Models and simulations are critical to understanding BMDS capabilities. The complex nature of the BMDS, with its wide range of connected elements, requires integrated system-level models and simulations to assess its performance in a range of system configurations and engagement conditions. Assessing BMDS performance through flight tests alone is prohibitively expensive and faces safety and test range limitations that can best be dealt with through sound, realistic models and simulations. Ensuring that the models and simulations are sound and realistic requires a rigorous process to accomplish two main tasks: (1) developing individual system models and realistically linking those models and simulations and (2) gathering data from MDA's ground and flight tests to feed into the models. MDA attempts to confirm that the models re-create the actual performance found in BMDS test events. The Operational Test Agency (OTA) independently assesses how realistic the models are in a formal process called accreditation. When a model is accredited, it means that it can be trusted to produce high-confidence results for its intended use and that the limitations of the model are known. The development of reliable MDA models depends upon the collection of test data upon which to anchor the models. Because MDA had made very limited progress in identifying and collecting needed data, MDA's test program was reoriented beginning in 2010 to enable the collection of data to support the development of BMDS models. In hardware-in-the-loop simulations, actual mission components and hardware are exercised in a laboratory environment while the physical environment and conditions are simulated under the control of computer equipment. MDA officials highlight that the framework is being used to evaluate BMDS performance in increasingly complex and realistic scenarios, employing greater numbers of BMDS assets. The process of developing and linking these models is extremely complex and difficult and will take many years to accomplish. In August 2009, the U.S. Strategic Command and OTA jointly informed MDA of 39 system-level limitations in MDA's models and simulations program that adversely affect their ability to assess BMDS performance. Resolution of these 39 limitations, OTA maintains, would permit MDA's models and simulations to provide more realistic representations of BMDS performance using the full complement of fielded BMDS assets. OTA officials have noted that since August 2009, MDA has partially or fully resolved 7 of these issues and identified technical solutions for 15 more. According to OTA officials, most of the resolved limitations are issues that are more easily addressed, such as the installation of improved communications systems and the provision of separate workstations for simulation controllers. No technical solutions have yet been identified for the remaining 17 of the 39 issues, and OTA officials maintain that they are still awaiting an MDA timeline for the complete resolution of these remaining limitations.
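To make the idea of anchoring models to test data concrete, the sketch below compares hypothetical simulated output against measured flight-test values and quantifies the discrepancy. This is a minimal illustration of the general validation concept, not MDA's or OTA's actual accreditation process; the data and tolerance are invented.

```python
# Minimal sketch of anchoring a model to test data: quantify the gap between
# simulated output and measured flight-test telemetry. The values and the
# 2 km tolerance are hypothetical; actual accreditation criteria are far
# more extensive.
import math

def rms_error(simulated, measured):
    """Root-mean-square difference between simulated and measured values."""
    pairs = list(zip(simulated, measured))
    return math.sqrt(sum((s - m) ** 2 for s, m in pairs) / len(pairs))

# Hypothetical downrange positions (km) at matching timestamps.
sim = [100.0, 220.0, 360.0, 510.0]
meas = [101.2, 218.5, 361.1, 509.4]

err = rms_error(sim, meas)
print(f"RMS error: {err:.2f} km")  # ~1.15 km
print("within tolerance" if err < 2.0 else "model needs rework")
```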
No technical solutions have yet been identified for the remaining 17 of the 39 issues, and OTA officials maintain that they are still awaiting an MDA timeline for the complete resolution of these remaining limitations. In 2009, we reported problems with MDA's model development and the lack of flight test data. Also in 2009, MDA undertook a new approach to test planning to focus the test program on gathering the critical test data needed for modeling and simulation. Since 2009, MDA has bolstered efforts to collect test data for the BMDS model and simulation program; however, considerable effort and time are required to address all known shortfalls. Through its ongoing test data collection activities, MDA has collected 309 critical variables since 2009; however, those represent only 15 percent of the total needed. Flight test failures, anomalies, and delays have reduced the amount of real-world data MDA expected to collect. Additionally, some required data are difficult to collect, posing challenges even when a flight test is properly executed. When tests are carried out, considerable post-test data analysis is required for model development. Under the current plan, MDA does not foresee complete collection of data on these critical variables until sometime between 2017 and 2022.

MDA has also made some limited progress in achieving partial accreditation for some BMDS models—ensuring that they are realistic and can be trusted and that their limitations are known. MDA models are accredited for the specific functions for which they are to be employed. Over the past few years, OTA officials have performed assessments of MDA's models and simulations and have noted that, among the element-level models, those for THAAD and Aegis BMD are the farthest along. While MDA has made some progress toward accreditation of element models for specific functional areas, MDA has not yet achieved OTA accreditation in other key areas; for example, none of the 18 environmental models has been accredited. See appendix III for further details on MDA's modeling and simulation efforts.

To meet the 2002 presidential direction to rapidly field an initial set of missile defense capabilities and update them over time, as well as the 2009 presidential announcement to deploy missile defenses in Europe, MDA has undertaken and continues to undertake highly concurrent acquisitions. For example, large-scale acquisition efforts were initiated before critical technologies were fully understood, and programs were allowed to move forward into production without having completed the testing needed to verify performance. Such practices enabled MDA to quickly ramp up efforts in order to meet tight presidential deadlines, but they were high risk and resulted in problems that required extensive retrofits, redesigns, delays, and cost increases. A program with high levels of concurrency (1) proceeds into product development before technologies are mature or appropriate system engineering has been completed or (2) proceeds into production before a significant amount of independent testing has been conducted to confirm that the product works as intended. High levels of concurrency were present in MDA's initial efforts and remain present in current efforts. Recently, the agency has begun emphasizing the need to follow knowledge-based development practices, which encourage accumulating more technical knowledge before program commitments are made and conducting more testing before production is initiated. Developmental challenges and delays are to be expected in complex acquisitions, such as those for missile defense.
However, when concurrency is built into acquisition plans, any developmental challenges or delays that do occur exacerbate the cost, schedule, and performance effects of those problems, particularly when production lines are disrupted or assets have already been manufactured and must be retrofitted. In 2009, we recommended that MDA synchronize the development, manufacturing, and fielding schedules of BMDS assets with the testing and validation schedules to ensure that items are not manufactured for fielding before their performance has been validated through testing. In response, DOD partially concurred with our recommendation, maintaining that MDA was pursuing synchronization of development, manufacturing, and fielding of BMDS assets with its established testing and validation requirements. However, because MDA continues to employ concurrent strategies, it is likely to continue to experience these types of acquisition problems.

Concurrency is broadly defined as the overlap between technology development and product development or between product development and production of a system. The stated rationale for concurrency is to introduce systems in a more timely manner, to fulfill an urgent need, to avoid technology obsolescence, and to maintain an efficient industrial development and production workforce. While some concurrency is understandable, committing to product development before requirements are understood and technologies are mature, as well as committing to production and fielding before development is complete, is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. At the very least, a highly concurrent strategy forces decision makers to make key decisions without adequate information about the weapon's demonstrated operational effectiveness, reliability, logistic supportability, and readiness for production. Also, starting production before critical tests have been successfully completed has resulted in the purchase of systems that do not perform as intended. Such premature commitments mean that a substantial investment in production is made before the results of testing are available to decision makers. Accordingly, they create pressure to keep producing to avoid work stoppages even when problems are discovered in testing. These premature purchases have affected the operational readiness of our forces and quite often have led to expensive modifications.

In contrast, successful programs that deliver promised capabilities for the estimated cost and schedule follow a systematic and disciplined knowledge-based approach. This approach recognizes that development programs require an appropriate balance between schedule and risk and that, in practice, programs can be executed successfully with some level of concurrency. For example, it is appropriate to order long-lead production material in advance of the production decision, with the prerequisite that developmental testing is substantially complete and the design has been confirmed to work as intended. We have found that, in this approach, high levels of product knowledge are demonstrated at critical points in development. This approach is not unduly concurrent because programs take steps to gather knowledge demonstrating that their technologies are mature, their designs are stable, and their production processes are in control before transitioning between acquisition phases.
This knowledge helps programs identify risks early and address them before they become problems. It is a process in which technology development and product development are treated differently and managed separately. The process of technology development culminates in discovery—the gathering of knowledge—and must, by its very nature, allow room for unexpected results and delays. The process of developing a product culminates in delivery and therefore gives great weight to design and production. If a program is falling short in technology maturity, it is harder to achieve design stability and almost impossible to achieve production maturity. It is therefore key to separate technology development from product development and product development from production—in other words, it is key to avoid concurrency when these transitions are made. The result of a knowledge-based approach is a product delivered on time, within budget, and with the promised capabilities. See figure 1 for depictions of a concurrent schedule and a schedule that uses a knowledge-based approach.

In fiscal year 2011, due to flight test failures and a flight test anomaly, MDA suspended production of two interceptors—one in the GMD program and one in the Aegis BMD program—and slowed production of a third, also in the Aegis BMD program. In addition, development problems with a key THAAD component disrupted that program's interceptor production.

MDA undertook a highly concurrent acquisition strategy to meet the President's 2002 directive to deploy an initial set of missile defense capabilities by 2004. To do so, the GMD element concurrently matured technology, designed the system, tested the design, and produced and fielded a system. While this approach allowed GMD to rapidly field a limited defense consisting of five CE-I interceptors and a fire control system, the concurrency resulted in unexpected cost increases, schedule delays, test problems, and performance shortfalls. Since then, MDA has produced and emplaced all of its planned CE-I interceptors. To address issues with the CE-I interceptors, MDA has undertaken an extensive retrofit and refurbishment program. Prior to fully completing development and demonstrating the capability of the initial interceptor, MDA committed in 2004 to another highly concurrent development, production, and fielding strategy for an enhanced version of the interceptor—CE-II—as shown in figure 2. MDA proceeded to concurrently develop, manufacture, and deliver 12 of these interceptors before halting the manufacture of components and delivery of interceptors in 2011 due to the failure in FTG-06a. That is, although MDA had not successfully tested this interceptor, failing in both of its attempts, it manufactured and delivered 12 of these interceptors. The discovery of the design problem while production was under way has increased MDA's costs, led to a production break, delayed delivery of capability to the warfighter, and altered the flight test plan, and it may require retrofit of fielded equipment. For example, the cost of flight testing to confirm the CE-II capability has increased from $236 million to about $1 billion, more than a fourfold increase. In addition, the program will have to undertake another retrofit program for the 10 CE-II interceptors that have already been manufactured. According to a GMD program official, although the full cost is currently unknown, he expects the cost to retrofit the CE-II interceptors to be around $18 million each, or about $180 million for all 10.
Although the CE-II interceptor was intended to be ready for operational use in fiscal year 2009, it will now be at least fiscal year 2013 before the warfighter has the information needed to determine whether to declare the variant operational. The GMD flight test program has been disrupted by the two back-to-back failures. For example, MDA has restructured the planned multiyear flight test program in order to test the new design prior to an intercept attempt. MDA currently plans to test the new design in a nonintercept test in fiscal year 2012. Because MDA prematurely committed to production before the results of testing were available, it has had to take steps to mitigate the resulting production break, such as accelerating retrofits of 5 of the CE-I interceptors. Program officials have stated that if the test confirms that the cause of the failure has been resolved, the program will restart the manufacturing and integration of the CE-II interceptors. According to MDA, because of the steps taken to develop and confirm the design change, a restart of the CE-II production line at that time will be low risk. However, while MDA has established a rigorous test plan to confirm that the design problem has been overcome, confirmation that the design works as intended through all phases of flight, including the actual intercept, will not occur until an intercept test—FTG-06b—currently scheduled for the end of fiscal year 2012 or the beginning of fiscal year 2013. High levels of concurrency will continue for the GMD program even if the next two flight tests are successful. GMD will continue its developmental flight testing until at least 2022, well after production of the interceptors is scheduled to be completed. MDA is accepting the risk that these developmental flight tests may discover issues that require costly design changes and retrofit programs to resolve. As we previously reported, to date all GMD flight tests have revealed issues that led to either a hardware or software change to the ground-based interceptors. See appendix VIII for more details on the GMD program.

The SM-3 Block IB program, the second version of the SM-3 interceptor, is facing both developmental and production challenges that are exacerbated by its concurrent schedule, as shown in figure 3. This interceptor shares many components with the SM-3 Block IA, but the kinetic warhead is new technology that is still being developed. The need to meet the presidential directive to field the Aegis BMD 4.0.1/SM-3 Block IB by the 2015 time frame for European missile defense is a key driver of the high levels of concurrency. In response to previous developmental problems and to prevent a production break, MDA has twice had to purchase additional SM-3 Block IA interceptors, and it faces a similar decision in fiscal year 2012. According to MDA, the additional SM-3 Block IA missiles were purchased to avoid a production gap, to keep suppliers active, and to meet combatant command SM-3 missile quantity requirements. The program, according to program management officials, was scheduled to purchase the last SM-3 Block IA missiles in fiscal year 2010 and transition to procurement of the SM-3 Block IB missiles in fiscal year 2011. MDA began purchasing the SM-3 Block IB in 2009, beyond the numbers needed for flight testing, while a critical maneuvering technology was immature and prior to a successful flight test.
According to the Director of MDA, these missiles support development and operational testing; prove out manufacturing processes; provide information on reliability, maintainability, and supportability; verify and refine cost estimates; and ensure that the missile will meet its performance requirements on a repeatable basis. MDA has determined that 18 of the 25 SM-3 Block IB missiles ordered are to be used for developmental testing; the remaining 7 interceptors are currently unassigned for tests and may be available for operational use. According to program management officials, these unassigned rounds represent a small portion of the total planned purchases. MDA is also planning to purchase 46 additional SM-3 Block IB missiles in fiscal year 2012. Meanwhile, testing has yet to validate the missile's performance, the cause of the test failures has not yet been determined, and the remaining tests may not be completed until 2013. Consequently, purchasing additional interceptors beyond those needed for development remains premature.

The first SM-3 Block IB developmental flight test failed in September 2011, and an anomaly occurred in an April 2011 flight test of the SM-3 Block IA. The flight test failure and the test anomaly occurred in components that are shared between the SM-3 Block IA and IB. Program officials are still investigating the reasons for these failures. The program was unable to validate initial SM-3 Block IB capability during the failed September test, and program officials hope to conduct in fiscal year 2012 the series of three intercept tests needed to validate SM-3 Block IB capability. Depending on the timing and content of the failure review board results, this schedule could change further. Any SM-3 Block IB missiles ordered in fiscal year 2012, before any needed mitigations for the anomaly and the failure are determined and before the three flight tests confirm that the design works as intended, would be at higher risk of cost growth and schedule delays. In addition, SM-3 Block IB missiles already manufactured but not delivered are also at higher risk of requiring a redesign, depending on the results of the failure review. Program management officials stated that MDA has slowed SM-3 Block IB manufacturing until the outcome of the failure review board is known. It remains unclear whether the additional 46 missiles will be ordered before the failure reviews are complete and the interceptor has demonstrated that it works as intended.

Recognizing the critical importance of completing the planned fiscal year 2012 intercept tests, the operational need for SM-3 missiles, the relative success of the SM-3 Block IA, as well as the potential for a production break, the Senate Committee on Appropriations directed MDA to use the fiscal year 2012 SM-3 Block IB funds for additional Block IA missiles should the test and acquisition schedule require any adjustments during fiscal year 2012. However, a decision to purchase additional SM-3 Block IA missiles in fiscal year 2012 to help avoid a production break may be affected by the SM-3 Block IA failure investigation, which has not yet been completed. Program management officials stated that most deliveries of the SM-3 Block IA have been suspended pending the results of the failure review. See appendix IV for more details on the Aegis BMD SM-3 Block IB program.

MDA awarded a contract to produce THAAD's first two operational batteries in December 2006, before the system's design was stable and before developmental testing of all critical components was complete.
As a result, the THAAD program has experienced unexpected cost increases, schedule delays, test problems, and performance shortfalls. At that time, MDA's first THAAD battery, consisting of 24 interceptors, 3 launchers, and other associated assets, was to be delivered to the Army as early as 2009. In response to pressure to accelerate fielding of the capability, THAAD adopted a highly concurrent development, testing, and production effort that has increased program costs and delayed fielding of the first THAAD battery until early fiscal year 2012. (See fig. 4.) Problems encountered while THAAD was concurrently designing and producing assets increased costs by $40 million and slowed the delivery of both the first and second THAAD batteries. These batteries are not projected to be complete before July 2012—16 months after the original estimate of March 2011. While all assets except the interceptors were complete in 2010, the first operational interceptor for the first THAAD battery was not produced until the second quarter of fiscal year 2011. At the same time, MDA committed to purchasing more assets by signing a production contract for two additional THAAD batteries, despite incomplete testing and qualification of a safety device on the interceptor. During fiscal year 2011, after several production start-up issues, only 11 of the expected 50 operational interceptors were delivered. Consequently, the first battery of 24 interceptors was not complete and available for fielding until the first quarter of fiscal year 2012—more than 2 years later than originally planned. The same issues have delayed the second battery as well. Although the launchers and other components for the second battery were completed in 2010, the full 50 interceptors necessary for both batteries are not expected to be delivered until July 2012.

MDA has taken steps to incorporate some acquisition best practices in its newer programs, such as increasing competition and partnering with laboratories to build prototypes. However, the acquisition strategies for the SM-3 Block IIB, Aegis Ashore, and PTSS programs still include high or elevated levels of concurrency that set the programs up for increased acquisition risk, including cost growth, schedule delays, and performance shortfalls.

SM-3 Block IIB: The program has high levels of concurrency because it plans to commit to product development prior to holding a PDR, as depicted in figure 5. The need to meet the 2020 time frame announced by the President to field the SM-3 Block IIB for European PAA Phase IV is a key driver of the high levels of concurrency. The program is following some sound acquisition practices by awarding competitive contracts to multiple contractors to develop options for missile configurations and mature key technologies, as well as by planning to compete the product development contract.
However, while the program is holding a series of reviews that will provide engineering insight into the SM-3 Block IIB design, we have previously reported that before starting development, programs should hold key system engineering events, culminating in the PDR, to ensure that requirements are defined and feasible and that the proposed design can meet those requirements within cost, schedule, and other system constraints. In addition, based on the initial schedule developed by the program and the prior history of SM-3 interceptor development, the SM-3 Block IIB program will need to commit to building the first flight test vehicle prior to holding the PDR in order to remain on the planned test schedule. According to MDA, this approach is a low-risk development if the program is funded at requested levels. The agency stated that the achievement of an initial operating capability will be based on technical progress and execution of a “fly before buy” approach.

Aegis Ashore: The program initiated product development and established a cost, schedule, and performance baseline early; included high levels of concurrency in its construction and procurement plan; and has not aligned its flight testing schedule with construction and component procurement decisions. The need to meet the 2015 time frame announced by the President to field Aegis Ashore for European PAA Phase II is a key driver of the high levels of concurrency. The high levels of concurrency are depicted in figure 6. Aegis Ashore began product development and set the acquisition baseline before completing the PDR. This sequencing increased technical risks and the possibility of cost growth by committing to product development with less technical knowledge than recommended by acquisition best practices and without ensuring that requirements were defined, feasible, and achievable within cost and schedule constraints. The program has initiated procurement of components for the installation and plans to start fabricating two enclosures called deckhouses—one for operational use at the Romanian Aegis Ashore installation and one for testing at the Pacific Missile Range Facility—in fiscal year 2012, but it does not plan to conduct the first intercept test of an integrated Aegis Ashore installation until fiscal year 2014. Further, the program plans to build the operational deckhouse first, meaning that any design modification identified through system testing in the test deckhouse or through the intercept test will need to be made to an existing deckhouse and equipment. As we have previously reported, such modifications to an existing fabrication may be costly. According to the Director of MDA, Aegis Ashore is a land adaptation of the Aegis weapon system sharing identical components. However, we have previously reported on the modifications that must be made to existing Aegis BMD technology for it to operate in a new land environment. In addition, some of the planned components for Aegis Ashore are being developed for future Aegis weapon system upgrades and are still undergoing development. Aegis BMD program management officials stated that the risks of concurrency in the program schedule are low due to the program's reliance on existing technology and the ground testing that will be completed prior to the first intercept test. Nevertheless, the program has a limited ability to accommodate delays in construction or testing.
PTSS: MDA approved a new acquisition strategy for PTSS in January 2012 that acknowledges some concurrency, but program officials stated that they have taken steps to mitigate the acquisition risks and have worked to incorporate several aspects of acquisition best practices into the strategy. MDA plans to develop and acquire the satellites in three phases. First, a laboratory-led contractor team will build two lab development satellites. Second, an industry team, selected through open competition while the laboratory team is still in its development phase, will develop and produce two engineering and manufacturing development satellites. The two laboratory-built and the two industry-built development satellites are all planned to be operational. Third, there will be a follow-on decision for the industry team to produce additional satellites in a production phase. While the strategy incorporates several important aspects of sound acquisition practices, such as competition and short development time frames, elevated acquisition risks remain, tied to the concurrency between the lab- and industry-built developmental satellites, as shown in figure 7. Because the industry-built developmental satellites will be under contract and under construction before on-orbit testing of the lab-built satellites, the strategy may not enable decision makers to fully benefit from the knowledge about the design to be gained from that on-orbit testing before making major commitments. See the appendixes for more details on each program.

MDA has a long history of pursuing highly concurrent acquisitions in order to meet challenging deadlines set by the administration. Concurrency can enable rapid acquisition of critical capabilities, but at high risk, particularly if technologies are not well understood at the outset of a program, requirements are not firm, and decisions are made to keep moving a program forward without sufficient knowledge about issues such as design, performance, and producibility. In MDA's case, many of its highly concurrent acquisition programs began with critical unknowns. While the developmental problems that have been discovered in these acquisitions are inherent in complex and highly technical efforts, their effects were considerably magnified by the high levels of concurrency, leading to questions about the performance of fielded assets, significant disruptions to production, and expensive retrofits. While MDA has embraced the value of reducing unknowns before making key decisions in some of its newer programs, such as the SM-3 Block IIA, and has adopted good practices, such as awarding competitive contracts to multiple contractors in the SM-3 Block IIB program, it has continued to plan and implement highly concurrent approaches in others. In fact, today MDA is still operating at a fast pace, as production and fielding of assets remain, in many cases, ahead of the ability to test and validate them. As we recommended in 2009, these disruptions can be avoided only when the development, manufacture, and fielding schedules of BMDS assets are synchronized with the testing and validation schedules to ensure that items are not approved to be manufactured for fielding before their performance has been validated through testing.
Moreover, as we have concluded for several years, while concurrency was likely the only option for meeting the tight deadlines MDA has been directed to work under, having an initial capability in place should now allow the agency to construct acquisition approaches that are less risky from a cost, schedule, and performance perspective. Near-term steps MDA can take to reduce cost, schedule, and performance risks include demonstrating that the second GMD interceptor works as intended before resuming production and verifying that the SM-3 Block IB completes developmental flight tests before committing to additional production. Longer-term solutions require the Office of the Secretary of Defense to assess the level of concurrency that currently exists within MDA programs and where that concurrency can be reduced. Moreover, while missile defense capabilities play a vital role in the United States' national security and international relationships, decisions about deadlines for delivering capabilities need to be weighed against the costs and risks of highly concurrent approaches.

We recommend that the Secretary of Defense take the following seven actions to reduce concurrency and strengthen MDA's near- and long-term acquisition prospects.

To strengthen MDA's near-term acquisition prospects, we recommend that the Secretary of Defense: For the GMD program, direct MDA to 1) demonstrate that the new CE-II interceptor design works as intended through a successful intercept flight test in the operational environment—FTG-06b—prior to making the commitment to restart integration and production efforts and 2) take appropriate steps to mitigate the effect of delaying the CE-II production restart until a successful intercept occurs; specifically, MDA should give consideration to accelerating additional needed CE-I refurbishments. For the Aegis BMD program, direct MDA to 3) verify the SM-3 Block IB engagement capability through the planned three developmental flight tests before committing to additional production beyond that needed for developmental testing and 4) report to the Office of the Secretary of Defense and to Congress the root cause of the SM-3 Block IB developmental flight test failure, the path forward for future development, and the plans to bridge production from the SM-3 Block IA to the SM-3 Block IB before committing to additional purchases of the SM-3 Block IB. For the SM-3 Block IIB program, direct MDA to 5) ensure that the SM-3 Block IIB requirements are defined and feasible and that the proposed design can meet those requirements within cost, schedule, and other system constraints by delaying the commitment to product development until the program completes a successful preliminary design review.

To strengthen MDA's longer-term acquisition prospects, we recommend that the Secretary of Defense: 6) direct the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to review all MDA acquisitions for concurrency and determine whether the proper balance has been struck between the planned deployment dates and the concurrency risks taken to achieve those dates and 7) direct the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to review and report to the Secretary of Defense the extent to which the capability delivery dates announced by the President in 2009 are contributing to concurrency in missile defense acquisitions and to recommend schedule adjustments where significant benefits can be obtained by reducing concurrency.
DOD provided written comments on a draft of this report; these comments are reprinted in appendix II. DOD also provided technical comments, which were incorporated as appropriate. In responding to the draft, DOD concurred with six of our seven recommendations and commented on actions planned or in process in response. In some cases, these actions are responsive to immediate problems but do not appear to consistently address the implications of concurrency going forward.

DOD concurred with our recommendation for the GMD program to demonstrate that the new CE-II interceptor design works as intended through a successful intercept flight test in the operational environment—FTG-06b—prior to making the commitment to restart integration and production efforts. In response to this recommendation, DOD stated that the program plans to restart CE-II manufacturing upon successful completion of the FTG-06b flight test. This decision will reduce the risk of prematurely restarting CE-II production.

DOD also concurred with our recommendation for the Aegis BMD program to verify the SM-3 Block IB engagement capability through the planned three developmental flight tests before committing to additional production, stating that the final decision to purchase SM-3 Block IB missiles with DOD-wide procurement funding will be made after the next three planned flight tests. We remain concerned that MDA is planning to purchase 46 additional SM-3 Block IB missiles prematurely, using research, development, test, and evaluation funds in fiscal year 2012, before validating the performance of the missile and before determining the root cause of the test failures—risking disruption of the supply chain if testing reveals the need for design changes. We continue to believe that the program should not purchase any additional missiles, regardless of the type of funding used to purchase them, until the SM-3 Block IB's engagement capability has been verified through the three developmental flight tests currently planned for the program. We have modified the recommendation to focus on verifying the capability before committing to additional production beyond the missiles needed for developmental testing.

DOD concurred with our recommendation to direct the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to review all MDA acquisitions for concurrency and determine whether the proper balance has been struck between the planned deployment dates and the concurrency risks taken to achieve those dates. In its response, DOD stated that it will wait until fielding dates are established to undertake concurrency assessments and that, in the interim, it will ensure that knowledge is gained to support capability deliveries. However, we remain concerned that DOD continues to focus on gaining key acquisition knowledge much later than needed. DOD's approach is to understand the extent to which the design works as intended after committing to production—a high-risk strategy—rather than before committing to production. The assessment of concurrency should precede and should inform the setting of fielding dates. If the department waits until fielding dates are set to assess concurrency in the BMDS, it will miss the opportunity to reduce that concurrency and will have to accept the resulting performance, cost, and schedule consequences. Our position is not unique in this regard.
In recent testimony, the Acting Under Secretary of Defense for Acquisition, Technology and Logistics confirmed that excessive concurrency can drive cost growth and result in major schedule disruptions that produce further inefficiency. Noting that the acceptable degree of concurrency between development and production depends on a range of factors, including the risk associated with the development phase, the urgency of the need, and the likely impact on cost and schedule of realizing that risk, he stated that the Office of the Secretary of Defense intends to assess the levels of concurrency within programs, as our report recommends should be done for missile defense elements.

DOD also concurred with our recommendation to direct the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to review and report to the Secretary of Defense the extent to which the presidentially announced capability delivery dates are contributing to concurrency in missile defense acquisitions and to recommend schedule adjustments where significant benefits can be obtained by reducing concurrency. DOD stated that the current missile defense program is structured to develop and field capabilities at the earliest opportunity while taking into account prudent risk management practices and executing a thorough test and evaluation program. The department further noted that when fielding dates are established, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics will review and report to the Secretary of Defense the extent to which presidentially announced capability dates may be contributing to concurrency in missile defense acquisitions and will recommend schedule adjustments if significant benefits can be obtained by reducing concurrency. Given the amount of concurrency we have found in our reviews of the BMDS, we believe that significant benefits can be reaped if concurrency is assessed sooner rather than later.

DOD partially concurred with our recommendation to report to the Office of the Secretary of Defense and to Congress the root cause of the SM-3 Block IB developmental flight test failure, the path forward for future development, and the plans to bridge production from the SM-3 Block IA to the SM-3 Block IB before committing to additional purchases of the SM-3 Block IB. DOD commented that MDA will report the root cause of the SM-3 Block IB test failure and the path forward for future development to the Office of the Secretary of Defense and to Congress upon completion of the failure review in the third quarter of fiscal year 2012. However, DOD makes no reference to delaying additional purchases until the recommended actions are completed, instead stating that MDA is balancing the need to demonstrate technical achievement and ensure that the system is thoroughly tested before fielding against the need to keep the industrial base and supply chain healthy so that production transitions as quickly as possible. We believe that an appropriate balance between schedule and risk is necessary for development programs. However, our analysis has shown that MDA undertakes acquisition strategies of accelerated development and production that have led to disruptions in the supply chain and have increased the costs of developing some BMDS assets. We maintain our position that MDA should take the recommended actions before committing to additional purchases of the SM-3 Block IB.

We are sending copies of this report to the Secretary of Defense and to the Director of MDA.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII.

To assess the Missile Defense Agency's (MDA) cost, schedule, testing, and performance progress, we reviewed the accomplishments of eight Ballistic Missile Defense System (BMDS) elements that MDA is currently developing and fielding: the Aegis Ballistic Missile Defense (Aegis BMD) with Standard Missile-3 Block IA and Block IB; Aegis Ashore; Aegis BMD Standard Missile-3 Block IIA; Aegis BMD Standard Missile-3 Block IIB; Ground-based Midcourse Defense (GMD); Precision Tracking and Space System (PTSS); Targets and Countermeasures; and Terminal High Altitude Area Defense (THAAD). We developed data collection instruments (DCI) that were completed by the elements' program offices, and we reviewed the individual element responses. These instruments collected detailed information on schedule, cost and budget, contracts, testing and performance, and noteworthy progress during the fiscal year. We also examined the cost and resource, schedule, and test baselines as presented in the BMDS Accountability Report (BAR), Baseline and Program Execution Reviews, test schedules and reports, and production plans. The results of these reviews are presented in detail in the element appendixes of this report and are also integrated as appropriate in our findings. We also interviewed officials within program offices and within MDA functional directorates, such as the Directorates for Engineering and Testing. We discussed the elements' test programs and test results with the BMDS Operational Test Agency and the Department of Defense's Office of the Director, Operational Test and Evaluation.

To assess whether MDA elements delivered assets and achieved self-identified capability goals as planned in fiscal year 2011, we examined the 2011 BAR and compared it with the 2010 and 2009 versions, looking for similarities and differences among the three. We also reviewed MDA briefings to congressional staffers from March 2011 and responses to our DCIs, which detailed key accomplishments and asset deliveries for fiscal year 2011. To assess progress on MDA's development of models and simulations, we held discussions with officials at the Missile Defense Integration and Operations Center and the Operational Test Agency, and we reviewed budget documents and MDA's directive on modeling and simulation verification, validation, and accreditation.

Our work was performed at MDA headquarters in Fort Belvoir, Virginia, and in Dahlgren, Virginia; Alexandria, Virginia; Falls Church, Virginia; Annapolis, Maryland; Colorado Springs, Colorado; Arlington, Virginia; and at various program offices and contractor facilities located in Huntsville, Alabama, and Tucson, Arizona. In Fort Belvoir, we met with officials from the GMD program office and the Advanced Technology Directorate, who manage the Aegis BMD Standard Missile-3 Block IIB program. In Dahlgren, we met with officials from the Aegis BMD program office, the Aegis Ashore program office, and the Aegis BMD Standard Missile-3 Block IIA program office. In Alexandria, we met with the Director, Operational Test and Evaluation, and officials from the Institute for Defense Analyses.
In Falls Church, we met with officials from the PTSS program office. In Arlington, we met with the Director, Developmental Test and Evaluation; the Missile Defense Executive Board; officials in the Pentagon Office of Strategic Warfare; and the Cost Analysis and Program Evaluation group. In Annapolis, we met with officials from the Defense Spectrum Organization/Joint Spectrum Center. In Huntsville, we interviewed officials from the Airborne Infrared program office; the Terminal High Altitude Area Defense project office; the Targets and Countermeasures program office; and MDA's Acquisitions Directorate, Programs and Integration Directorate, Engineering Directorate, Test Directorate, Cost Directorate, and Advanced Technologies Directorate. We also met with Boeing officials in Huntsville to discuss the failure review investigation for the FTG-06a failure and their plan to resolve the resulting manufacturing stop. In addition, we met with officials from the Operational Test Agency in Huntsville to discuss MDA's performance assessment, as well as models and simulations. In Colorado Springs, we met with officials from U.S. Northern Command, the Joint Functional Component Command for Integrated Missile Defense, and the Missile Defense Integration and Operations Center. We met with Raytheon and Defense Contract Management Agency officials in Tucson to discuss, respectively, the manufacturing of the exoatmospheric kill vehicle and schedule issues for GMD.

We conducted this performance audit from April 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Models and simulations are critical to understanding how capable the Ballistic Missile Defense System (BMDS) is and how well it can function. The complex nature of the BMDS, with its wide range of connected elements, requires integrated system-level models and simulations to assess its performance. Assessing BMDS performance through flight tests alone is not feasible: it is prohibitively expensive and faces safety and test range limitations that can best be dealt with through sound, realistic models and simulations. Ensuring that models and simulations are sound and realistic requires a rigorous process to accomplish two main tasks: (1) developing individual element models and realistically linking those models and simulations together and (2) gathering data from the Missile Defense Agency's (MDA) ground and flight tests to feed into the models. The BMDS Operational Test Agency (OTA), an independent multi-service organization, then assesses how realistic the BMDS models are in order to accredit the models for use in simulating various levels of system performance. When a model is accredited, it means that it can be reliably trusted to produce high-confidence results for its intended use and that the limitations of the model are known. Since developing reliable MDA models depends upon the collection of test data upon which to anchor them, MDA's test program plays a crucial role in model development and BMDS performance assessments. MDA's models and simulations development effort is making progress in developing top-level planning documents, but the two key documents are not yet final.
Two MDA planning documents, the Integrated Master Assessment Plan and the Integrated Models and Simulations Master Plan, are being developed to better focus and link the testing and assessment efforts. According to OTA officials, the Integrated Master Assessment Plan is based on sound methodology, which should improve MDA’s models and simulations program, in part by elevating BMDS evaluation and assessment requirements as the key driver of test design. OTA officials noted that the Integrated Models and Simulations Master Plan should also lead to a greater emphasis on model development needs in driving the design of MDA’s test events. The task of developing and linking the element-level models and simulations together into an integrated BMDS model is extremely complex and difficult and will take years to accomplish. Last year, we reported that the overall performance of the BMDS could not be assessed because MDA models and simulations had not matured sufficiently and may not be fully mature until 2017. Since that time, there has been limited progress in resolving model issues that would provide more realistic representations of BMDS performance. In August 2009, U.S. Strategic Command and OTA jointly informed MDA of 39 system-level limitations in MDA’s models and simulations program that adversely affect their ability to assess BMDS performance. Resolving these limitations, OTA maintains, would permit MDA’s models and simulations to provide more realistic representations of BMDS performance using the full complement of fielded BMDS assets. MDA officials have noted that since August 2009, MDA has fully resolved or is in the process of resolving 7 of these issues and has identified technical solutions for 15 more. According to OTA officials, most of the limitations resolved are issues that are more easily addressed, such as installing improved communications systems and providing separate workstations for simulation controllers. No technical solutions have yet been identified for the remaining 17 issues, and OTA officials maintain that they are still awaiting an MDA timeline for the complete resolution of these remaining limitations. Among the remainder are some critical model deficiency issues, which result in modeled performance that does not reflect realistic operation and conditions. For instance, models for certain radars have artificial limitations constraining data processing, so that a simulation involving high debris levels would effectively shut down the model. Another model limitation is the need for accurate interceptor modeling for all BMDS weapon systems in system-level assessments, the absence of which prevents a determination of engagement success in such simulations. MDA has made some progress in developing a single, integrated model and simulation approach for the BMDS. Originally, MDA’s models were developed for use by each element and not for integrated assessments. Since fiscal year 2010, MDA has made progress in creating a common framework, whereby the various BMDS element-level hardware-in-the- loop (HWIL) models are subjected to a common and consistent scene and environment during test events. MDA is now using this framework, known as the Single Stimulation Framework, in assessing BMDS performance, and MDA officials maintain that progress achieved in developing it has facilitated MDA’s efforts to resolve some of the 39 limitations. 
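To make the closed-loop, hardware-in-the-loop concept behind the Single Stimulation Framework more concrete, the short sketch below illustrates the basic control structure in Python. It is purely notional: every class, name, and value is a hypothetical stand-in rather than a representation of MDA's actual software, and in a real HWIL laboratory the "elements" would be physical radar and fire-control hardware stimulated by a computer-generated scene, not software objects.

    """Minimal, illustrative closed-loop 'hardware-in-the-loop' skeleton.
    All names and values are hypothetical; in an actual HWIL lab the
    element objects below would be real hardware driven by injected,
    computer-generated stimuli."""
    import math
    from dataclasses import dataclass

    @dataclass
    class Scene:
        """Common simulated environment presented to every element."""
        t: float             # simulation time (seconds)
        target_range: float  # distance to simulated target (km)

    class RadarElement:
        """Software stand-in for radar hardware under stimulation."""
        def observe(self, scene: Scene) -> float:
            # Real hardware would return a measured track; we add toy noise.
            return scene.target_range + 0.1 * math.sin(scene.t)

    class FireControlElement:
        """Software stand-in for fire-control hardware."""
        def assess(self, track_km: float) -> str:
            return "ENGAGE" if track_km < 500.0 else "TRACK"

    def run_closed_loop(steps: int = 10, dt: float = 1.0) -> None:
        """Stimulate all elements with one consistent scene per time step,
        closing the loop by evolving the scene after each response."""
        radar, fire_control = RadarElement(), FireControlElement()
        rng_km = 900.0  # initial (hypothetical) target range
        for i in range(steps):
            scene = Scene(t=i * dt, target_range=rng_km)
            track = radar.observe(scene)           # stimulate element 1
            decision = fire_control.assess(track)  # stimulate element 2
            print(f"t={scene.t:4.1f}s  range={track:6.1f} km  -> {decision}")
            rng_km -= 60.0 * dt  # scene evolves; loop closes each step

    if __name__ == "__main__":
        run_closed_loop()

The point the sketch captures is that each element is driven by the same computer-controlled scene at every time step, and the scene evolves in response, allowing system-level behavior to be exercised repeatedly without a live flight test.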
MDA officials further highlight that the framework is being used to evaluate BMDS performance in increasingly complex and realistic scenarios, employing greater numbers of BMDS assets. (With HWIL models, closed-loop simulations are conducted with actual mission components/hardware in a laboratory environment, and the physical environment/conditions are simulated under the control of computer equipment.) MDA officials have noted a downward trend in simulation trouble or incident reports for both the Single Stimulation Framework and the digital model. MDA plans to integrate these two efforts into a single Objective Simulation Framework (OSF). OSF is planned as an end-to-end representation of the BMDS in support of testing, training, exercises, and system development. OSF is scheduled to go online in the second quarter of fiscal year 2014, with the current digital simulation architecture phased out by fiscal year 2016. According to OTA officials, the common BMDS-level test framework that OSF is intended to provide has multiple advantages, such as the provision of a single tool with which to conduct data verification cross-checks. Additionally, this tool could serve to fill gaps that currently exist in the hardware-based models.

MDA's difficulty in executing the test plan has limited the progress of modeling and simulations. The agency has refocused the design of its test program on the collection of test data to strengthen the development of the models. As we reported in 2010, MDA revised its testing approach in response to GAO and Department of Defense concerns and began to base test scenarios on identified modeling and simulation data needs. In order to collect the data required to fill certain model data gaps, MDA increased planned testing in certain areas, such as ground testing. However, according to OTA officials, MDA has had difficulty conducting its test plan, since actual test events are not always carried out in accordance with the schedule. We have also reported consistent problems in conducting tests over the past few years. Test schedule disruptions delay not only the MDA test schedule but also the models and simulations efforts that depend on the test data.

Despite MDA's increased efforts to collect test data for the BMDS model and simulation program, it will take considerable effort and time to fill all knowledge gaps. MDA has succeeded in collecting some 309 critical variables since 2009, but, by the end of fiscal year 2011, those represented only 15 percent of the required total identified by MDA. Under the current plan, MDA does not foresee complete collection of these data until sometime between 2017 and 2022. The limited availability of test data is a significant challenge MDA faces in developing accredited models. Flight test failures, an anomaly, and delays in fiscal year 2011 have reduced the amount of data MDA expected to have available to support the anchoring of its models and simulations. MDA officials also maintain that some required data are difficult to collect and are challenging to obtain even when a flight test is properly executed. When tests are carried out, considerable post-test data analysis is required for model development, MDA officials maintain. However, MDA officials indicated that MDA must often limit the scope of its analysis to discrete model development objectives.
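To put the scale of the remaining data-collection task in rough perspective, a back-of-the-envelope calculation can be made from the figures above, treating the reported 15 percent as approximately exact:

    309 variables / 0.15 ≈ 2,060 critical variables required in total
    2,060 - 309 ≈ 1,750 variables still to be collected

In other words, the first years of the reoriented test program yielded only a small fraction of the anchoring data, which is consistent with MDA's projection that collection will not be complete until sometime between 2017 and 2022.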
Because of the challenges in carrying out the full range of testing required to collect the anchoring data needed to develop models, MDA is concurrently exploring alternative methods for model development, such as greater use of subject matter experts. According to OTA officials, the subject matter experts focus MDA's efforts on the scenario factors that are most important for actual and likely BMDS operation, thereby reducing the amount of testing data required.

MDA has also made some limited progress in achieving partial accreditation for some BMDS models. MDA models may be partially accredited for some, but not all, intended functions due to limitations in the models or gaps in the data. Over the past few years, BMDS OTA officials have assessed MDA's models and simulations in an effort to fully understand the performance of the current BMDS configuration and have noted that, among the element-level BMDS models, those for Terminal High Altitude Area Defense (THAAD) and Aegis Ballistic Missile Defense (Aegis BMD) are the farthest along developmentally. In an April 2011 accreditation report, independent assessors from the Johns Hopkins University Applied Physics Laboratory found improvements in five of six functional areas for a key THAAD modeling tool, noting that the available data permitted accreditation for three areas. The report also noted progress with two key Aegis BMD models, each of which was assessed for limited accreditation in two of four BMDS target negation areas. MDA officials have also noted significant progress in the development of a key model for the Command, Control, Battle Management, and Communications element of the BMDS. MDA has made some progress toward accreditation of BMDS element models for specific functional areas, but MDA officials acknowledged that the agency has not yet achieved OTA accreditation in other key areas; for example, none of the 18 environmental models has been accredited.

While MDA has progressed in its use of simulated BMDS assessments, there are risks inherent in collecting information from unaccredited sources. Both of the BMDS modeling and simulation frameworks currently rely on unaccredited models, despite the improvements that MDA has noted in the results of such assessments. OTA officials expressed lowered confidence in the data collected from such simulated assessments. The reliance on unaccredited models could result in poorly crafted tactics, techniques, and procedures and in the production and fielding of a system that is not able to counter real-world threats. As the BMDS matures and the number of fielded assets increases, modeling and simulation capabilities and laboratory representations of BMDS assets must keep pace to maintain operational realism.

Aegis BMD made several significant accomplishments in fiscal year 2011. The Aegis BMD 4.0.1/SM-3 Block IB program successfully conducted simulated flight test FTM-16 E1 in March 2011, delivered the SM-3 Block IB pathfinder round needed to hold FTM-16 E2, and gained sufficient data in FTM-16 E2 in September 2011 to support certification of the Aegis BMD 4.0.1 weapon system, planned for the second quarter of fiscal year 2012. As for the Block IA interceptor, DOD fielded the Aegis BMD 3.6.1/SM-3 Block IA-equipped ship, U.S.S. Monterey, for Phase I of the European PAA in April 2011, meeting the 2011 time frame for deployment. During the fiscal year, MDA also installed one Aegis BMD 3.6.1 weapon system on a ship.
In addition, the Aegis BMD program conducted a successful flight test of the Aegis BMD SM-3 Block IA, referred to as FTM-15, despite experiencing an anomaly during the test. The Aegis BMD SM-3 Block IA was also used in a Japanese flight test—JFTM-4—in which two U.S. Aegis BMD ships cooperated to detect, track, and conduct a simulated intercept engagement against the same target. Overall, the Aegis BMD 3.6.1/SM-3 Block IA program has had eight out of nine successful flight tests. In addition, Japanese Aegis BMD has conducted three out of four successful intercepts using SM-3 Block IA interceptors. The Aegis BMD 3.6.1 weapon system was the first MDA element to be assessed as operationally effective and suitable for combat by independent test officials, albeit with limitations.

Problems with concurrency are affecting the production of SM-3 Block IB interceptors and delaying the phaseout of SM-3 Block IA production. The acquisition plan for the SM-3 Block IB interceptor includes high levels of concurrency between development and production—buying weapon systems before they have demonstrated, through testing, that they perform as required. Specifically, the program purchased interceptors before confirming through developmental tests that the design works as intended and before ensuring that a key subcomponent had overcome prior developmental problems. The need to field the Aegis BMD 4.0.1/SM-3 Block IB by the 2015 time frame for European PAA Phase II announced by the President is a key driver of the high levels of concurrency. According to MDA, the program is purchasing interceptors for a variety of reasons, including supporting developmental and operational testing, proving out the manufacturing process, and ensuring that the missile will meet its performance requirements on a repeatable basis. See figure 8 for a depiction of the SM-3 Block IB's concurrent schedule.

The SM-3 Block IB's acquisition plan includes high levels of concurrency. We reported in February 2010 that planned interceptor production would precede knowledge of interceptor performance, and we recommended that MDA delay a decision to produce interceptors until after successful completion of developmental testing, a flight test, and a manufacturing readiness review. In March 2010, we reported that the Aegis BMD program was putting the SM-3 Block IB at risk of cost growth and schedule delays by planning to begin manufacturing in 2010 before its critical technologies had been demonstrated in a realistic environment. We also reported in December 2010 that the SM-3 Block IB test schedule was not synchronized with planned production and financial commitments. Finally, in March 2011, we reported that the schedule had become even more compressed due to the redesign and requalification of a missile component and that, in response, MDA had deferred key program milestones so that it would have better informed production decisions.

The program began production of SM-3 Block IB interceptors before resolving development issues with the throttleable divert and attitude control system (TDACS), a key interceptor component that maneuvers the kill vehicle during the later stages of flight. The TDACS failed qualification testing in early 2010 and required a redesigned propellant moisture protection system. In order to hold the first SM-3 Block IB developmental flight test, FTM-16 Event 2, in September 2011 as scheduled, MDA only partially completed TDACS qualification testing, and the version used in the failed flight test was not identical to the approved production design.
The TDACS is expected to complete qualification testing in 2012; however, any additional issues discovered during qualification testing or developmental flight testing may require additional redesigns. The commitment to produce SM-3 Block IB interceptors beyond those needed for developmental testing was made before the program had a sufficient level of knowledge about the missile's technology maturity and performance. MDA has determined that 18 of the 25 SM-3 Block IB missiles ordered are to be used for developmental testing. The remaining 7 interceptors are currently unassigned for tests and may be available for operational use. According to MDA, these interceptors will be used to support developmental and operational testing; to prove out the manufacturing processes; to provide information about reliability, maintainability, and supportability; to verify and refine cost estimates; and to ensure that the missile meets performance requirements. MDA officials acknowledged that missiles not consumed by testing could be used operationally. Program management officials stated that the unassigned missiles represent a very small portion—less than 5 percent—of the total 472 interceptors that the program plans to purchase through fiscal year 2020. MDA decided that the risk was low given that many of the SM-3 Block IB critical technologies were based on ones tested and used successfully by the SM-3 Block IA. MDA is also planning to purchase 46 additional SM-3 Block IB missiles in fiscal year 2012. However, two ongoing failure investigations affecting SM-3 Block IB production could delay three planned developmental flight tests that need to occur to validate SM-3 Block IB capability. It therefore remains unclear whether the additional 46 missiles will be ordered before the failure reviews are complete and the interceptor is able to demonstrate that it works as intended through these flight tests.

The program's highly concurrent schedule is shaped primarily by the need to achieve initial capability for the fielding of Phase II of the European PAA by the 2015 time frame announced by the President. In addition, the program must be ready to participate in the second BMDS operational test in 2015. Program officials report that they are on track to achieve these time frames. However, until development is complete, any additional issues could lead to further cost growth or schedule delays.

The SM-3 Block IB failed its first developmental flight test, FTM-16 E2, leading to cost growth and schedule delays compounded by the disruption to ongoing production, the full extent of which has yet to be determined. During the flight test, the SM-3 Block IB experienced an unexpected energetic event in the third-stage rocket motor and failed to intercept a short-range ballistic missile target. Following the flight test, the program convened a failure review board to determine the root cause of the failure, modified the missile production contract, restructured the flight test program, and delayed key production decisions. While the failure review board is still investigating the flight test, MDA slowed production of SM-3 Block IB interceptors. The program had planned to deliver three additional SM-3 Block IB missiles for flight testing in fiscal year 2011; however, delivery of those three has been delayed until spring 2012.
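As a minimal arithmetic sketch, using only the figures reported above, the unassigned interceptors can be expressed as a share of the planned buy; the result is consistent with the officials' characterization of less than 5 percent:

```python
# Quantities cited above: 25 SM-3 Block IB missiles ordered, 18 allocated
# to developmental testing, and a planned buy of 472 through FY 2020.
ordered = 25
developmental_test = 18
planned_total_through_fy2020 = 472

unassigned = ordered - developmental_test          # 7 interceptors
share = unassigned / planned_total_through_fy2020  # fraction of the planned buy

print(f"Unassigned interceptors: {unassigned}")
print(f"Share of planned buy: {share:.1%}")        # ~1.5%, well under 5 percent
```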
Program officials estimate that the flight test failure—including the failure investigation, design modifications, testing, and requalification for return to flight—may cost approximately $187 million in fiscal year 2012. In addition, because officials are still investigating the cause of the flight test failure and how many already-produced missiles may have to be retrofitted, they do not yet know how much the retrofits, if required, will cost. At this point, the program does not have an approved plan to avoid an SM-3 production gap. The flight test failure also had several other consequences. The SM-3 Block IB manufacturing readiness review has been delayed from the second quarter of fiscal year 2011 to the third quarter of fiscal year 2012, and the procurement production decision for additional SM-3 Block IB missiles was moved from the fourth quarter of fiscal year 2011 to the fourth quarter of fiscal year 2013. The failed flight test will be reconducted in mid-2012, which may delay additional developmental flight testing.

Aegis BMD's transition to the SM-3 Block IB has been repeatedly disrupted because the transition was risky given the technology maturity of components developed for the SM-3 Block IB and the program's concurrent schedule. Originally, MDA planned that production of SM-3 Block IA interceptors would end in fiscal year 2009 as production of SM-3 Block IB interceptors began. However, due to developmental issues with the SM-3 Block IB, MDA twice had to extend SM-3 Block IA production—in 2010 and 2011—to cover emerging production gaps with the SM-3 Block IB. To date, MDA has contracted for 41 more SM-3 Block IA missiles than originally planned in order to bridge the production gaps. Now, following the September 2011 flight test failure, MDA is facing another production gap. It is extending production once again—it purchased 23 SM-3 Block IA missiles in fiscal year 2011 and is considering whether to purchase additional SM-3 Block IA missiles in fiscal year 2012.

In addition, the program has twice had to adjust the procurement of SM-3 Block IB missiles. Instead of purchasing 24 SM-3 Block IB missiles as planned in 2010, it purchased 18, and it did not procure 8 SM-3 Block IB missiles in 2011 as planned. To free up funding needed to improve TDACS operational suitability, MDA reduced the planned SM-3 Block IB missiles from 34 to 25 in fiscal year 2011. Thus far, the program has purchased 41 fewer missiles than previously planned. Due to the FTM-16 E2 developmental flight test failure, delivery of these SM-3 Block IB missiles is now being slowed until the failure review board completes its investigation and any possible retrofits are made. Despite the test failure and delivery hold, MDA is considering purchasing 46 SM-3 Block IB interceptors in fiscal year 2012 and 29 SM-3 Block IB interceptors in fiscal year 2013. Recognizing the critical importance of completing the planned fiscal year 2012 intercept tests, the operational need for SM-3 missiles, the relative success of the SM-3 Block IA, as well as the potential for a production break, the Senate Committee on Appropriations directed MDA to use the fiscal year 2012 SM-3 Block IB funds for additional Block IA missiles should the test and acquisition schedule require any adjustments during fiscal year 2012.

As a result of an anomaly in the latest SM-3 Block IA flight test—FTM-15 in April 2011—MDA halted acceptance of SM-3 Block IA deliveries.
During the April 2011 flight test, MDA demonstrated the Aegis BMD 3.6.1 weapon system's ability to launch the SM-3 Block IA interceptor using data from a remote sensor against a separating intermediate-range ballistic missile target and the capability of the interceptor to engage threat missiles in the range expected for Phase I of the European PAA. However, although the SM-3 Block IA interceptor intercepted the target, it experienced an anomaly. The anomaly occurred in a component also used in the SM-3 Block IB. At the time of our review, the program had not completed its investigation into the cause of the anomaly or decided how it will address the issue. The program convened a failure review board, which has not yet completed its investigation of the root cause of the anomaly. Twelve assembled SM-3 Block IA missiles are not being accepted for delivery and are being held at the production factory until the investigation of the anomaly is complete and any possible refurbishments are made. This represents about 10 percent of the population of SM-3 Block IA missiles. Program management officials report that thus far seven missiles will need to be refurbished. Because the failure review board has not yet completed its investigation, an unknown quantity of additional SM-3 Block IA missiles may need to be refurbished due to the anomaly. At the time of our review, the program did not have an approved plan for how it will refurbish the affected missiles. Despite these issues, MDA purchased 23 SM-3 Block IA missiles in September 2011 and is considering whether to purchase additional missiles in 2012 to avoid production gaps and to keep SM-3 suppliers active.

The SM-3 Block IIA is the third SM-3 version to be developed for use with the sea-based and future land-based Aegis BMD. This interceptor is planned to have increased velocity and range compared to earlier SM-3s due to a larger 21-inch diameter, more sensitive seeker technology, and an advanced kinetic warhead. Most of the SM-3 Block IIA components will differ from the versions used in the SM-3 Block IB, so technology has to be developed for the majority of the SM-3 Block IIA components. The SM-3 Block IIA is expected to defend against short-, medium-, and intermediate-range ballistic missiles. Initiated in 2006 as a cooperative development program with Japan, the SM-3 Block IIA program was added to the European Phased Adaptive Approach (PAA) in 2009. As part of European PAA Phase III, the SM-3 Block IIA is planned to be fielded with Aegis Weapons System 5.1 by the 2018 time frame and is expected to provide engage-on-remote capability, in which data from off-board sensors is used to engage a target, and to expand the range available to intercept a ballistic missile. The program is managing both the development of the SM-3 Block IIA and its integration with Aegis Weapons System 5.1, which also is still under development. In this appendix, we evaluate only the SM-3 Block IIA.

The program planned to hold its system preliminary design review (PDR)—at which it would demonstrate that the technologies and resources available for the SM-3 Block IIA would result in a product that matched its requirements—but problems with the reviews of key components meant the system review had to be adjusted by 1 year. To prepare for the system review, the program held 60 subsystem reviews for its components to ensure that they were feasible given the technology and resources available.
Two components—divert and attitude control system (DACS) and DACS propellant—failed their subsystem reviews, and two components—nosecone and third stage rocket motor (TSRM)—had their reviews suspended, indicating that the technological capability of these critical components and SM-3 Block IIA requirements were mismatched. The program took steps to resolve each of the four subsystem review problems, including restructuring the program to reduce future acquisition risk. The DACS, used to adjust the course of the kinetic warhead, failed its subsystem review because it was not meeting weight and divert acceleration requirements, which the program resolved by reviewing and rebalancing subsystem requirements. The system-level DACS requirements did not change. The DACS propellant that failed the subsystem review was susceptible to a moisture problem, and the program selected a different propellant. The nosecone, which encloses the kinetic warhead, was overweight and could become more so, and the mitigation plan for the weight issue was insufficient. To resolve these issues, the program evaluated weight reduction opportunities and risks. The TSRM, used to lift the missile out of the atmosphere and direct the kinetic warhead to the target, was also not meeting weight requirements, and one of its components, the attitude control system, was not meeting thrust accuracy and alignment requirements. To resolve this issue, the program rebalanced subsystem requirements, but did not change system-level TSRM requirements.

The subsystem review issues also required program schedule changes, which included the following:
- Adjusting the system PDR from January 2011 to March 2012.
- Splitting in two the critical design review (CDR), at which the program will determine that the product's design matches the SM-3 Block IIA requirements and cost, schedule, and reliability goals. This led to schedule adjustments of 13 and 19 months, respectively, for each of the CDRs.
- Adjusting the interceptor flight test schedule. The program previously planned to hold its first intercept tests in fiscal years 2014 and 2015 as part of the co-development with Japan, but with the schedule adjustment, it will now have these tests in calendar year 2016.

The United States and Japan finalized the development program restructuring on September 30, 2011. Despite this adjustment, the interceptor remains aligned with European PAA Phase III in the 2018 time frame. Aegis BMD program management officials stated that the subsystem PDR problems and subsequent program restructure may increase current program costs, but they are not certain how much because the completion contract, which will run through fiscal year 2017, was still being negotiated as of December 2011.

The SM-3 Block IIA program took actions in 2011 that could reduce acquisition risk and mitigate future cost growth. Its previous schedule was compressed, which raised acquisition risk. For example, there was limited recovery time to investigate and resolve potential problems between program reviews as well as flight tests. The new schedule, made final in September 2011, relieves some compression concerns and adjusts to the subsystem review issues by adding time between the subsystem reviews and the system review to ensure that the technology issues are resolved.
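The month counts behind these schedule adjustments can be checked directly from the dates given in the text; the helper below is illustrative arithmetic only, not program data:

```python
# Check the SM-3 Block IIA system PDR slip using the dates cited above
# (January 2011 to March 2012).
def months_between(y0: int, m0: int, y1: int, m1: int) -> int:
    """Whole months from (y0, m0) to (y1, m1)."""
    return (y1 - y0) * 12 + (m1 - m0)

pdr_slip = months_between(2011, 1, 2012, 3)
print(f"System PDR slip: {pdr_slip} months")  # 14 months, consistent with the
                                              # roughly 1-year adjustment noted
```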
We have previously reported that reconciling gaps between requirements and resources before product development begins makes it more likely that a program will meet cost, schedule, and performance targets, and that programs that commit to product development with less technical knowledge and without ensuring that requirements are defined, feasible, and achievable within cost, schedule, and other system constraints face increased technical risks and the possibility of cost growth. The new SM-3 Block IIA schedule allows the program to have more knowledge before committing to product development in the second quarter of fiscal year 2014, a strategy that may reduce future cost growth and development risks. The new schedule also adds flexibility in the test schedule by adding an option for a third controlled test vehicle flight if needed. If the first two test vehicles prove to be successful and a third is not needed, this test can be converted into the first intercept test of the SM-3 Block IIA.

In addition to the schedule change, in fiscal year 2011 the program identified some steps to avoid the difficulties that affected SM-3 Block IB component production. For example, it found that using proven materials, standardizing inspections with vendors, and ensuring that designs included reasonable tolerances were practices to follow based on lessons learned from the SM-3 Block IB experience. Finally, the SM-3 Block IIA program identified alternatives to one advanced seeker component that, based on the experience of the SM-3 Block IB, it had flagged as potentially increasing production unit costs by 5 percent. Program management officials stated that they identified a viable alternative for this component and worked with the SM-3 Block IIB program to further develop manufacturing improvements for this technology.

The program still faces significant technology development challenges. While the SM-3 Block IIA is a variant of the SM-3 missile, the majority of its components will change from their SM-3 Block IB configuration. The program must develop these components, some of which have consistently been technologically challenging for SM-3 development. In addition, two technology maturity challenges have emerged. Two critical technologies, the second and third stage rocket motors, experienced problems during testing that may require redesign and a potential CDR rescheduling. The program was investigating the problems and potential effects at the end of fiscal year 2011. In addition, following the subsystem review failure and selection of an alternate propellant, analysis of the DACS propellant performance showed that there may be a shortfall in divert performance for some missions. As of the end of fiscal year 2011, the program was still determining the extent of this issue.

The SM-3 Block IIB is a planned interceptor for the Aegis BMD program that is intended to contribute to U.S. homeland defense by providing early intercept capabilities against some intercontinental ballistic missiles and regional defense against medium- and intermediate-range ballistic missiles. This interceptor has been described by the Missile Defense Agency (MDA) as critical to the Ballistic Missile Defense System (BMDS) and to developing solutions to future BMDS capability shortfalls. The SM-3 Block IIB program began in June 2010 and entered the technology development phase in July 2011.
Given its early stage of development, the SM-3 Block IIB does not have cost, schedule, or performance baselines and is not managed within the Aegis BMD program office. Instead, this program has a tentative schedule and is being managed within MDA's Advanced Technology office until a planned 2013 transition to the Aegis BMD program office. The SM-3 Block IIB is planned to be fielded by the 2020 time frame as part of the European Phased Adaptive Approach Phase IV. The program received a significant funding reduction in the fiscal year 2012 budget and, as of January 2012, was determining how to adjust its tentative schedule and future program plans. The program's fiscal year 2012 budget request was reduced by $110 million to $13 million.

The SM-3 Block IIB program is following a two-pronged development strategy. First, program officials have awarded competitive contracts to generate options for missile configurations and development plans. Second, in a separate effort, they are using multiple contractors to reduce risks by developing technologies that may be used in the SM-3 Block IIB and other SM-3 variants. The program awarded three concept definition and program planning contracts to define and assess viable missile configurations, conduct trade studies, and define a development plan. The contractors will develop alternative missile concepts, technologies, and schedules for interceptor development beyond 2013. According to the program, the purpose of this competition is to minimize cost, schedule, and technical risks. There will be another competition to select one contractor for the product development phase in 2013. We have reported previously that competition among contractors can result in increased technological innovation that leads to better and more reliable products.

Program management officials have issued a tentative schedule beyond the technology development phase, but this plan, if implemented, includes high levels of concurrency and acquisition risk. We have previously reported the following:
- Concurrency leads to major problems being discovered in production, when it is either too late or very costly to correct them.
- Before starting product development, programs should hold key engineering reviews, culminating in the preliminary design review (PDR), to ensure that the proposed design can meet defined, feasible requirements within cost, schedule, and other system constraints.
- Committing to production and fielding before development is complete is a high-risk strategy that often results in unexpected cost increases, schedule delays, test problems, and performance shortfalls.
- Successful defense programs ensure that their acquisitions begin with realistic plans and baselines before the start of their development.

According to the tentative SM-3 Block IIB schedule, the product development decision will occur before the March 2015 PDR. As a result, MDA is planning to commit to developing a product with less technical knowledge than our prior work has shown is needed and without fully ensuring that requirements are defined, feasible, and achievable within cost, schedule, and other system constraints. This sequencing increases both technical risks and the possibility of cost growth. In addition, the program will not have a stable design when it must commit to building flight test vehicles.
According to acquisition best practices, a design is considered stable when the technologies are mature and the critical design review (CDR) confirms that at least 90 percent of the drawings are releasable for manufacturing. Based on the experience of other SM-3 interceptors, the program must commit to produce flight test interceptors 2 years before the March 2016 first flight. However, this timeline means the commitment to a flight test vehicle would occur a year before the SM-3 Block IIB PDR has confirmed that the design is feasible and more than a year and a half before CDR has confirmed that the design is stable. See figure 9 for a depiction of the tentative SM-3 Block IIB schedule.

Program management officials stated that they have taken steps in the tentative schedule that reduce acquisition risk. According to SM-3 Block IIB program information, the tentative schedule is based on the experience of programs with similar magnitude and complexity, and the concept definition and program planning contractors will develop detailed product development schedules that will help refine the program schedule. Further, activities during the technology development phase, such as evaluating the performance of multiple contractor concepts, simulations conducted by the contractors, and affordability assessments, are designed to reduce risk in SM-3 Block IIB development. In addition, while the program plans to hold its production development decision prior to the PDR, it will hold a series of reviews with the concept definition contractors to receive engineering insight into each contractor's plans. Program management officials told us they also plan to hold a government-only system requirements review prior to initiating the product development contract competition. This review is planned to confirm that SM-3 Block IIB has specific technical requirements that the developer can use to establish a product baseline, as well as to conduct a risk and technology readiness assessment.

Another key step for successful programs is ensuring that only mature technologies are brought into product development. MDA has identified technologies that are important for SM-3 variants and is investing in these technologies, particularly the less mature technologies, to facilitate SM-3 Block IIB development. However, as of October 2011, the program had not named specific critical technologies for the SM-3 Block IIB. Program officials stated that they do not plan to do so until the product development decision. The concept definition contractors are required to identify technology investments to increase the maturity of the technologies by demonstrating them in a relevant environment by the end of fiscal year 2013, which coincides with the product development decision. MDA, however, does not require that a program mature technologies to this level by this decision. Without knowing the specific critical technologies, it is not possible to identify the risk of including them in the product development phase. As we have previously reported, including immature technologies in product development can lead to delays and contribute to cost increases.

While the program has proposed that $1.673 billion in research and development funding is needed from fiscal years 2012 through 2016, a full program acquisition cost has not yet been developed. Given the early stage of the program, and that key decisions about requirements and the missile configuration have not been made, a full acquisition cost estimate is not currently feasible.
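A minimal sketch of the funding arithmetic implied by the figures reported in this appendix (the budget reduction cited earlier and the proposed research and development total):

```python
# SM-3 Block IIB funding arithmetic from the figures cited above
# (dollars in millions).
reduction = 110
remaining_fy2012 = 13
implied_fy2012_request = remaining_fy2012 + reduction  # $123M originally requested

proposed_rdte = 1_673                 # $1.673 billion, FY 2012 through FY 2016
average_per_year = proposed_rdte / 5  # five fiscal years

print(f"Implied original FY 2012 request: ${implied_fy2012_request}M")
print(f"Average annual R&D funding, FY 2012-2016: ${average_per_year:.0f}M")  # ~$335M
```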
According to MDA, the program plans to complete a detailed cost estimate prior to entering product development. A cost estimate cannot be developed until key acquisition decisions are made. Program management officials stated that warfighter and system requirements for the SM-3 Block IIB have not been set, and discussions about the delivery schedule beyond the initial capability are ongoing. Further, whether the propellant will be liquid or solid, the SM-3 Block IIB's diameter, and whether modifications must be made to a vertical launch system are not yet known given the early stage of the program. In addition, as there is not yet a final schedule, the currently proposed funding is not informed by a complete post-product development decision schedule. Program management officials note that these key decisions are being informed by activities occurring during the technology development phase, such as trade studies involving the propulsion and missile diameter, and that they are updating current cost estimates as they receive information from contractors as well as working on developing detailed cost estimates.

MDA determined that a key goal for the SM-3 Block IIB is to provide an early intercept capability. However, a recent Defense Science Board study suggested that other capabilities are more important than early intercept. The study concluded that early intercept capability is not useful for regional missile defense. Further, while early intercept with shoot-look-shoot capability could be part of a cost-effective defense of the U.S. homeland if a sufficiently fast missile were available, the size of the battlespace, not early intercept capability, is the key driver of cost-effectiveness. In addition, it is unclear if early intercept is possible for defense of the U.S. homeland due to the velocity required for an early intercept of an intercontinental ballistic missile aimed at the United States and the state of current missile technology. Finally, the value of a shoot-look-shoot capability relies on a robust ability to determine if the first missile was successful, often called kill assessment, but this ability has not been established. In response, MDA stated that the Defense Science Board study had used a limited definition of early intercept and ignored significant benefits of the strategy that stem from decreasing the time available to the adversary to deploy countermeasures. Such benefits include providing a longer viewing time of deployment maneuvers for forward-based sensors, reducing the flight time of the interceptor, and increasing the complexity to the attacker of deploying countermeasures.

The program office did not conduct a formal analysis of alternatives to compare the operational effectiveness, cost, and risks of a number of alternative potential solutions to address valid needs and shortfalls in operational capability prior to embarking on the technology development phase. The program did assess some missile concepts for early intercept capability in a review that was not a formal analysis of alternatives. The program currently plans to conduct engineering and trade studies—including cost trades—that will be completed in the fourth quarter of fiscal year 2012 and to review additional alternative concepts as part of the concept definition process. While MDA programs are not required to conduct an analysis of alternatives, we have previously reported that it is key to planning and establishing a sound business case.
Specifically, an analysis of alternatives provides a foundation for developing and refining the operational requirements for a weapons system program and provides insight into the technical feasibility and costs of alternatives. Further, without a full exploration of alternatives, the program may not achieve an optimal concept that satisfies the warfighter's needs within available resource constraints. Without this sound basis for program initiation, the SM-3 Block IIB is at risk for cost and schedule growth as well as not meeting the warfighter's needs with the resources available.

Given the commitment to field Aegis Ashore by the 2015 time frame, the program's schedule contains a high level of concurrency—buying weapon systems before they demonstrate, through testing, that they perform as required—between development and production. The program began product development early, included high levels of concurrency in its construction and procurement plan, and has not aligned its testing schedule with component procurement and construction. As we have reported previously, an acquisition strategy for accelerated fielding, such as that of Aegis Ashore, will likely accept higher risk primarily through concurrent development and production. Under such a strategy, major problems are more likely to be discovered in production, when it is either too late or very costly to correct them.

The program began product development and established the Aegis Ashore cost, schedule, and performance baseline in June 2010, which was 14 months before completing its preliminary design review. This concurrent sequencing can increase technical risks and the possibility of cost growth by committing to product development with less technical knowledge than needed by acquisition best practices and without ensuring that requirements are defined, feasible, and achievable within cost and schedule constraints.

In addition, the program has a concurrent schedule for constructing deckhouses and procuring Aegis Ashore components. Since committing to product development and establishing the product development baseline, the acquisition strategy for deckhouse construction has been revised twice. The current plan, called the dual deckhouse plan, is to construct two deckhouses—an operational deckhouse planned for installation in Romania and a test deckhouse for developmental testing in Hawaii. The test deckhouse will begin construction a quarter later than the operational deckhouse and will be installed for testing at the Pacific Missile Range Facility in Hawaii. Aegis BMD program management officials stated that a third deckhouse, for the Aegis Ashore installation in Poland, will be constructed at a later date to be set based on funding availability. The program also has initiated procurement of equipment, such as the vertical launch system (VLS) and SPY-1 radar, that is needed for the Aegis Ashore installations. This plan means that knowledge gained from testing the Hawaiian installation cannot be used to guide the construction of the Romanian deckhouse or the procurement of components for operational use. Any design changes that arise from testing in Hawaii will have to occur on a complete deckhouse and on already procured components intended for operational use. As we have previously reported, rework on an existing fabrication is costly.
Aegis Ashore is currently scheduled to participate in four flight tests, three of which are intercepts, with the first intercept flight test scheduled for the second half of fiscal year 2014, at which point two of the three deckhouses will be completed and Aegis Ashore site construction and interceptor production will be well under way. The final flight test is planned for the fourth quarter of fiscal year 2015. See figure 10 for a depiction of Aegis Ashore’s concurrent schedule. However, Aegis BMD program management officials state that Aegis Ashore has taken steps to lower the acquisition risks. First, the officials note that the program is using components already in use aboard Aegis BMD ships, reducing the technical risk of the program. The Director of MDA has stated that the sea-based system and Aegis Ashore will share identical components. According to program documentation, the dual deckhouse plan reduces risk and creates fabrication and construction efficiencies. Aegis BMD program management officials noted that the dual deckhouse plan has significant advantages over prior plans, all of which had the operational deckhouse built before the test deckhouse. For example, they noted that prior Aegis Ashore deckhouse construction plans required testing a different deckhouse design in Hawaii than the one that would be used at the operational sites. Constructing two deckhouses concurrently provides for greater efficiency in purchasing material and equipment and allows for one contractor to build both deckhouses. The Director of MDA stated that the deckhouse construction methodology is the most cost effective and efficient under the program’s time constraints. In addition, the program expects to be able to modify the operational deckhouse prior to its installation in Romania if flight tests reveal that a modification is needed. The program management officials also stated that the dual deckhouse plan provides more time for testing the equipment that goes in the deckhouse. Aegis BMD program management officials stated that this plan allows them to test the electrical system in the Romanian deckhouse and to complete these tests more than 1 year earlier than previously scheduled. Finally, they noted that constructing two deckhouses also facilitates testing, including conducting Aegis Light Off events that consist of preflight test verification of the integration of Aegis Ashore components. Aegis BMD program management officials told us that the schedule does contain more risk before the first controlled test vehicle flight test, which is the first time all of the Aegis Ashore components will be integrated, and less risk between that test and the fielding in Romania. They stated that they decided to increase the risk at the start of the schedule in order to meet the presidentially announced date of 2015 for the first Aegis Ashore installation. While Aegis BMD program management officials are confident that the risks of a concurrent schedule are low given the nature of the Aegis Ashore program, the short time frame for integrating and fielding Aegis Ashore could magnify the effects of any problems that may arise. Program documentation states that there is limited to no margin in the schedule to deal with possible delays in fabrication or system testing, and as this effort is the first time a land-based deckhouse has been constructed, there is no prior experience on which to draw to alleviate any schedule delays. 
While Aegis Ashore will use components already developed and used operationally in the sea-based Aegis BMD, key components—the VLS and radar—will be modified for use on land. In addition, the multimission signal processor, a key component for both the sea-based and land-based systems that processes radar inputs from ballistic and cruise missile targets, is still under development and behind schedule. The first time all of the Aegis Ashore components are expected to be integrated and flight tested will be in fiscal year 2014. Given the concurrent schedule for the program, any difficulties with the modified components or partly developed components may affect the overall schedule, potentially leading to cost growth or an installation not meeting expectations because a needed modification was discovered too late.

The Aegis Ashore installations will include a VLS currently used on Aegis BMD ships, but it is planned to be located at a greater distance from the deckhouse. The communications system between the deckhouse and the VLS will require modification because of this increased distance. In addition, the VLS is planned to be surrounded by an environmental enclosure at Aegis Ashore installations. Aegis BMD program management officials stated that this enclosure will include the heating and cooling system and provide power to the launcher. Testing of this modification is planned for fiscal year 2014.

Aegis Ashore's SPY-1 radar likely will face challenges related to the radio-frequency spectrum, which is used to provide an array of wireless communications services, such as mobile voice and data services, radio and television broadcasting, radar, and satellite-based services. The radar might need to be modified if the performance of wireless devices in Romania is degraded by the SPY-1. Furthermore, Romania's future use of the radio-frequency spectrum is unknown but could allow more domestic wireless communications services to operate in or near the radar's operating frequency. Consequently, the Aegis Ashore site may need modifications to resolve this potential issue, or alternatively, Romanian wireless broadband devices may need to be modified. The Defense Spectrum Organization, the DOD organization that provides information and assistance on radio-frequency analysis, planning, and support, performed an initial analysis of spectrum use in Romania and recommended that MDA conduct additional study of Romanian radio-frequency spectrum use. Aegis BMD management officials told us that they recognize the risks associated with operating the SPY-1 radar on land and that MDA plans additional study in fiscal year 2012 to better understand Romanian spectrum use and the potential effect of the SPY-1 radar on land, including study of existing land-based SPY-1 radars. There may be modifications to the SPY-1 radar to mitigate this potential issue, but the officials told us they do not currently know what modifications could be required because of this need for further study. Depending on spectrum policy and usage in the host nation, this issue may be a long-term challenge over the life of the Aegis Ashore installations regardless of where they are fielded. In addition, urban clutter—which could affect the ability to acquire, maintain track, and perform imaging on long-range targets—could affect the SPY-1 radar. Program documentation states that both the Romanian and Polish Aegis Ashore sites have clutter from urban structures and wind farms.
Urban clutter may require modifications of the radar, such as software modifications, or may require additional testing or affect operations of the Aegis Ashore installation. In addition to the aforementioned VLS and radar issues, developmental uncertainties also exist for the multimission signal processor. We have previously reported that it is behind schedule, with a significant percentage of its software increments still needing to be integrated. This component of Aegis Ashore was unable to demonstrate planned functionality for a radar test event in December 2010, and the Defense Contract Management Agency has identified the multimission signal processor schedule as high risk. As we have reported previously, Aegis Ashore is dependent upon next generation versions of Aegis systems—Aegis 4.0.1 and Aegis 5.0—as well as the SM-3 Block IB interceptor, all of which are still under development.

Aegis Ashore's requirements, acquisition strategy, and overall program content were not stable when the resource baseline—the expected investment in the development and delivery of a product—was established, and subsequent program changes obscure the assessment of program progress. MDA's acquisition directive states that baselines are used to assess programs and program maturity. We have previously reported that baselines provide the best basis for transparency over actual program performance, giving decision makers key information about program progress and cost. Baseline variances give management information about where corrective action may be needed to bring the program back on track. Variation from the baseline can provide valuable insight into program risk and its causes and can empower management to make decisions about how to best handle risks. However, this transparency is limited if the initial baseline is not sound or if the reporting of progress against the baseline obscures actual program cost or performance.

Aegis Ashore's resource baseline, established at the developmental baseline review on June 22, 2010, was initially $813 million. The initial resource baseline established the resources needed to develop and build two Aegis Ashore systems—one test and one operational—and deploy them in the 2015 time frame. In the June 25, 2010 BMDS Accountability Report (BAR) submitted to Congress 3 days after the review, MDA reported a revised resource baseline of $966 million, an increase of $153 million, or 19 percent. According to information provided by the program, the reason for the increase was a refinement of the program requirements and a review of resource estimates provided earlier in fiscal year 2010. Beyond this resource baseline adjustment, the anticipated cost of the program has grown as program plans have developed. By February 2012, program management officials provided information that the program was reporting cost growth of $622 million over the 2010 baseline, for a total cost estimate of $1.6 billion. Aegis BMD management officials provided information attributing the cost growth to changes in the deckhouse fabrication plans, an increase in the cost of the Aegis Weapons system, and a refinement of equipment needs. In addition, the program has adjusted the calculations for the average procurement unit cost (APUC), or the ratio of procurement costs to the number of operational units, across the life of the program. At the developmental baseline review in June 2010, the APUC was based on the test installation in Hawaii.
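The baseline movements just described reduce to simple arithmetic, and the APUC definition makes clear why the unit count alone can swing the reported figure. The sketch below uses the reported baseline numbers; the APUC illustration uses a purely hypothetical procurement cost, since no standalone procurement figure is reported here:

```python
# Aegis Ashore resource baseline growth, using the figures cited above
# (dollars in millions).
initial_baseline = 813   # June 22, 2010 developmental baseline review
revised_baseline = 966   # June 25, 2010 BMDS Accountability Report
later_growth = 622       # additional growth reported by February 2012

increase = revised_baseline - initial_baseline
print(f"Baseline revision: +${increase}M ({increase / initial_baseline:.0%})")  # +$153M, ~19%
print(f"February 2012 estimate: ${revised_baseline + later_growth}M")           # about $1.6 billion

# APUC is procurement cost divided by the number of operational units, so the
# denominator drives the reported figure. Hypothetical procurement cost only:
procurement_cost = 600  # illustrative; not a reported Aegis Ashore figure
for units in (1, 2):
    print(f"APUC with {units} installation(s): ${procurement_cost / units:.0f}M")
```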
By June 2011, the program included two installations—for Romania and Poland—in the APUC. However, at the end of fiscal year 2011, the program changed the quantity to one Aegis Ashore installation. Information provided by the program office states that the increase to two installations occurred due to the addition of all European PAA phases to the program during the year and that the fiscal year 2012 BAR will include only one installation to be consistent with the 2011 BAR. The current estimate for the APUC also has changed. The baseline for the average procurement cost is $272 million for each Aegis Ashore system. Program management officials reported that by February 2012, the estimate for the APUC was $380 million, a 40 percent increase over the baseline unit cost.

Appendix VIII: Ground-based Midcourse Defense (GMD)

MDA has not successfully demonstrated the ability of the second version of the exoatmospheric kill vehicle (EKV), known as the Capability Enhancement II (CE-II), to intercept a target. The first two attempts failed—the first in January 2010 due to a quality control issue and the second in December 2010 due to a design issue. During this second attempted test, MDA launched an intermediate-range target with a simulated reentry vehicle and associated objects. A forward-based radar provided acquisition and track data to the GMD system. In addition, the Sea-based X-band radar provided discrimination data to the GMD system. The GMD interceptor was launched from a silo at Vandenberg Air Force Base, flew as expected to its designated point, and deployed the CE-II EKV, which reached the target and identified the most lethal object but failed to intercept it.

After this failure, the Director, MDA, testified that the agency's top priority was to confirm the root cause, fix it, and successfully repeat the previous flight test. Accordingly, MDA undertook an extensive and rigorous effort to determine the root cause of the failure and develop design solutions to resolve it. The investigation concluded the following: (1) ground testing cannot replicate the environment in which the kill vehicle operates and (2) the CE-II EKV, specifically the inertial measurement unit, requires redesign and additional development, which MDA has undertaken. For example, according to a GMD program official, the program has conducted over 50 component and subcomponent failure investigation and resolution tests. Additionally, the program has developed new testing techniques and special instrumentation to provide additional data in future flight tests. MDA realigned resources from planned 2011 activities to fund the investigation and return-to-intercept activities, including redesign efforts. For example, the program delayed funding the rotation of older fielded interceptors into flight test assets, delayed funding interceptor manufacturing, and delayed purchasing ground-based interceptor (GBI) upgrade kits. However, the agency did continue its efforts to increase the reliability of the interceptors through upgrades and repair of five interceptors, although the refurbishments conducted to date do not fix all known issues or provide a guarantee of reliability.

The cost to confirm the CE-II capability through flight testing has increased from $236 million to about $1 billion due to the flight test failures, as noted in table 4. In addition to the costs of the actual flight tests, the total cost for determining the root cause and developing the design changes has not been fully developed.
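As a simple check of the flight test cost growth just cited, using only the two figures reported above:

```python
# Growth in the cost to confirm CE-II capability through flight testing
# (dollars in millions).
original_estimate = 236
current_estimate = 1_000  # "about $1 billion"

growth = current_estimate - original_estimate
factor = current_estimate / original_estimate
print(f"Cost growth: ~${growth}M, roughly {factor:.1f}x the original estimate")  # ~4.2x
```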
While the cost incurred by MDA to verify the CE-II variant through flight testing, as noted above, is about $1 billion, it does not reflect the costs already expended during development of the interceptor and target. For example, the cost of the flight test excludes nonrecurring development costs, such as the development costs for the interceptor or target and its support as well as those for systems engineering and test and evaluation, among others. Often these are costs that were incurred many years before the flight test was conducted. MDA has not separately reported the nonrecurring development costs for the CE-II interceptor, but instead reports the program acquisition unit costs (which are the development, production, deployment, and military construction costs divided by the total number of operationally configured units) for the combined CE-I and CE-II interceptor effort. For these interceptors, the program acquisition unit costs are reported to be $421 million as of February 2011 and are likely increasing to address the flight test failure. MDA reports the nonrecurring costs for the targets used in these flight tests as $141 million each. Consequently, including nonrecurring development costs for both the CE-II and the targets would substantially increase the costs for each flight test and the overall costs outlined in table 4.

To meet a 2002 presidential directive to deploy an initial missile defense capability by 2004, MDA concurrently matured technology, designed the element, tested the design, and produced and fielded an initial capability. A 2008 MDA briefing acknowledged that fielding while still in the development and test phase led to very risky decisions regarding schedule, product quality, and program cost. For example, the EKV team focused on technical aspects of design instead of also ensuring that the design could be produced, which led to a lack of production control and near continuous engineering changes. While this approach did lead to the rapid fielding of a limited defense, it also resulted in schedule delays, unexpected cost increases, a refurbishment program, and a reduced knowledge of system reliability necessary for program sustainment, as well as variations between delivered CE-I EKVs. (See fig. 11.)

MDA emplaced its first GBI in 2004, although it had little of the data, such as interceptor reliability, that it would normally have had before fielding a system. Accordingly, the Director, MDA, testified in March 2011 that GMD put interceptors "that are more akin to prototypes than production representative missiles in the field." Additionally, interceptors were emplaced in silos before a flight test of this configuration had been successfully conducted. In 2004, MDA committed to another highly concurrent development, production, and fielding strategy for the new CE-II interceptor, approving production before completing development of the prior version or flight testing the new components. MDA proceeded to concurrently develop, manufacture, and deliver 12 of these interceptors even though it has not yet successfully tested this new version. MDA's concurrent approach to developing and fielding assets has disrupted its acquisition efforts, resulted in cost growth and expensive retrofits, and reduced the planned knowledge of the system's capabilities and limitations.
In response to the failure of FTG-06a in December 2010, MDA restructured its fiscal year 2011 manufacturing plan by halting deliveries of remaining CE-II EKVs until the completion of the failure review and a nonintercept attempt in fiscal year 2012. To help mitigate the effect of the production halt, the GMD program planned to perform five limited upgrades to previously manufactured CE-I interceptors. According to contractor officials, in order to keep the production line viable, they were directed to complete five limited interceptor upgrades; however, the program was only able to complete three and expects to complete the other two in fiscal year 2012.

As we previously reported, in 2007 MDA began a refurbishment and retrofit program of the CE-I interceptors to replace questionable parts identified in developmental testing and manufacturing and to develop an overall plan to address known hardware upgrades and service life limitations, issues discovered since the interceptors were emplaced. However, MDA has yet to complete all planned refurbishments of CE-I EKVs, and program officials discovered additional problems during early refurbishments, causing MDA to expand this effort. Consequently, refurbishments are planned to continue for many more years, and the cost to refurbish each CE-I interceptor could range from $14 million to $24 million. Additionally, MDA will have to undertake a major retrofit program for the CE-II EKVs that have already been manufactured and delivered, in addition to the retrofit program for the CE-I GBIs that is already underway. According to GMD program management officials, the final cost for this effort has not been determined, but they expect the effort to cost about $18 million per EKV, resulting in an additional cost of about $180 million for 10 interceptors.

The agency has also had to restructure its flight test program, adding two tests that were not planned before the failure. To verify the new design of the kill vehicle, MDA inserted a nonintercept test scheduled for the third quarter of fiscal year 2012. This test is designed to exercise as many CE-II EKV functions as possible that have not been demonstrated in either FTG-06 or FTG-06a. Performing the nonintercept mission, using an upgraded inertial measurement unit, provides the benefit of scripting the test in order to best stress the EKV design and to fully demonstrate the resolution of the failure in FTG-06a. MDA officials have stated that if the test confirms that the cause of the failure has been resolved, the program will restart the manufacturing and integration of the CE-II EKVs. However, successfully completing an intercept that demonstrates the full functionality of the kill vehicle is necessary to validate that the new design works as intended. MDA added a new intercept flight test (FTG-06b) in the fourth quarter of fiscal year 2012 to demonstrate CE-II intercept capability and achieve the unmet objectives of the two previous tests (FTG-06 and FTG-06a); however, due to further developmental challenges with the EKV, it has been delayed until at least the second quarter of fiscal year 2013. As a result, confirmation that the design works as intended will take place more than 9 years after the decision to begin production and more than 4 years after the first planned test.

Lastly, MDA's continued inability to conduct GMD developmental flight testing has resulted in less knowledge of the fielded system's capabilities and limitations than planned.
For example, GMD has been able to successfully conduct only two intercept tests since 2006—FTG-03a in September 2007 and FTG-05 in December 2008—with the last successful intercept conducted in December 2008. Additionally, GMD has yet to conduct a salvo test. As we reported in our last assessment, GMD cancelled its planned 2011 salvo test due to the failure in the January 2010 flight test and scheduled a salvo test for fiscal year 2015. Consequently, neither the CE-I nor CE-II variant capability is fully understood, and according to the Director, Operational Test and Evaluation's fiscal year 2010 assessment, the continuing evolution of the interceptor design has resulted in multiple interceptor configurations among the fielded interceptors and test assets. These configuration differences complicate assessment of operational capability. GMD's acquisition strategy will continue its high levels of concurrency. Developmental flight testing will continue through 2022, well after the currently planned completion of production. In following this concurrent acquisition strategy, the Department of Defense is accepting the risk that these later flight tests may discover issues that require costly design changes and retrofit programs to resolve.

Appendix IX: Precision Tracking Space System (PTSS)

The Missile Defense Agency's (MDA) PTSS is being developed as a space-based infrared sensor system to provide persistent overhead tracking of ballistic missiles after boost and through the midcourse phase of flight. Being a space-based sensor system, PTSS is not constrained by geographical considerations that affect the placement of ground-, air-, and sea-based radar systems. While the number of PTSS satellites to make up the constellation has not yet been determined, the system is expected to expand the Ballistic Missile Defense System's (BMDS) ability to track ballistic missiles in the post-boost phase and to fill coverage gaps in the current BMDS radar configuration. According to PTSS officials, the constellation will provide coverage of some 70 percent of the earth's surface with a minimum of six satellites. Furthermore, the enhanced coverage planned for PTSS would help increase the size of the missile raids that the BMDS can track and respond to. The PTSS program plans to launch its first two development satellites in the fourth quarter of fiscal year 2017 and to increase the constellation to nine satellites by 2022.

The PTSS program plans to create a satellite constellation that can accommodate subsequent configuration adjustments. The program intends to create a flexible on-orbit and ground architecture that could accommodate such changes as an increase to the constellation size or changes to the communications infrastructure. This flexibility would permit the system to evolve in response to changes in the threat environment.

The PTSS program officially began as a new program in the second quarter of fiscal year 2011. Johns Hopkins University's Applied Physics Laboratory (APL) is the lead system developer for PTSS. In this capacity, APL advises the PTSS program office on systems engineering and integration issues, while leading the other laboratories involved in the development effort.
In early 2011, APL awarded integrated system engineering team subcontracts to six industry partners—Raytheon, Northrop Grumman, Lockheed Martin, Ball Aerospace, Orbital Sciences, and Boeing—to provide manufacturing and producibility recommendations for the development of the PTSS initial article satellites. MDA's decision to involve the laboratories in initial development work is an action that we have previously recommended for other space acquisition programs.

During the course of 2011, the PTSS program made several schedule changes, in part due to budgetary issues. PTSS was scheduled to begin the technology development phase in the fourth quarter of fiscal year 2011 but delayed it until the fourth quarter of fiscal year 2012. One of the key early analytical knowledge points, the establishment of mass raid engagement time windows, was also delayed from the fourth quarter of fiscal year 2011 to the first quarter of fiscal year 2012. Finally, the planned launch date for the first two initial satellites was delayed from the fourth quarter of fiscal year 2015 to the fourth quarter of fiscal year 2017. The PTSS program also delayed the projected launch dates of production satellites for the PTSS constellation.

According to the acquisition strategy report signed in January 2012, MDA plans to develop and acquire the satellites in three phases. First, the APL-led laboratory team will produce two lab-built development satellites. Second, an industry team, selected through open competition while the APL-led laboratory team is still in a development phase, will develop and produce two industry-built engineering and manufacturing development satellites. Third, there will be a follow-on decision for the industry team to produce additional satellites in a production phase. (See fig. 12.)

The strategy acknowledges some concurrency but maintains that there are benefits to this approach. Under the plan, the industry team will be approved for production of long-lead items for its two development satellites while the laboratory team is still working to complete the first two development satellites. The program intends that by engaging industry concurrently at this development stage, industry can influence the selection of parts and subsystems in a manner that will minimize the need for system design changes between the two laboratory development satellites and the two initial industry satellites. The program intends to conduct on-orbit checkout and testing of the two laboratory-produced development satellites prior to the decision to complete the assembly of the two industry-built development satellites. According to MDA, the approach aligns with several aspects of GAO's acquisition best practices: the program will establish firm requirements before committing to production; it will ensure full and open competition; the development cycle will be less than 5 years; the payload design is simple and larger numbers of satellites can be deployed in the constellation; and advanced capabilities are deferred until a second spiral, thereby limiting the technological development challenge for the initial satellites.

According to program management officials, they have taken steps intended to mitigate cost, schedule, and performance risks. PTSS is being designed strictly for BMDS use, so the satellite payload is geared toward the BMDS missile tracking mission, with the objective of keeping the design as simple and stable as possible.
Additionally, the acquisition strategy stipulates that PTSS will not duplicate functions found elsewhere in the BMDS, but instead will remain focused on the specific function for which it is being designed. The program aims to shorten its development schedule through the use of proven technologies with high technology readiness levels. According to PTSS program management officials, the use of currently available technologies helps to keep the PTSS design cost-effective. In addition, according to those officials, the government intends to acquire unlimited data rights, government purpose data rights, or both for the duration of the program, so that the government is not locked in with any particular contractor. Because the PTSS acquisition strategy was only recently developed, we had limited time to assess the strategy for this review. We intend to review this new strategy next year. Building developmental and engineering and manufacturing development satellites is a positive step. However, the strategy may not enable decision makers to fully benefit from the knowledge to be gained and the risk reduction opportunity afforded through on-orbit testing of the lab-built satellites before committing to the industry-built developmental satellites. The industry-built development satellites will be under contract and under construction before on-orbit testing of the first two lab-built satellites can confirm that the design works as intended. Currently, the PTSS program office has not determined how many satellites will make up the PTSS constellation, though the program is proceeding with a flexible approach toward the number of satellites in the constellation. The size of a full PTSS constellation would depend on factors that have yet to be determined, most notably the size of the missile raids that the system would be expected to track. In fiscal year 2011, the program conducted physics-based analysis to demonstrate the system's performance within the BMDS in handling a range of raid scenarios. The satellites for the PTSS constellation are expected to have a 5-year design life, though officials stated that they expect the operational life to exceed 5 years. Relative to other military space programs, the PTSS satellite is intended to be a low-cost unit that can be readily replaced as on-orbit units degrade over time. However, the full cost of development has not yet been determined, and it is currently unclear how many satellites will need to be replaced annually, as this will be determined by such factors as design life and the total number on orbit. The cost to launch a satellite into orbit can also be substantial, sometimes exceeding $100 million. Because the full size of the constellation has not yet been determined, the PTSS program is unable to estimate the anticipated full costs of the acquisition and operation of the system. In leveraging proven technologies with high technology readiness, many of the system's technologies are in relatively high states of maturity for a program in this early stage of development. The program office has identified two PTSS critical technologies: the optical payload and the communications payload. Many of the underlying components for the optical and communications payloads have been demonstrated in an environment relevant to the conditions under which they will be employed in the PTSS satellites.
However, certain key components of these critical technologies require further development to reach maturity, and until these key components mature, they reduce the overall technological maturity of the payloads. Program management officials stated that they plan to have both critical technologies in functional form by the time of the preliminary design review, which is scheduled for the end of fiscal year 2013. The high radiation environment in which the PTSS satellites will operate creates technical challenges for the development effort. The PTSS program has instituted risk reduction measures to address radiation risks pertinent to two technologies. For risk issues pertaining to the focal plane array, the PTSS's risk mitigation efforts are on schedule, with two contracts having been awarded to explore manufacturing processes to address radiation hardness requirements for the satellites' anticipated on-orbit environment. Radiation mitigation efforts are also required for the satellite's star tracker, a component of the system's guidance and control subsystem. The PTSS program plans to award contracts to several vendors in 2012 to evaluate options to address this concern. The PTSS development effort is benefiting from MDA's two operational Space Tracking and Surveillance System (STSS) satellites, which were launched into orbit in 2009. BMDS test events involving STSS have been useful in providing key information to the PTSS program. According to PTSS officials, the success of STSS in the FTM-15 flight test conducted in 2011 served as a "proof of principle" for PTSS, as the event demonstrated multiple aspects of the PTSS concept of operations, such as the ability to provide data from which interceptor missiles could be remotely launched and directed toward a missile threat. The FTM-12 flight test in late 2011 repeated the positive results noted in FTM-15, with tracking sensors locking onto targets and successfully providing direction for the fired interceptors. The STSS tests are assisting the PTSS program office as it develops the system's concept of operations.

Appendix X: Targets and Countermeasures

The Missile Defense Agency's (MDA) Targets and Countermeasures program designs, develops, produces, and procures missiles serving as targets for testing missile defense systems. The targets program involves multiple acquisitions covering the full spectrum of threat missile capabilities (separating and nonseparating reentry vehicles, varying radar cross sections, countermeasures, etc.) and ranges. Some targets have been used by MDA's test program for years, while others have recently been or are now being developed and can represent more complex threats. In 2003, MDA consolidated its target acquisitions under a single prime contractor in order to develop new targets with increased capability. At that time, MDA began work on the 72-inch diameter launch vehicle (LV)-2 target and the 52-inch diameter targets. When this approach proved more costly and less timely than expected, MDA suspended the 52-inch effort, focusing on the LV-2. Responding to congressional concern about these problems and our 2008 recommendations, MDA revised its acquisition approach in 2009, seeking to increase competition by returning to a multiple contract strategy with four separate target classes and a potential of four prime contractors. MDA completed the intermediate-range target contract award, which reduced target costs. However, as proposals for the new medium-range ballistic missile (MRBM) contract were submitted, the program determined that costs associated with this approach were higher than anticipated.
Solicitations for the medium-range and the intercontinental classes of targets were then canceled, and MDA began the process of revising its acquisition strategy for the third time. In the past, we have reported that the availability and reliability of targets caused delays in MDA's testing of Ballistic Missile Defense System (BMDS) elements. However, in fiscal year 2011, MDA delivered 11 targets, all of which were successfully launched and did not negatively affect the test program. The targets launched during the year supported tests of several different BMDS elements, including Ground-based Midcourse Defense (GMD), Aegis Ballistic Missile Defense, and Patriot systems. Deficiencies previously identified with the air-launched target were satisfactorily addressed when the target missile was successfully extracted from the rear of the C-17 aircraft in FTX-17. To reduce risk, the flight was not planned as an intercept mission but as a target of opportunity for several emerging missile defense technologies, including the Space Tracking and Surveillance System. According to MDA and Director, Operational Test & Evaluation test officials, the availability of targets has affected planned future flight tests. MDA has scheduled the first two extended medium-range ballistic missiles (eMRBM) to launch in a crucial operational flight test (FTO-01) by the end of 2012, which is the first system-level test of the BMDS. On a tight schedule to meet this deadline, MDA is accepting higher risk that target issues could affect this test by launching the first two of the new targets in this operational test, rather than conducting a risk reduction flight first. Risk reduction flight tests are conducted the first time a system is tested in order to confirm that it works before adding other test objectives. The lack of such a test was one factor that delayed a previous GMD flight test (FTG-06) in 2010. While the target, the LV-2, was successfully flown in that flight test, aspects of its performance were not properly understood, and the lack of modeling data prior to the test contributed to significant delays in the test program. In addition, the next air-launched target test was scheduled to use the new medium-range extended air-launched target in 2012, but the flight test—FTT-13—was cancelled because of budgetary concerns and test efficiency. As a result, the first flight test using this target is not planned until the third quarter of fiscal year 2014, though it may be available for use as early as the fourth quarter of fiscal year 2012. Since the short-range air-launched target was successfully launched in July 2011, MDA now plans to continue acquisition of the one short-range and the two extended air-launched targets that are currently under contract through fiscal year 2014. Some of these targets may be drawn from inventory and used for an earlier test and be replaced by newer missiles. The Targets and Countermeasures program made several key decisions in fiscal year 2011 that will shape future target acquisition. Two key contracts were definitized in 2011: the eMRBM contract in October 2011 and an intermediate-range ballistic missile (IRBM) target contract in March 2011. MDA realigned funding planned for the medium-range competition, which was canceled in 2010, to manufacture additional IRBM targets. MDA canceled the planned intercontinental ballistic missile (ICBM) competition because the new test plan delays the need for the first ICBM target by several years. Finally, MDA issued an undefinitized contract action to the prime contractor for reentry vehicles.
One overall consequence of these decisions has been a consolidation of work with the prime contractor. (See table 5.) An agreement on price was reached for the production of five eMRBM targets in September 2011. MDA began developing the eMRBM for operational use in 2003 as part of the Flexible Target Family, when it was referred to as the 52-inch target. Though development and production had been on hold since 2008 because of continuing cost and schedule problems, MDA resumed acquisition of eMRBMs through the existing prime contractor following a target failure. The production contract was definitized in October 2011 after being undefinitized for about 540 days. The Defense Federal Acquisition Regulation Supplement states that undefinitized contract actions shall provide for definitization by the earlier of either 180 days after issuance of the action or the date on which more than 50 percent of the not-to-exceed price has been obligated. The 180-day threshold may be extended but may not exceed the date that is 180 days after the contractor submits a qualifying proposal. MDA program officials stated that because MDA continued to change the requirements on the undefinitized contract action, the contractor did not submit a qualifying proposal until March 2011. MDA definitized the contract approximately 194 days after receiving the proposal. During the 18-month delay, while the contract was being negotiated and requirements continued to change, the contractor spent over $82 million, the quantity of targets under contract increased, and some capability was deferred to later years. The final negotiated price at completion was $321 million, $175 million less than the previously expected price ceiling. MDA contracting officials acknowledged that undefinitized contract actions can lead to undefined costs, but believe they are a good tool for meeting urgent requirements. MDA also issued additional undefinitized contract actions in fiscal year 2011. The first was for T3 targets, which have been accelerated in the test plan to the first quarter of fiscal year 2014. T3s are unique targets designed for more specialized maneuvers in their respective ranges. Second, an action was issued for a foreign military asset target to meet a fourth quarter of fiscal year 2012 requirement. Third, an action was issued for eight common reentry vehicles, which will replace earlier ones. MDA set up a common components project office to manage the acquisition strategy for the reentry vehicles, which are intended for flight tests in mid-2014. They have the potential to fly on any target launch vehicle, but the program is still developing more specific acquisition plans. In 2011, MDA began implementing its third acquisition strategy for targets by acquiring common reentry vehicles from a single source, a significant change in the acquisition strategy for the program office. Reentry vehicles for targets were previously acquired separately, were more specifically tailored to the target launch vehicle, and were procured from more than one contractor. The single-source strategy implemented with the 2011 undefinitized contract action is intended to maximize commonality and could reduce costs through purchasing larger numbers. Through 2013, the single source will be the targets prime contractor. MDA plans to decide in the second quarter of fiscal year 2012 whether to issue a competitive solicitation for a new provider.

Appendix XI: Terminal High Altitude Area Defense (THAAD)

THAAD is a rapidly deployable ground-based system designed to defend against short- and medium-range ballistic missile attacks during their late midcourse and terminal stages.
A THAAD battery consists of interceptor missiles, six launchers, a radar, a fire control and communications system, and other support equipment. The program is producing batteries for initial operational use under a conditional materiel release to the Army. For this to occur, the Army must certify that the batteries are safe, suitable, and logistically supported. The date for full materiel release has not yet been determined because the program is still conducting flight tests to prove out the system, and production rates have been slower than planned. Results from recent testing will support the Army's assessment of the THAAD system. In addition, the Director, Operational Test and Evaluation, will independently evaluate the operational effectiveness of the system. The assessment of this event will support upcoming production and fielding decisions. The Missile Defense Agency (MDA) awarded a contract for THAAD's first two operational batteries in December 2006, before its design was mature and developmental testing of all critical components was complete. At that time, MDA's first THAAD battery, consisting of 24 interceptors, 3 launchers, and other associated assets, was to be delivered to the Army as early as 2009. While some assets were delivered by this time, the interceptors were delayed because of issues with components that had not passed all required testing. In response to pressure to accelerate fielding the capability, THAAD adopted a highly concurrent development, testing, and production effort, as shown in figure 13, that has increased program costs and delayed fielding of the first THAAD battery until early fiscal year 2012. A production contract was signed in 2006 before the requirements or design for a required safety device called an optical block was complete. Housed in the flight sequencing assembly, an optical block is an ignition safety device designed to prevent inadvertent launches of the missile. The program experienced design and qualification issues with this component until testing was complete in the fourth quarter of fiscal year 2011. Incorporating an optical block device into the THAAD interceptor has been a primary driver of design, qualification, and production delays for the program since as early as 2003, shortly after the Army issued a standard requirement for this type of safety device on munitions ignition systems. The original THAAD design did not have an optical block device, and MDA did not modify the development contract to include this requirement until 2006. Program management officials explained that the military standard is primarily written for smaller, more typical munitions fuses, not systems as technically complex as THAAD. According to program management officials, THAAD has worked with the Army to tailor the requirements and associated testing required of the optical block device during the past few years. The part failed initial qualification testing in early fiscal year 2010 and was not fully qualified until that September. Also, in May 2010, the Army added requirements to test the flight sequencing assembly during exposure to electrical stress and other environments, such as extreme temperature, shock, humidity, and vibration. Testing failures led THAAD to make minor design changes and extensive manufacturing process changes, which required requalification of the optical block and delayed production of the interceptors.
Environmental testing was completed in March 2011, but the stress test was not completed until September 2011—after the first interceptor was produced. As recently as fiscal year 2011, the program was considering further design changes to the optical block to make it more producible; however, the program estimated that the cost to make the needed design changes would be $150 million, an investment that could not be easily recouped in production savings in the near future. Program managers decided not to make those changes because of improved flight sequencing assembly and optical block manufacturing performance and program funding constraints. The current design was also successfully demonstrated in the recent flight test and in the other testing in support of conditional materiel release. Therefore, the program determined that the benefits of continuing the redesign no longer justified the cost. Production issues have collectively delayed interceptor delivery by 18 months and are projected to cost the program almost $40 million. While issues with the flight sequencing assembly have been the most costly, three production start-up issues emerged in fiscal year 2011 that also caused delays. First, the program encountered problems with the availability of a solution containing nitrogen needed for production. Program management officials explained that because unanticipated design changes in the delivery mechanism prevented all of the liquid from being extracted from a newly designed bottle, more had to be ordered before production could continue, which caused the delay. Another production delay of over a month took place because of debris found in a transistor on the interceptor. Program management officials explained that a root cause analysis determined that the part had not undergone proper testing, which would have detected such debris. The transistors had to be replaced with properly tested parts. A third delay occurred because ragged, raised edges were discovered inside several of the fuel tanks. According to program management officials, in the unlikely event that a small metal edge broke off during pressurization of the fuel tank, it could cause an interceptor failure. They said that after conducting a risk analysis, the program decided to remove the rough edges on future procurements, but not on the first 50 interceptors, since the possibility of such risk was low. The interceptor's flight sequencing assembly is currently being produced at or above the expected rate of about four per month. Due to start-up issues, which are common to new production lines, interceptor production rates have fluctuated, ranging from zero to five per month in recent months. Also, some recent production rates could be artificially high, as delays with some components have allowed others more time than usual to stockpile for future production. These stockpiles are projected to help with production through the second battery. The program needs to achieve a steady production rate in order to deliver the second THAAD battery by July 2012. After this date, the contractor is scheduled to return to a rate of three interceptors per month. The Army also required that three flight sequencing assembly units complete a series of tests to evaluate the interceptor in various electrical and other stressing environments. By the end of fiscal year 2011, all these tests had been successfully completed.
While THAAD has performed all test events required for conditional materiel release, including its most recent flight test (FTT-12), analysis of data is ongoing and the Army is still refining its requirements for full materiel release. Program management officials expect the gap in knowledge between conditional materiel release and full materiel release to be defined in the second quarter of fiscal year 2012. At that time, they explained, the Army will have developed a list of the remaining conditions that the program must address in order to receive full materiel release. One of the conditions that must be met to achieve full materiel release of THAAD to the Army is the incorporation of the required Thermally Initiated Venting System, a safety feature of the interceptor that prevents the boost motor from becoming propulsive or throwing debris beyond a set distance in the event that the canister holding the interceptor heats up to a certain temperature. Development and testing of this system have been done concurrently with production of fielded interceptors. Even if the latest design and near-term testing are successful, the system will be approved too late to be incorporated in the first 50 interceptors. Although the system is not required for conditional materiel release, the program expects it to be required for full materiel release, unless the Army grants a waiver. Since the last two developmental tests of this safety feature have failed, THAAD is at risk of not complying with the requirement. The next test is scheduled for the second quarter of fiscal year 2012. According to program management officials, if it fails, the program will be forced to seek a waiver for the current design and accept the risk of not having the design on the interceptors. Program management officials explained that the requirement for a Thermally Initiated Venting System is primarily written for smaller-scale systems, not for a system as large as THAAD. Although officials said they are working to comply with the requirement, the technology may not be available to make it work. At best, the program could not incorporate the safety system into the interceptor until production of the third battery. The Army has approved fielding the first 48 interceptors configured without the safety system based on available testing, and it has chosen to accept the associated risk. While MDA is committed to producing four THAAD batteries, more flight tests are needed to achieve two remaining MDA developmental knowledge points set for the program. Both are tied to flight tests that were, at one time, planned for fiscal year 2011 but were delayed into later fiscal years. MDA's knowledge points identify information required to make key decisions throughout the program and are typically defined early in the acquisition phase to manage program risks. Although the success of the first operational test increases confidence in THAAD, we have reported that good acquisition outcomes require high levels of knowledge before significant decisions are made. The building of knowledge consists of information that should be gathered at critical points over the course of a program before committing to production. To achieve the first remaining MDA knowledge point, THAAD must conduct an integrated flight test against a medium-range ballistic missile target.
This test was originally scheduled for the second quarter of fiscal year 2011, but after an air-launched target failure in December 2009 and subsequent target availability issues, the agency moved the test to the third quarter of fiscal year 2012. Later in fiscal year 2011, the test was cancelled altogether because of budgetary concerns and test efficiency. The agency now plans to test the objective in the first BMDS operational test (FTO-01) in late fiscal year 2012. This test is not only planned as the first against a medium-range target for THAAD, but it will also be the first flight of the newly developed extended medium-range ballistic missile target. Assuming several new "firsts" during this high-level operational test poses significant additional risk for the agency and for achieving the knowledge point. The second knowledge point is to demonstrate the advanced discrimination capability, in terminal mode, of THAAD's Army Navy/Transportable Radar Surveillance-Model 2 radar. This knowledge point was delayed from the first quarter of fiscal year 2010 into the fourth quarter of fiscal year 2011 because of the same 2009 target issue. However, this knowledge point was not accomplished in 2011 either. Additional changes to the flight test plan in 2011 moved this objective to a flight test scheduled for the third quarter of fiscal year 2013. As THAAD continues to gather data from these developmental flight tests, the program continues to concurrently produce interceptors, launchers, and associated equipment for operational use. As a result, the program is at risk of discovering new information that could lead to costly design changes and a need to retrofit missiles either already in the production process or in inventory.

In addition to the contact named above, David B. Best, Assistant Director; Letisha J. Antone; Ivy Hübler; LaTonya Miller; Jonathan A. Mulcare; Kenneth E. Patton; John H. Pendleton; Karen Richey; Ann Rivlin; Luis E. Rodriguez; Steven Stern; Robert Swierczek; Hai V. Tran; and Alyssa Weir made key contributions to this report.
MDA has spent more than $80 billion since its initiation in 2002 and plans to spend $44 billion more by 2016 to develop, produce, and field a complex integrated system of land-, sea-, and space-based sensors, interceptors, and battle management, known as the BMDS. Since 2002, National Defense Authorization Acts have mandated that GAO prepare annual assessments of MDA's ongoing cost, schedule, testing, and performance progress. This report assesses that progress in fiscal year 2011. To do this, GAO examined the accomplishments of the BMDS elements and supporting efforts and reviewed individual element responses to GAO data collection instruments. GAO also reviewed pertinent Department of Defense (DOD) policies and reports, and interviewed a wide range of DOD, MDA, and BMDS officials. In fiscal year 2011, the Missile Defense Agency (MDA) experienced mixed results in executing its development goals and Ballistic Missile Defense System (BMDS) tests. For the first time in 5 years, GAO found that all of the targets used in this year's tests were delivered and performed as expected. However, none of the programs GAO assessed were able to fully accomplish their asset delivery and capability goals for the year. Flight test failures, an anomaly, and delays disrupted the development of several components, and models and simulations challenges remain. Flight test failures forced MDA to suspend or slow production of three out of four interceptors currently being manufactured while failure review boards investigated their test problems. To meet the President's 2002 direction to rapidly field and then update missile defense capabilities, as well as the 2009 announcement to deploy missile defenses in Europe, MDA has undertaken and continues to undertake highly concurrent acquisitions. Concurrency is broadly defined as the overlap between technology development and product development or between product development and production. While some concurrency is understandable, committing to product development before requirements are understood and technologies are mature, or committing to production and fielding before development is complete, is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. It can also create pressure to keep producing to avoid work stoppages. In contrast, as shown in the notional graphic below, successful programs that deliver promised capabilities for the estimated cost and schedule use a disciplined knowledge-based approach. High levels of concurrency were present in MDA's initial efforts and are present in current efforts, though the agency has begun emphasizing the need to follow knowledge-based development practices. During 2011, the Ground-based Midcourse Defense, the Aegis Standard Missile-3 Block IB, and the Terminal High Altitude Area Defense experienced significant ill effects from concurrency. For example, MDA's discovery of a design problem in a new variant of the Ground-based Midcourse Defense program's interceptors while production was underway increased costs, may require retrofit of fielded equipment, and delayed delivery. The cost of flight testing to confirm the variant's capability has increased from $236 million to about $1 billion. Because MDA continues to employ concurrent strategies, it is likely that it will continue to experience these kinds of acquisition problems. GAO makes seven recommendations to the Secretary of Defense to reduce concurrency and strengthen MDA's near- and long-term acquisition prospects.
DOD concurred with six recommendations and partially concurred with one related to reporting on the cause of the Aegis BMD Standard Missile-3 Block IB test failure before committing to additional purchases. DOD did not agree to tie additional purchases to reporting the cause of the failure. DOD’s stated actions were generally responsive to problems already at hand, but did not consistently address implications for concurrency in the future, as discussed more fully in the report.
Several factors constrain CNMI's economic potential, including the lack of diversification, scarce natural resources, small domestic markets, limited infrastructure, and shortages of skilled labor. The United States exercises sovereignty over CNMI, and, in general, federal laws apply to CNMI. However, federal minimum wage provisions and federal immigration laws do not apply. CNMI immigration policies and the demand for labor by the garment manufacturing industry and tourism sector have resulted in rapid population growth since 1980, such that the majority of the population are non-U.S. citizens. (See fig. 1.) According to U.S. Census Bureau data for 2000, the most recent census data available, about 56 percent of the CNMI population of 69,221 were not U.S. citizens. According to U.S. Census Bureau data for 2000, the median household income in CNMI was $22,898, a little more than half of the U.S. median household income of almost $42,000 for 2000. The percentage of individuals in poverty in 2000 was 46 percent, nearly four times the continental U.S. rate of 12 percent in that same year. CNMI's economy depends on two industries, garment manufacturing and tourism, for its employment, production, and exports. These two industries rely heavily on a noncitizen workforce, which represents more than three-quarters of the labor pool subject to the CNMI minimum wage, which is lower than the U.S. minimum wage. The garment industry, for example, uses textiles and labor imported mostly from China. A 1999 study found that garment manufacturing and tourism accounted for about 85 percent of CNMI's total economic activity and 96 percent of its exports. A 2005 estimate of CNMI's gross domestic product (GDP) suggests that, in 2002, the garment industry contributed roughly 40 percent of CNMI's GDP and 47 percent of payroll. However, recent changes in trade laws have increased foreign competition for CNMI's garment industry, while other external events have negatively affected its tourism sector. Recent developments in international trade laws have reduced CNMI's trade advantages, and the garment industry has declined in recent years. Historically, while garment exporters from other countries faced quotas and duties in shipping to the U.S. market, CNMI's garment industry benefited from quota-free and duty-free access to U.S. markets for shipments of goods in which 50 percent of the value was added in CNMI. In recent years, however, U.S. agreements with other textile-producing countries have liberalized the textile and apparel trade. For example, in January 2005, in accordance with one of the 1994 World Trade Organization (WTO) Uruguay Round agreements, the United States eliminated quotas on textile and apparel imports from other textile-producing countries, leaving CNMI's apparel industry to operate under stiffer competition, especially from low-wage countries such as China. According to a DOI official, more than 3,800 garment jobs were lost between April 2004 and the end of July 2006, with 10 out of 27 garment factories closing. U.S. Department of Commerce data show that the value of CNMI shipments of garments to the United States dropped by more than 16 percent between 2004 and 2005, from about $807 million to $677 million, and down from a peak of $1 billion in 1998 through 2000. In 2006, reported garment exports to the United States fell further, by an estimated 25 percent compared to 2005, with exports declining to an estimated $497 million.
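The percentage decline cited above follows directly from the reported shipment values. As an illustrative check (not part of the underlying report), the arithmetic can be sketched in a few lines of Python using only the dollar figures given in this statement:

```python
# Percent decline between two reported values: (old - new) / old * 100.
def percent_decline(old_value, new_value):
    return (old_value - new_value) / old_value * 100

# Reported garment shipments to the United States, in millions of dollars.
shipments_2004, shipments_2005 = 807, 677

# Prints 16.1, consistent with the "more than 16 percent" decline cited above.
print(round(percent_decline(shipments_2004, shipments_2005), 1))
```

The 2006 figure is an estimate, so the same computation is not applied to it here.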
The reported level of shipments to the United States in 2006 was comparable to levels of sales in 1995 and 1996, prior to the significant build-up of the industry. (See fig. 2.) In December 2006, the largest and oldest garment factory closed. Given that the garment industry is significant to CNMI's economy, these developments will likely have a negative financial effect on government revenue. For example, reported fees collected by the government on garment exports fell 37 percent, from $38.6 million in 2000 to $24.4 million in 2005. CNMI's tourism sector experienced a sharp decline in the late 1990s, and a series of external events have further hampered the sector. Tourism became a significant sector of economic activity in CNMI by the mid-1980s and continued to grow into the 1990s. Due to CNMI's proximity to Asia, Asian economic trends and other events have a direct effect on its economy. For example, tourism in CNMI experienced a sharp decline in the late 1990s with the Asian financial crisis and the cancellation of Korean Air service to CNMI following an airplane crash on Guam in August 1997. (See fig. 3.) Visitors from Korea, the second largest source of tourists, decreased by 85 percent from 1996 to 1998. After a modest recovery in 2000, tourism faltered again with the September 11, 2001, terrorist attacks on the United States. In 2003, according to CNMI officials, tourism slowed—with a double-digit decline in arrivals for several months—in reaction to the SARS epidemic and to the war in Iraq. Tourism in CNMI is also subject to changes in airline practices. For example, Japan Airlines (JAL) withdrew its direct flights between Tokyo and Saipan in October 2005, raising concerns because roughly 30 percent of all tourists and 40 percent of Japanese tourists arrive in CNMI on JAL flights, according to CNMI and DOI officials. The Marianas Visitors Authority's June 2006 data show that the downward trend in Japanese arrivals is not being offset by growth in arrivals from other markets such as China and South Korea, with the total number of foreign visitors dropping from 43,115 in June 2005 to 38,510 a year later. At the same time, CNMI has experienced an increase in Chinese tourists in recent years, which offers the potential to reenergize the industry. The fiscal condition of CNMI's government steadily weakened from fiscal year 2001 through fiscal year 2005, the most recent year for which audited financial statements for CNMI were available. In addition, several indicators point to a severe financial crisis in fiscal year 2006. As shown in figure 4, CNMI's reported governmental fund balance declined from a positive $3.5 million at the beginning of fiscal year 2001 to a deficit of $84.1 million by the end of fiscal year 2005, as CNMI's expenditures for its governmental activities consistently exceeded revenues in each year since fiscal year 2002. Most of CNMI's basic services, such as public safety, health care, general administration, streets and parks, and security and safety, are reported in its governmental activities, or governmental funds. The fund balance (or deficit) for these activities reflects the amount of funds available at the end of the year for spending. A significant contributing factor to the gap between expenditures and revenues is that actual expenditures have exceeded budgeted expenditures in each fiscal year during the period 2001 through 2005.
Another measure of fiscal health is net assets for governmental activities, which represents total assets minus total liabilities. As shown in table 1, CNMI has experienced a negative trend in its balance of net assets for governmental activities, going from a reported positive $40.6 million balance at the end of fiscal year 2001 to a negative $38 million balance at the end of fiscal year 2005. The primary difference between the fund balance measure and net assets is that net assets include capital assets and long-term liabilities, whereas the fund balance figure focuses on assets available for current period expenditures and liabilities that are due and payable in the current period. In order to finance its government activities in an environment where expenditures have exceeded revenues, CNMI has increased its debt and has not made the required contributions to its retirement fund. CNMI's reported balance of notes and bonds payable increased from $83 million in fiscal year 2002 to $113 million in fiscal year 2005, an increase of 36 percent. CNMI's balance owed to its pension fund increased from $72 million in 2002 to $120 million in 2005, an increase of 67 percent. CNMI has also been incurring penalties on the unpaid liabilities to the pension fund. The total amount of assessed penalties was $24 million as of September 30, 2005. As shown in figure 5, CNMI's reported debt to assets ratio has increased significantly, from 89.8 percent in fiscal year 2002 to 113.5 percent in 2005. In other words, at the end of fiscal year 2005, CNMI owed $1.14 for every $1.00 in assets that it held. Although CNMI's audited fiscal year 2006 financial statements are not yet available, indicators point to a severe fiscal crisis during fiscal year 2006. In a May 5, 2006, letter to CNMI legislative leaders, Governor Benigno R. Fitial stated that "the Commonwealth is facing an unsustainable economic emergency. . . . I regret to say that the nature and extent of these financial problems are such that there is no simple or painless solution." CNMI has implemented several significant cost-cutting and restructuring measures during fiscal year 2006. For instance, in August 2006, CNMI enacted its Public Law No. 15-24 to implement "austerity holidays" consisting of biweekly furloughs, during which government employees are not paid and many government operations are closed. This measure was taken to help alleviate the financial crisis by saving millions of dollars in both personnel and operational costs. The measure declared unpaid holidays once per pay period for the remainder of fiscal years 2006 and 2007, reducing the government's normal pay period to 72 hours every 2 weeks. In June 2006, CNMI enacted Public Law No. 15-15 to authorize the CNMI government to suspend the government's employer contributions to the retirement fund for the remainder of fiscal years 2006 and 2007. In addition, CNMI has passed laws to restructure loans among its component units, reform the rate of compensation for members of boards and commissions, increase the governor's authority to reprogram funds, extend the date for full funding of the retirement fund's defined benefit plan, and create a defined contribution retirement plan for government employees hired on or after January 1, 2007. These measures are immediate and dramatic, and they are indicative of severe financial problems that will likely call for long-term solutions.
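To make the relationship between the two fiscal measures discussed above concrete, the following minimal sketch shows how net assets and the debt to assets ratio are derived. The asset and liability totals below are hypothetical placeholders; the report provides only the resulting ratio (113.5 percent for fiscal year 2005), not the underlying totals.

```python
# Minimal sketch of the two fiscal measures discussed above.
# The asset and liability totals here are hypothetical; the report
# gives only the resulting ratio (113.5 percent in fiscal year 2005).

def net_assets(total_assets, total_liabilities):
    # Net assets for governmental activities: total assets minus total liabilities.
    return total_assets - total_liabilities

def debt_to_assets_percent(total_liabilities, total_assets):
    # Debt to assets ratio, expressed as a percentage.
    return total_liabilities / total_assets * 100

assets, liabilities = 100.0, 113.5  # hypothetical totals, in millions of dollars

print(net_assets(assets, liabilities))              # -13.5 (liabilities exceed assets)
print(debt_to_assets_percent(liabilities, assets))  # 113.5, i.e., $1.14 owed per $1.00 of assets
```

A ratio above 100 percent is simply another way of saying that net assets are negative, consistent with the negative fiscal year 2005 balance discussed above.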
CNMI has had long-standing financial accountability problems, including the late issuance of its single audit reports, the inability to achieve unqualified ("clean") audit opinions on its financial statements, and numerous material weaknesses in internal controls over financial operations and compliance with laws and regulations governing federal grant awards. CNMI received a reported $65.6 million in federal grants in fiscal year 2005 from a number of federal agencies. The five largest federal grantors to CNMI in 2005 were the Departments of Agriculture, Health and Human Services, Interior, Homeland Security, and Labor. As a nonfederal entity expending more than $500,000 a year in federal awards, CNMI is required to submit a single audit report each year to comply with the Single Audit Act, as amended. Single audits are audits of the recipient organization—the government in the case of CNMI—that focus on the recipient's financial statements, internal controls, and compliance with laws and regulations governing federal grants. One of the objectives of the act is to promote sound financial management, including effective internal controls, with respect to federal expenditures of the recipient organization. Single audits also provide key information about the federal grantee's financial management and reporting and are an important control used by federal agencies for overseeing and monitoring the use of federal grants. For fiscal years 1997 through 2005, CNMI did not submit its single audit reports by the due date, which is generally no later than 9 months after the fiscal year end. CNMI's late submission of single audit reports means that the federal agencies overseeing federal grants to CNMI did not have current audited information about CNMI's use of federal grant funds. As shown in table 2, CNMI's single audit submissions were significantly late for fiscal years 1997 through 2004. However, CNMI made significant progress in 2005, submitting its fiscal year 2005 single audit report less than 1 month late. Auditors are required by OMB Circular No. A-133 to provide opinions (or disclaimers of opinion, as appropriate) as to whether (1) the financial statements are presented fairly in all material respects in conformity with generally accepted accounting principles (GAAP) and (2) the auditee complied with laws, regulations, and the provisions of contracts or grant agreements that could have a direct and material effect on each major federal program. The CNMI government has been unable to achieve unqualified ("clean") audit opinions on its financial statements, receiving qualified opinions on the financial statements issued for fiscal years 1997 through 2005. Auditors render a qualified opinion when they identify one or more specific matters that affect the fair presentation of the financial statements. The effect of the auditors' qualified opinion can be significant enough to reduce the usefulness and reliability of CNMI's financial statements. CNMI has made some progress in addressing the matters that resulted in the qualified opinions on its financial statements for fiscal years 2001 through 2003. However, some of the issues continued to exist in 2004 and 2005.
The auditors identified the following issues in fiscal year 2005 that resulted in the most recent qualified audit opinion: (1) inadequacies in the accounting records regarding taxes receivable, advances, accounts payable, tax rebates payable, other liabilities and accruals, and the reserve for continuing appropriations, (2) inadequacies in accounting records and internal controls regarding the capital assets of the Northern Marianas College, and (3) the lack of audited financial statements for the Commonwealth Utilities Corporation, which represents a significant component unit of CNMI. Auditors for CNMI also rendered qualified opinions on CNMI’s compliance with the requirements for major federal award programs from 1997 through 2005. In fiscal year 2005, the auditors cited noncompliance in the areas of allowable costs, cash management, eligibility, property management, procurement, and other requirements. CNMI has long-standing and significant internal control weaknesses over financial reporting and compliance with requirements for federal grants. Table 3 shows the number of material weaknesses and reportable conditions for CNMI for fiscal years 2001 through 2005. The large number and the significance of reported internal control weaknesses raise serious questions about the integrity and reliability of CNMI’s financial statements and its compliance with requirements of major federal programs. Furthermore, the lack of reliable financial information hampers CNMI’s ability to monitor programs and financial information such as revenues and expenses and to make timely, informed decisions. CNMI’s 13 internal control reportable conditions for fiscal year 2005, 9 of which were material weaknesses, indicate a lack of sound internal control over financial reporting needed to provide adequate assurance that transactions are properly recorded, assets are properly safeguarded, and controls are adequate to prevent or detect fraud, waste, abuse, and mismanagement. For example, one of the material internal control weaknesses that the auditors reported for CNMI’s government for fiscal year 2005 was the lack of audited fiscal year 2005 financial statements of the Commonwealth Utilities Corporation (Corporation), a significant component unit of CNMI. Because the Corporation’s financial statements were unaudited, the auditors could not determine the propriety of account balances presented in the financial statements that would affect CNMI’s basic financial statements. CNMI’s auditors also reported other significant material internal control weaknesses that have continued from previous years, such as improper tracking and lack of support for advances to vendors, travel advances to employees, liabilities recorded in the General Fund, and tax rebates payable. Due to the lack of detailed subsidiary ledgers and other supporting evidence, the auditors could not determine the propriety of these account balances. According to the auditors, the effect of these weaknesses is a possible misstatement of expenditures and related advances and liabilities, which also resulted in a qualification of the opinion on the fiscal year 2005 CNMI financial statements. Consequently, CNMI’s financial statements may not be reliable. As shown in table 3, auditors also reported 38 reportable conditions in CNMI’s compliance with requirements for major federal programs and the internal controls intended to ensure compliance with these requirements. Two of these reportable conditions were considered material weaknesses. 
One of the two material internal control weaknesses affecting compliance with federal programs reported for CNMI's government for fiscal year 2005 was the failure to record expenditures for the Medical Assistance Program when they were incurred. Specifically, the auditors identified expenditures in fiscal year 2005 for billings from service providers for services rendered in previous years. The effect of this weakness is that expenditures reported to the grantor agency, the U.S. Department of Health and Human Services, are based on the paid date and not, as required, the service date. In addition, actual expenditures incurred during the year are not properly recorded, and, therefore, current year expenditures and unrecorded liabilities are understated. The other material weakness affecting compliance related to the lack of adherence to established policies and procedures for managing and tracking property and equipment purchased with federal grant funds. As a result, CNMI's government was not in compliance with federal property standards and its own property management policies and procedures. The other 36 reportable conditions concerned compliance with requirements regarding allowable costs; cash management; eligibility; equipment and property management; matching, level of effort, and earmarking; procurement and suspension and debarment; reporting; subrecipient monitoring; and special tests and provisions that are applicable to CNMI's major federal programs. In CNMI's corrective action plan for fiscal year 2005, CNMI officials agreed with almost all of the auditors' findings. According to its fiscal year 2005 corrective action plan, CNMI is working to get a current audit of its component unit, the Commonwealth Utilities Corporation. Other planned actions include properly reconciling advances to vendors; reviewing travel advance balances and making adjustments as needed, including making payroll deductions if expense vouchers are not filed timely; implementing procurement receiving procedures for prepaid items; making necessary corrections to its automated tax system to enable auditors to better review tax returns; determining the correct balances for construction projects; implementing controls over verifying eligibility for Medicaid and restricting access to the related data; and ensuring proper completion of inventories. The plan provides that most of the findings will be addressed by the end of fiscal year 2007. It is important to note, however, that many of the auditors' findings, particularly those categorized as material weaknesses, are long-standing findings, going back in some cases to 1987. OIA has ongoing efforts to support economic development in CNMI and assist CNMI in addressing its accountability issues. OIA has in the last 3 years sponsored conferences in the United States and business-opportunity missions in the insular areas to attract American businesses to the insular areas. The main goal of these efforts is to facilitate interaction and the exchange of information between U.S. firms and government and business officials from the insular areas to spur new investment in a variety of industries. Innovative projects, such as setting up a production and mass mailing facility in CNMI aimed at the Japanese market, are reported to be underway. OIA's efforts to help create links between the business communities in the United States and CNMI are key to helping meet some of the economic challenges.
In our recent report, we concluded that the insular areas would benefit from formal periodic OIA evaluation of its conferences and business-opportunity missions, including assessments of the cost and benefit of its activities and the extent to which these efforts are creating partnerships with businesses in other nations. In our December 2006 report, we recommended that OIA conduct such formal periodic evaluations to assess the effect of these activities on creating private sector jobs and increasing insular area income. OIA agreed with our recommendation. DOI's OIA and IG, other federal inspectors general, and local auditing authorities assist or oversee CNMI's efforts to improve its financial accountability. OIA monitors the progress of completion and issuance of the single audit reports and provides general technical assistance funds for training insular area employees as well as funds to enhance financial management systems and processes. DOI's IG has audit oversight responsibilities for federal funds in the insular area. To promote sound financial management processes in the insular area government, OIA has increased its focus on bringing the CNMI government into compliance with the Single Audit Act. For example, OIA created an incentive for CNMI to comply with the act by stating that an insular area cannot receive capital funding unless its government is in compliance with the act or has presented a plan, approved by OIA, that is designed to bring the government into compliance by a certain date. In addition, OIA provides general technical assistance funds for training and other direct assistance, such as grants, to help the insular area governments comply with the act and improve their financial management systems and environments. The Graduate School of the U.S. Department of Agriculture (USDA) has been working with OIA for over a decade through its Pacific Islands and Virgin Islands Training Initiatives (PITI and VITI) to provide training and technical assistance. OIA staff members make site visits to CNMI as part of the office's oversight activities. In our December 2006 report, we recommended that OIA develop a standardized framework for its site visits to improve the effectiveness of its monitoring. We also recommended that OIA develop and implement procedures for formal evaluation of progress made by the insular areas to resolve accountability findings and set a time frame for achieving clean audit opinions. OIA agreed with our recommendations and noted that it had already made some progress during fiscal year 2006. Establishing a routine procedure of documenting the results of site visits in a standard framework would help ensure that (1) all staff members making site visits are consistent in their focus on overall accountability objectives and (2) OIA staff have a mechanism for recording and following up on the unique situations facing CNMI. CNMI faces daunting economic, fiscal, and financial accountability challenges. CNMI's economic and fiscal conditions are affected by its economy's general dependence on two key industries. In addition, although progress has been made in improving financial accountability, CNMI continues to have serious internal control and accountability problems that increase its risk of fraud, waste, abuse, and mismanagement.
Efforts to meet formidable fiscal challenges in CNMI are exacerbated by delayed and incomplete financial reporting that does not provide officials with the timely and complete information they need for effective decision making. Timely and reliable financial information is especially important as CNMI continues to take actions to deal with its fiscal crisis. OIA has ongoing efforts to assist CNMI in addressing its accountability issues and to support economic development in CNMI. OIA officials monitor CNMI’s progress in submitting single audit reports, and OIA provides funding to improve financial management. Yet, progress has been slow and inconsistent. The benefit to CNMI of past and current assistance is unclear. Federal agencies and CNMI have sponsored and participated in conferences, training sessions, and other programs to improve accountability, but knowing what has and has not been effective and drawing the right lessons from this experience is hampered by a lack of formal evaluation and data collection. Strong leadership is needed for CNMI to weather its current crisis and establish a sustainable and prosperous path for the future. During 2006, the CNMI government took dramatic steps to reverse prior patterns of deficit spending. The CNMI government will need to continue to work toward long-term sustainable solutions. A focused effort is called for in which direct and targeted attention is concentrated on the challenges facing CNMI, with feedback mechanisms for continuing improvement to help CNMI achieve economic, fiscal, and financial stability. OIA plays a key role in this effort. In its comments on our December 2006 report, OIA pointed out that it provides “a crucial leadership role and can provide important technical assistance” to help CNMI and the other insular areas improve their business climates, identify areas of potential for private sector investment, and market insular areas to potential investors. It also noted that improving accountability for federal financial assistance for CNMI and other insular areas is a major priority. OIA has stated its commitment to continuing its comprehensive approach and to implementing other innovative ideas to assist CNMI and the other insular areas in continuing to improve financial management and accountability. Leadership on the part of the CNMI government and OIA is critical to addressing the challenges CNMI faces and to providing long-term stability and prosperity for this insular area. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you and other Members of the Committee may have at this time. For further information about this testimony, please contact Jeanette Franzel, Director, Financial Management and Assurance at (202) 512-9471 or [email protected], or David Gootnick, Director, International Affairs and Trade at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. The following individuals made important contributions to this report: Norma Samuel, Cheryl Clark, Anh Dang, Meg Mills, Maxine Hattery, and Emil Friberg, Jr. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. insular area of the Commonwealth of the Northern Mariana Islands (CNMI) is a self-governing commonwealth of the United States that comprises 14 islands in the North Pacific. In a December 2006 report—U.S. Insular Areas: Economic, Fiscal, and Financial Accountability Challenges (GAO-07-119)—regarding four insular areas, including CNMI, GAO identified and reported the following: (1) economic challenges, including the effect of changing tax and trade laws on their economies; (2) fiscal condition; and (3) financial accountability, including compliance with the Single Audit Act. The Chairman of the Senate Committee on Energy and Natural Resources, which requested the December 2006 report, asked GAO to present and discuss the results as they pertain to CNMI. Our summary and conclusions are based on our work performed for our December 2006 report on U.S. insular areas. For this testimony we also had available CNMI's fiscal year 2005 audited financial statements, which we have included in our review, along with some recent developments in fiscal year 2006. The Commonwealth of the Northern Mariana Islands (CNMI) faces serious economic, fiscal, and financial accountability challenges. CNMI's economy depends heavily on two industries, garment manufacturing and tourism. However, recent changes in U.S. trade law have increased foreign competition for CNMI's garment industry, while other external events have negatively affected its tourism sector. CNMI's garment industry has declined in recent years with factory closings and reduced production. The value of garment shipments to the United States dropped by more than 16 percent between 2004 and 2005 and by an estimated 25 percent in 2006. Tourism in CNMI declined sharply in the late 1990s as a result of a series of external events, including the Asian financial crisis; cancellation of Korean Air service; and fears of international crises such as the SARS epidemic, terrorism, and the Iraq war. In 2005, Japan Airlines withdrew direct flights to the capital. The fiscal condition of CNMI's government has steadily weakened from fiscal year 2001 through fiscal year 2005, as government spending has exceeded revenues each year since 2002. CNMI ended fiscal year 2005 with a deficit of $84.1 million in its governmental fund balance. CNMI's liabilities also exceed its assets for its primary government. Indicators point to a severe financial crisis in fiscal year 2006. In response, the CNMI government has implemented cost-cutting and restructuring measures, including "austerity holidays," consisting of biweekly furloughs during which government workers are not paid and many government operations are closed to reduce personnel and operating costs. CNMI's long-standing financial accountability problems include the late submission of financial audit reports, inability to achieve "clean" opinions on its financial statements from the independent financial auditors, and reports showing serious internal control weaknesses over financial reporting. Many of the auditors' findings are longstanding, going back in some cases to 1987. Federal agencies and CNMI have sponsored and participated in conferences, training sessions, technical assistance, and other programs to improve CNMI's economy, fiscal condition, and accountability. During 2006, the CNMI government took steps to reverse its prior patterns of deficit spending.
It will need to continue to work toward long-term sustainable solutions, with concentrated attention on the challenges facing the islands and feedback mechanisms for continuing improvement. Leadership on the part of the CNMI government and the Department of the Interior's Office of Insular Affairs is critical to providing long-term stability and prosperity for this U.S. insular area.
Because of the abundance of coal and its historically low cost, coal-fueled electricity generating units provide a large share of the electricity produced in the United States. In 2012, according to Energy Information Administration (EIA) data, there were 1,309 coal-fueled generating units in the United States, with a total of 309,680 megawatts (MW) of net summer generating capacity—about 29 percent of the total net summer generating capacity in the United States. In addition to coal, electricity is produced by using other fossil fuels, particularly natural gas and oil; nuclear power; and renewable sources, including hydropower, wind, geothermal, and solar. Historically, coal-fueled generating units have provided about half of the electricity produced in the United States—an amount that has declined in recent years, falling to 37 percent in 2012. To address concerns over air pollution, water resources, and solid waste, several environmental laws, including the Clean Air Act, Clean Water Act, and Resource Conservation and Recovery Act, were enacted. As required or authorized by these laws, EPA recently proposed or finalized four key regulations that will affect coal-fueled units. As outlined in table 1, these regulations are at different stages of development and have different compliance deadlines. These four regulations have potentially significant implications for public health and the environment. In particular, EPA projected that, among other benefits, CSAPR would reduce SO2 emissions by over half in covered states, reducing asthma and related human health impacts. In addition, EPA projected that MATS would reduce mercury emissions from coal-fueled electricity generating units by 75 percent, reducing the impacts of mercury on adults and children. In addition to these four regulations, on June 2, 2014, EPA proposed new regulations to reduce carbon dioxide emissions from existing fossil-fueled generating units that, if finalized, will affect the electricity industry, including coal-fueled generating units; the proposal aims for overall reductions equivalent to 30 percent below 2005 emission levels by 2030. The proposed regulations include state-specific goals for carbon dioxide emissions and guidelines for states to follow in developing, submitting, and implementing plans to achieve these goals. These plans would be due in June 2016, although, under some circumstances, a state may submit an initial plan by June 2016 and a completed plan up to 2 years later. In addition to DOE, FERC, and EPA, other key stakeholders have certain responsibilities for overseeing actions power companies take in response to the regulations and have a role in mitigating some potential adverse implications. These other stakeholders include state environmental and electricity regulators and system planners that coordinate planning decisions regarding transmission and generation infrastructure to maintain the reliable supply of electricity to consumers. System planners and operators attempt to avoid reliability problems through advance planning of transmission and, in some cases, generation resources, and by coordinating or determining operational decisions, such as which generating resources are operated to meet demand throughout the day. The role of a system planner can be carried out by individual power companies or RTOs. System planners' responsibilities include analyzing expected future changes in generation and transmission assets, such as the retirement of a generating unit; customer demand; and emerging reliability issues.
For example, once a power company notifies the system planner that it is considering retiring a generating unit, the system planner generally studies the electricity system to assess whether the retirement would cause reliability challenges and identify long- or short-term solutions to mitigate any impacts. The solutions could include building new generating units, reducing demand in specific areas, building new transmission lines or adding other equipment. DOE, EPA, and FERC have taken initial steps to implement the recommendation we made in our July 2012 report that these agencies develop and document a formal, joint process to monitor industry progress in responding to the four EPA regulations. Since that time, DOE, EPA, and FERC have taken initial steps collectively and individually to monitor industry progress responding to EPA regulations including jointly conducting regular meetings with key industry stakeholders. However, recent and pending actions on the four existing regulations, as well as EPA’s recently proposed regulations to reduce carbon dioxide emissions from existing generating units may require additional monitoring efforts, according to DOE, EPA, and FERC officials. DOE, EPA, and FERC have taken initial steps to implement the recommendation we made in our July 2012 report. In that report we found the agencies had undertaken individual monitoring efforts of varied scale and scope and engaged in informal coordination, but lacked a formal documented process for routinely monitoring industry progress toward compliance with the regulations. As such, we recommended that these agencies develop and document a formal, joint process to monitor industry progress in responding to EPA regulations. We concluded that such a process was needed until at least 2017 to monitor the complexity of implementation and extent of potential effects on price and reliability. Since that time, DOE, EPA, and FERC have taken initial steps collectively to monitor industry progress responding to EPA regulations including jointly conducting regular meetings with key industry stakeholders. Currently, these monitoring efforts are primarily focused on industry implementation in regions with a large amount of capacity that must comply with the MATS regulation—the only one of the four regulations that has taken effect. According to EPA officials, DOE, EPA, and FERC officials have met three times since our July 2012 report to coordinate the efforts under way at each agency to monitor industry’s progress implementing the MATS regulation and other related issues, including EPA’s development of recently proposed regulations to reduce carbon dioxide emissions from existing generating units. In addition, in May 2013, staff from DOE, EPA, and FERC jointly developed a coordination memorandum that was intended to identify how the agencies would work together to address the potential effects of EPA’s regulations on reliability. According to one EPA official, the memorandum was intended to be an evolving document that the agencies would revisit as appropriate, for example, as additional EPA regulations are finalized. 
In addition, EPA has facilitated meetings between the agencies and officials from four RTOs: PJM Interconnection, which serves all or parts of Delaware, Illinois, Indiana, Kentucky, Maryland, Michigan, New Jersey, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and the District of Columbia; Midcontinent ISO, which serves parts of Arkansas, Illinois, Indiana, Iowa, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Montana, North Dakota, South Dakota, Texas, and Wisconsin, as well as the Canadian province of Manitoba; the Southwest Power Pool, which serves parts of Arkansas, Kansas, Louisiana, Mississippi, Missouri, Nebraska, New Mexico, Oklahoma, and Texas; and the Electric Reliability Council of Texas, which serves parts of Texas. Through these meetings, the agencies obtained information on the control equipment in use and retrofit plans, and other information such as reliability assessments under way in the region. As part of these meetings, officials told us that the RTOs provided information of varying levels of detail to the agencies, including information on retirement notifications and associated impacts as determined by the reliability studies completed by the RTOs; the status and findings of reliability assessments they conduct; data on the generating capacity of units with planned, announced, or completed retirements and retrofits; and data on planned outages. RTO officials told us they each gathered information about the plans for generating units in the areas they oversee. Officials from several RTOs told us that they gathered this information by surveying owners of generating units to identify, among other things, information on decisions related to retiring or retrofitting specific generating units. According to EPA officials, the agencies' monitoring and technical assistance efforts are primarily focused on implementation of the MATS requirements because that regulation has taken effect and includes requirements that must be achieved within well-defined time frames. The MATS regulation was finalized in February 2012 and calls for a 3-year compliance period for existing generating units, with a deadline of April 16, 2015, but permitting authorities may provide an extra year for certain generating units that request additional time to comply. Agency officials and stakeholders told us that state agencies are generally providing the 1-year extension for generating units—giving these units a total of 4 years to comply. In addition, according to the National Association of Clean Air Agencies (NACAA), as of May 2014, all but 9 of over 100 requests for extensions were granted by the state permitting agencies. In addition to the MATS extension, EPA also provided a mechanism to allow certain units—generating units that are needed to address specific and documented reliability concerns—to request an additional year to come into compliance through the use of Clean Air Act administrative orders, which, if granted, would provide a total of 5 years to comply. According to EPA officials, compliance with the MATS requirements has been less challenging for industry than anticipated, and operators have generally been able to undertake retrofits as part of scheduled maintenance outages; however, certain retrofits, such as the installation of a fabric filter, will require additional or longer outages to be completed. According to EPA officials, whether a plant will need to schedule outages for retrofits will depend on a number of factors, including the type of controls required for compliance. EPA officials told us they anticipate few administrative orders to be requested.
However, if EPA receives a request for an administrative order, EPA has stated in its policy that it will rely on the advice and counsel of reliability experts, including FERC, to identify and analyze reliability risks, but EPA officials will make the final decision on these requests. In May 2012, FERC issued a policy statement detailing how it intends to provide advice to EPA on such requests. In addition to participating in the EPA-facilitated meetings with industry and reviewing information provided from the RTOs through those meetings, DOE, FERC, and EPA have taken other steps to individually monitor or support industry progress implementing EPA regulations. DOE. DOE is offering technical assistance to state public utility commissioners, generating unit owners and operators, and utilities on implementing the new and pending EPA regulations affecting the electric utility industry. Specifically, according to DOE officials and documents, DOE may provide technical information on cost and performance of the various retrofit control technologies; technical information on generation or transmission alternatives for any replacement power needed for retiring generating units; and assistance to public utility commissions regarding any regulatory evaluations or approvals they may have to make on utility compliance strategies. According to agency officials, while DOE offers technical assistance on implementing new and pending EPA rules, DOE has received limited requests for such assistance. EPA. According to EPA officials, EPA has conducted outreach to ensure state agencies understand their ability to provide MATS extensions, and EPA officials also review information from NACAA on the status of MATS extension requests. In addition, EPA has updated its power sector modeling tool—a model EPA uses to analyze the impact of policies, regulations, and legislative proposals on the power sector—to reflect MATS requirements along with changes in other market conditions. FERC. FERC officials told us that they monitor information from several sources, including NERC reliability assessments, EIA data on capacity additions, and information from NACAA on the status of MATS extension requests. In addition, FERC obtained industry information on reliability challenges through a technical conference that it convened to examine the effect of recent cold weather events on the RTOs. Recent and pending actions on the four existing regulations, as well as EPA's recently proposed regulations to reduce carbon dioxide emissions from existing generating units, may require additional agency effort to monitor industry's progress in responding to the regulations and any potential impacts on reliability. DOE, EPA, and FERC officials told us that, in light of these changes, their coordination efforts may need to be revisited. Specifically, one EPA official noted that the agencies may need to reexamine their coordination efforts, as appropriate, in light of changing conditions, including newly proposed EPA regulations. In addition, according to FERC officials, since not all the regulations have been finalized, conditions will continue to change, making continued monitoring of potential reliability or resource adequacy challenges important.
Furthermore, in April 2014, a FERC Commissioner testified before Congress about concerns and uncertainty related to potential reliability and price impacts associated with environmental regulations. Specifically, the Commissioner expressed concerns about the reliability of data on which generating units are retiring and the resources to replace those retiring generating units, and called for a more formal review process, including FERC, EPA, and others, to analyze the specific details of retiring units, as well as the new units and new transmission that will be needed to manage the transition and ensure reliability of the nation's electricity sector. RTO officials and other industry stakeholders also told us that recent and pending actions on regulations could have impacts on the industry's ability to reliably deliver electricity. Officials from several RTOs told us that, while widespread reliability concerns are not anticipated, some regions may face reliability challenges, including challenges associated with increasing reliance on natural gas. Officials from several RTOs said that their efforts to monitor reliability impacts will include evaluating the recently proposed regulations to reduce carbon dioxide emissions, which may present challenges in the future. In addition, officials from one RTO told us that compliance with new and proposed EPA regulations and an evolving generation portfolio will have significant effects on the industry's ability to reliably deliver electricity. Officials from this RTO reported that their region is forecasting shortfalls in its reserve margin—additional capacity that exceeds the maximum expected demand to provide for potential backup—in some areas. In addition, these RTO officials and industry stakeholders noted that retirement of coal-fueled generating units may lead to increasing reliance on natural gas, as these generating units are replaced with natural gas-fueled generating units, which will require construction of new pipeline and storage infrastructure. As a result, according to officials from one RTO, their region has increased coordination with the natural gas industry through a stakeholder forum and a series of gas infrastructure studies. These officials said that, while relying on natural gas to generate electricity has not historically negatively affected reliability, greater reliance on natural gas may require more consideration of potential fuel-related future reliability challenges. RTO officials and other industry stakeholders also told us that recent and pending actions on regulations could have impacts on electricity prices. For example, industry stakeholders told us that the retirements that are occurring or planned are significant and could lead to increased electricity rates in some regions. In addition, as we reported in July 2012, the studies we reviewed estimated that increases in electricity prices could vary across the country, with one study projecting a range of increases from 0.1 percent in the Northwest to 13.5 percent in parts of the South more dependent on electricity generated from coal. Officials from several RTOs told us that, while they analyze the potential reliability impacts of specific generating units that power companies are considering retiring, they do not analyze the potential market impacts of these retirements on electricity prices or other market factors.
In addition, several RTO officials told us they cannot estimate the impacts of these potential retirements on the markets due to the number of factors involved in determining market prices and affecting markets. Based on our discussions with agency officials, FERC, DOE, and EPA are not evaluating the potential impacts of planned retirements or retrofits on electricity prices as part of their monitoring efforts. However, EPA officials told us that the agency uses its power sector modeling tool to analyze the potential impact of new regulations on economic factors, including electricity prices, and has used the tool to examine the potential impact of the new carbon rule in an analysis that reflected publicly announced retirements and retrofits at the time. In its analysis of the recently proposed regulations to reduce carbon dioxide emissions from existing generating units, EPA projected an increase in the national average retail electricity price of between 5.9 percent and 6.5 percent in 2020 compared with its base case estimate. According to our analysis, power companies plan to retire a greater percentage of coal-fueled net summer generating capacity and retrofit less capacity with environmental controls than the estimates we reported in July 2012. Specifically, our analysis indicates that power companies retired or plan to retire about 13 percent of coal-fueled net summer generating capacity (42,192 MW) from 2012 through 2025, which exceeds the estimates of 2 to 12 percent of capacity we reported in 2012. In addition, power companies have planned or completed some type of retrofit on about 70,000 MW of net summer generating capacity to reduce SO2, NOx, or particulate matter from 2012 through 2025, which is less than estimates we reported in 2012. In addition to our analysis of publicly announced retirements and retrofits, RTO officials told us that power companies may take additional steps and provided information on generating units that owners may retire or retrofit; specifically, about 7,000 MW of additional capacity from 46 generating units may be retired from 2012 through 2025, beyond what we identified in our analysis of SNL data. According to our analysis of SNL data, planned retirements of coal-fueled generating units appear to have increased and are above the high end of the estimates we reported in July 2012. Specifically, power companies retired or plan to retire about 13 percent of coal-fueled net summer generating capacity (42,192 MW from 238 units) from 2012 through 2025. When we reported in July 2012, projections suggested that 2 to 12 percent of coal-fueled capacity may be retired. Based on our analysis of SNL data, power companies retired 100 coal-fueled units from January 2012 to May 2014 with a total of 14,887 MW of net summer generating capacity. In addition, based on our analysis of SNL data, power companies have reported plans to retire an additional 138 coal-fueled units with a total of 27,306 MW of net summer generating capacity from June 2014 through 2025. Another recent review also identified higher projected retirements of coal-fueled capacity than estimates we reported in July 2012. Specifically, in April 2014, EIA projected that retirements from 2012 through 2020 could reach approximately 50,000 MW, or about 16 percent, of net summer generating capacity available at the end of 2012.
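The capacity-share figures above follow from simple arithmetic on the MW totals cited in this report. The sketch below is a minimal illustration of that arithmetic, not the report's actual SNL analysis; the 1 MW difference from the 42,192 MW total reflects rounding in the source data.

```python
# Recompute the retirement share cited above from the report's own figures.
# This is an illustration of the arithmetic, not GAO's actual SNL analysis.

retired_jan_2012_to_may_2014_mw = 14_887   # 100 units already retired
planned_jun_2014_to_2025_mw = 27_306       # 138 units with announced plans
total_retiring_mw = retired_jan_2012_to_may_2014_mw + planned_jun_2014_to_2025_mw

coal_capacity_2012_mw = 309_680            # 2012 coal net summer capacity (EIA)

share = total_retiring_mw / coal_capacity_2012_mw
print(f"{total_retiring_mw:,} MW retiring, {share:.1%} of 2012 coal capacity")
# -> 42,193 MW retiring, 13.6% of 2012 coal capacity
#    (report: 42,192 MW, "about 13 percent")
```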
Consistent with the reasons we had reported for retirements in 2012, some stakeholders we interviewed said that some of these projected retirements may have occurred without the environmental regulations. Specifically, these stakeholders noted that several industry trends may be contributing to the retirement of coal-fueled generating units, including relatively low natural gas prices, increasing prices for coal, and low expected growth in demand for electricity. In addition, in June 2012, we reported that operators of some coal-fueled generating units had entered into agreements with EPA to retire or retrofit units to settle EPA enforcement actions. However, we also reported in July 2012 that, according to some stakeholders, the new environmental regulations may accelerate retirements because power companies may not want to invest in retrofitting units with environmental controls for those units they expect to retire soon for other reasons. About three-quarters of the retirements we identified in our analysis of SNL data are expected to occur by the end of 2015, corresponding to the initial April 2015 MATS compliance deadline (see fig. 1). This level of retirements significantly exceeds what has occurred in the past; for example, according to our analysis, between 2000 and 2011, 150 coal-fueled units with a total net summer generating capacity of 13,786 MW were retired. According to our analysis of SNL data, the units that power companies have retired or plan to retire are generally older, smaller, and more polluting, and this is generally consistent with what we reported in October 2012. In addition, we found that many of the units that companies have retired or plan to retire are those that are not used extensively and are geographically concentrated, with some exceptions. Specifically, we found the following: Older. Generating units that power companies have retired or plan to retire are generally older. The fleet of operating coal-fueled units was built over many decades, with most of the capacity currently in service built in the 1970s and 1980s. In particular, about 80 percent of the net summer generating capacity that power companies retired or plan to retire from 2012 through 2025 is from units placed in service prior to 1970 (33,419 MW from 213 of the 238 units). However, SNL data indicate that power companies retired or plan to retire some newer generating units, including one generating unit placed into service in 2008. Smaller. Generating units that power companies have retired or plan to retire are generally smaller. Smaller generating units are generally less fuel efficient than larger units and can be more expensive to retrofit, maintain, and operate on a per-MW basis. In particular, smaller units—those less than 300 MW—comprise about 63 percent of the net summer generating capacity that power companies retired or plan to retire from 2012 through 2025 (26,659 MW from 208 of the 238 units). However, some larger generating units are also planned for retirement. In particular, according to our analysis, power companies retired 4 generating units with a net summer generating capacity of over 300 MW from 1990 to 2012, and they retired or plan to retire about 30 such generating units from 2012 through 2025. More polluting. Generating units that power companies retired or plan to retire over the next 3 years emit air pollutants such as SO2 at generally higher rates than the remaining fleet.
According to our analysis, units that were retired or are planned for retirement from 2014 through 2017 emitted on average almost three times as much SO2 per unit of fuel used at the generating unit in 2013 as units that are not planned for retirement. Similarly, units that were retired or are planned for retirement from 2014 through 2017 emitted on average about 41 percent more NOx per unit of fuel used at the generating unit in 2013 than units not planned for retirement. Not used extensively. Most generating units that power companies have retired or plan to retire have not been extensively used in recent years, but other units were used more often. Specifically, according to our analysis, units that operated the equivalent of less than half of the hours they were available over the past few years make up about 70 percent of the net summer generating capacity that power companies retired or plan to retire from 2012 through 2025 (30,000 MW from 186 of the 238 units). However, data also indicate that about 13 of the 238 units that companies retired or plan to retire—which represent about 4,200 MW of net summer generating capacity—operated the equivalent of 70 percent or more of the hours they were available over the past few years. Geographically concentrated. Generating units that power companies have retired or plan to retire are concentrated in certain states (see fig. 2). Specifically, about 38 percent of the net summer generating capacity that power companies retired or plan to retire from 2012 through 2025 is located in four states—Ohio (14 percent), Pennsylvania (11 percent), Kentucky (7 percent), and West Virginia (6 percent). In particular, figure 2 shows how completed or planned retirements from 2012 through 2025 are distributed nationwide and how these are concentrated in certain areas. According to our analysis of SNL data, completed or planned retrofits of coal-fueled generating units include less capacity than estimates we reported in July 2012. These retrofits include the use of a wide range of the technologies we reported at that time. As noted in our July 2012 report, operators of generating units were expected to rely on the combined installation of several technologies to comply with the regulations. These technologies include: (1) fabric filters or electrostatic precipitators to control particulate matter; (2) flue gas desulfurization units—also known as scrubbers—or dry sorbent injection units to control SO2 and acid gas emissions; (3) selective catalytic reduction or selective noncatalytic reduction units to control NOx; and (4) activated carbon injection units to reduce mercury emissions. Appendix I includes a description of these controls, how they operate, and their potential capacity to remove pollutants. Our analysis of SNL data indicates that power companies have either installed or expect to install a scrubber—generally intended to reduce SO2—on about 34,000 MW of net summer generating capacity from 2012 through 2025, an effort that, as we reported in July 2012, has typically been costly and can take some time to complete. In addition, retrofits to reduce particulates have been completed or are planned on about 20,000 MW of capacity, including about 17,000 MW with completed or planned installations of fabric filters known as "baghouses." By comparison, in July 2012, we reported that several studies forecasted the steps generating unit owners would take to retrofit units.
In particular, EPA estimated that, in response to MATS, companies would retrofit 102,000 MW of generating capacity with fabric filters and 83,000 MW with new scrubbers or scrubber upgrades. In addition, a study by NERC, which collectively examined early versions of all four regulations in 2011, estimated that 576 units that account for about 234,371 MW of capacity would be retrofitted by the end of 2015. We identified two key characteristics of the units that power companies have retrofitted or plan to retrofit, as follows: Larger. Most of the net summer generating capacity that has been or is planned to be retrofitted—about 68 percent—is at larger units with capacities greater than 500 MW. Geographically concentrated. A large share of the net summer generating capacity that has been or is planned to be retrofitted—about 36 percent—is at generating units located in four states: Illinois, Indiana, Kansas, and Texas. In addition, some states have completed or plan to complete more retrofits than others. In particular, in each of seven states (Kansas, Louisiana, New Hampshire, New Mexico, Oregon, South Dakota, and Washington), more than half of the net summer generating capacity located in the state has been or is planned to be retrofitted. Based on information provided by RTOs, power companies may be considering retiring or retrofitting some additional generating units. In particular, RTO officials provided information on additional generating capacity that power companies have either announced plans to retire or retrofit, or are in the process of considering for a retirement or retrofit. In particular, RTOs identified about 46 coal-fueled generating units that account for about 7,000 MW of additional generating capacity that may be retired from 2012 through 2025, beyond what we identified in our analysis of SNL data. In addition, RTOs identified a total of 260 units that account for about 108,000 MW of generating capacity that have completed or may undertake a retrofit from 2012 through 2025, which may include the capacity identified in our analysis. The electricity sector is in the midst of a significant transition as power companies face decisions on the future of coal-fueled electricity generating units in light of new regulations and changes in the market, such as recent low prices for natural gas. Even though compliance deadlines for three of the regulations remain uncertain, power companies have already identified retirements beyond the range of estimates we reported in 2012. Reliable electricity remains critically important to U.S. homes and businesses and is itself reliant upon the availability of sufficient generating capacity. DOE, EPA, and FERC have taken initial steps to implement our recommendation to establish a joint process to monitor industry's progress in responding to the four EPA regulations and other factors. However, stakeholders, including a FERC Commissioner, continue to express concerns about reliability and electricity prices. Furthermore, proposed regulations focused on reducing emissions of carbon dioxide from the electricity sector, when finalized, may pose additional challenges for coal-fueled generating units. The initial coordination efforts now under way across the three agencies are an important tool for understanding and monitoring the potential effects of EPA regulations and other factors on the electricity sector.
However, consistent with our recommendation in 2012, careful monitoring and coordination by the federal agencies, incorporating the views of other stakeholders such as RTOs, will be even more important over the next several years as key regulations are finalized and implemented. We are not making new recommendations in this report. We provided a draft of this report to DOE, EPA, and FERC for review and comment. In written comments from DOE, EPA, and FERC, reproduced in appendixes II, III, and IV, respectively, the three agencies generally concurred with our analysis. The agencies stated that they will continue to monitor the progress of industry implementation of the regulations and coordinate with one another to address potential reliability challenges. Specifically, DOE stated that these coordination efforts have primarily focused on MATS and may be revisited as they work with industry to monitor compliance with other EPA regulations. EPA stated that it will monitor compliance with all of the rules, as appropriate, to ensure that reliability is not put at risk. FERC stated that it is working with industry to explore reliability issues stemming from new and pending environmental rules for the power sector, and that it will continue to monitor industry's progress implementing these rules and will coordinate with DOE, EPA, and industry. We continue to believe it is important that these agencies jointly monitor industry's progress in responding to the EPA regulations and fully document these steps as we recommended in 2012. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Administrator of the EPA, the Chairman of FERC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. Appendix I describes the retrofit control technologies and how they work:
Electrostatic precipitator: An induced electrical charge removes particles from flue gas.
Fabric filter (commonly referred to as a "baghouse"): Flue gas passes through tightly woven fabric filter "bags" that filter out the particulates.
Flue gas desulfurization unit (commonly referred to as a "scrubber"): Wet flue gas desulfurization units inject a liquid sorbent slurry, such as a limestone slurry, into the flue gas to form a wet solid that can be disposed of or sold. Dry flue gas desulfurization units inject a dry sorbent, such as lime, into the flue gas to form a solid byproduct that is collected.
Combustion controls: Coal combustion conditions are adjusted so less NOx is formed.
Selective catalytic reduction (SCR) and selective noncatalytic reduction (SNCR) units: For SCR, ammonia is injected into flue gas to react with NOx to form nitrogen (N2) and water, using a catalyst to enhance the reaction. For SNCR, ammonia or urea is injected into flue gas to react with NOx as well, but no catalyst is used.
Activated carbon injection units: Powdered activated carbon sorbent is injected into flue gas, binds with mercury, and is collected in a particulate matter control device.
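For reference, the SCR chemistry sketched above corresponds to the standard overall reaction commonly cited in the emissions control literature; the stoichiometry below is supplied here for illustration and does not appear in the appendix itself.

```latex
% Standard overall SCR reaction: ammonia reduces NO to nitrogen and water
% over a catalyst.
\[
  4\,\mathrm{NO} + 4\,\mathrm{NH_3} + \mathrm{O_2}
  \;\longrightarrow\;
  4\,\mathrm{N_2} + 6\,\mathrm{H_2O}
\]
```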
In addition to the individual named above, Jon Ludwigson (Assistant Director), Janice Ceperich, Margaret Childs, Philip Farah, Quindi Franco, Cindy Gilbert, Richard Johnson, Armetha Liles, and Alison O’Neill made key contributions to this report.
EPA recently proposed or finalized four regulations affecting coal-fueled electricity generating units, which provide about 37 percent of the nation's electricity supply. These regulations are (1) the Cross-State Air Pollution Rule; (2) the Mercury and Air Toxics Standards; (3) the Cooling Water Intake Structures regulation; and (4) the Disposal of Coal Combustion Residuals regulation. In 2012, GAO reported that, in response to these regulations and other factors such as low natural gas prices, companies might retire or retrofit some units. GAO reported that these actions may increase electricity prices and, according to some stakeholders, may affect reliability—the ability to meet consumers' demand—in some regions. In 2012, GAO recommended that DOE, EPA, and FERC develop and document a formal, joint process to monitor industry's progress responding to these regulations. In June 2014, EPA proposed new regulations to reduce carbon dioxide emissions that will also affect these units. GAO was asked to update its 2012 report. This report examines (1) agencies' efforts to respond to GAO's recommendation and (2) what is known about planned retirements and retrofits. GAO reviewed documents, analyzed data, and interviewed agency officials and stakeholders. The Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Federal Energy Regulatory Commission (FERC) have taken initial steps to implement a recommendation GAO made in 2012 that these agencies develop and document a joint process to monitor industry's progress in responding to four proposed or finalized EPA regulations affecting coal-fueled generating units. GAO concluded that such a process was needed until at least 2017 to monitor the complexity of implementation and extent of potential effects on price and reliability. Since that time, DOE, EPA, and FERC have taken initial steps to monitor industry progress responding to EPA regulations, including jointly conducting regular meetings with key industry stakeholders. Currently, these monitoring efforts are primarily focused on industry's implementation of one of four EPA regulations—the Mercury and Air Toxics Standards—and the regions with a large amount of capacity that must comply with that regulation. Agency officials told GAO that, in light of EPA's recent and pending actions on regulations, including those to reduce carbon dioxide emissions from existing generating units, these coordination efforts may need to be revisited. According to GAO's analysis of public data, power companies now plan to retire a greater percentage of coal-fueled generating capacity and retrofit less capacity with environmental controls than the estimates GAO reported in July 2012. About 13 percent of coal-fueled generating capacity—42,192 megawatts (MW)—has either been retired since 2012 or is planned for retirement by 2025, which exceeds the estimates of 2 to 12 percent of capacity that GAO reported in 2012 (see fig.). The units that power companies have retired or plan to retire are generally older, smaller, more polluting, and not used extensively, with some exceptions. For example, some larger generating units are also planned for retirement. In addition, the capacity is geographically concentrated in four states: Ohio (14 percent), Pennsylvania (11 percent), Kentucky (7 percent), and West Virginia (6 percent).
GAO's analysis identified about 70,000 MW of generating capacity that has either completed some type of retrofit to reduce sulfur dioxide, nitrogen oxides, or particulate matter since 2012 or is planned for such a retrofit by 2025, which is less than the estimate of 102,000 MW GAO reported in 2012. GAO is not making new recommendations but believes it is important that these agencies jointly monitor industry progress and fully document these steps as GAO recommended in 2012. The agencies concurred with GAO's findings.
The Safe Drinking Water Act established a federal-state arrangement in which states may be delegated primary implementation and enforcement authority ("primacy") for the drinking water program. Except for Wyoming and the District of Columbia, all states and territories have received primacy. For contaminants that are known or anticipated to occur in public water systems and that the EPA Administrator determines may have an adverse impact on health, the act requires EPA to set a nonenforceable maximum contaminant level goal (MCLG) at which no known or anticipated adverse health effects occur and that allows an adequate margin of safety. Once the MCLG is established, EPA may set an enforceable standard for water as it leaves the treatment plant, the maximum contaminant level (MCL). The MCL generally must be set as close to the MCLG as is feasible using the best technology or other means available, taking costs into consideration. Alternatively, EPA can establish a treatment technique, which requires a treatment procedure or level of technological performance to reduce the level of the contaminant. The fact that lead contamination occurs after water leaves the treatment facility has complicated efforts to regulate lead in the same way as most other drinking water contaminants. In 1975, EPA established an interim MCL for lead of 50 parts per billion (ppb) but did not require sampling of tap water to show compliance with the standard. Rather, the standard had to be met at the water system before the water was distributed. The 1986 amendments to the act directed EPA to issue a new lead regulation, and in 1991, EPA adopted the Lead and Copper Rule. Instead of an MCL, the rule established an "action level" of 15 ppb for lead in drinking water. To reduce the amount of lead entering the water as it flows through distribution lines and home plumbing to customers' taps, the rule required that water systems, if needed, treat the water to limit its corrosiveness. Under the rule, the action level is exceeded if lead levels are higher than 15 ppb in over 10 percent of tap water samples; equivalently, the 90th percentile sample result exceeds 15 ppb. Large systems, including WASA's, generally must take at least 100 tap water samples in a 6-month monitoring period, though reduced monitoring schedules are also allowed for some systems. If a water system exceeds the action level, it has 60 days to deliver a public education program that meets EPA requirements, including a notice in customers' water bills; delivery of public service announcements to television and radio stations; and the distribution of information to locations likely to serve populations vulnerable to lead exposure, including hospitals, clinics, and local welfare agencies. In addition, if lead levels exceed the action level after treatment to minimize water's corrosiveness, the water system must annually replace 7 percent of the lead service lines under its ownership and offer to replace the private portion of the lead service line (at the owner's expense) until the tap water 90th percentile lead levels drop below the action level for two consecutive 6-month monitoring periods.
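To make the action-level test concrete, the sketch below shows one way to compute a 90th percentile result and compare it with the 15 ppb action level. The sample values are invented for illustration, and the percentile convention used (sorting results and taking the value at the position equal to 0.9 times the sample count) is a simplifying assumption rather than the rule's exact procedure for every system size.

```python
import math

ACTION_LEVEL_PPB = 15

def ninetieth_percentile(samples_ppb):
    """Sort sample results and take the value at the position equal to
    0.9 times the sample count (a simplifying assumption; the rule
    spells out the exact procedure for each system size)."""
    ordered = sorted(samples_ppb)
    position = math.ceil(0.9 * len(ordered))  # 1-based position
    return ordered[position - 1]

def exceeds_action_level(samples_ppb):
    """The action level is exceeded when lead is above 15 ppb in more
    than 10 percent of samples, i.e., when the 90th percentile result
    itself is above 15 ppb."""
    return ninetieth_percentile(samples_ppb) > ACTION_LEVEL_PPB

# Hypothetical results (ppb) from one monitoring period -- not WASA data.
samples = [2, 3, 5, 8, 10, 12, 14, 18, 40, 59]
print(ninetieth_percentile(samples))   # 40
print(exceeds_action_level(samples))   # True
```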
Drinking water is provided to District of Columbia residents under a unique organizational structure: the U.S. Army Corps of Engineers' Washington Aqueduct draws water from the Potomac River and filters and chemically treats it to meet EPA specifications. The aqueduct produces drinking water and sells it to utilities that serve approximately 1 million people living or working in or visiting the District of Columbia; Arlington County, Virginia; and Falls Church, Virginia. Managed by the Corps of Engineers' Baltimore District, the aqueduct is a federally owned and operated public water supply agency that produces an average of 180 million gallons of water per day at two treatment plants located in the District. The District of Columbia Water and Sewer Authority buys its drinking water from the Washington Aqueduct and distributes it through 1,300 miles of water mains to customers in the District and several federal facilities in Virginia. From its inception in 1938 until 1996, WASA's predecessor, the District of Columbia Water and Sewer Utility Administration, was a part of the District's government. In 1996, WASA was established by the District of Columbia as a semiautonomous regional entity. EPA's Region III Office in Philadelphia has primary oversight and enforcement responsibility for public water systems in the District of Columbia. According to EPA, the regional office's oversight and enforcement responsibilities include providing technical assistance to the water suppliers on how to comply with federal regulations; ensuring the suppliers report monitoring results to EPA by the required deadlines; taking enforcement actions if violations occur; and using those enforcement actions to return the system to compliance in a timely fashion. The District's Department of Health, while having no formal role under the act, has the mission of identifying health risks and educating the public on those risks. In August 2002, WASA officially reported to EPA that drinking water in the District of Columbia exceeded the action level for lead. This report triggered the Lead and Copper Rule's requirement to deliver a public education program within 60 days and to replace lead service lines at a minimum rate of 7 percent per year. Because WASA and property owners in the District share ownership of the water service lines, the rule required WASA to replace the portion of the lines that it owns and to offer to replace the portion of the lines controlled by the homeowners at the homeowners' expense. Under the Lead and Copper Rule, water systems get credit for lead service line replacement either by actually replacing lines or by finding homes with lead service lines that test under the 15 ppb action level. For fiscal year 2003, WASA decided to physically replace and test lead service lines concurrently. WASA reported that it tested 4,613 homes with lead service lines in fiscal year 2003, and found 1,241 homes at or below the 15 ppb action level but another 3,372 homes with water exceeding the action level. Local media made these results public in January 2004. EPA began a special audit of WASA's compliance with the Lead and Copper Rule in February 2004. This audit resulted in a consent order that EPA and WASA signed on June 17, 2004. Congress held a number of hearings in 2004 to investigate drinking water problems in the District. WASA and other government agencies implementing the act's regulations for lead have taken steps to improve their coordination. According to EPA officials, WASA has thus far met the terms of the order the two agencies signed, which required WASA to take a number of corrective actions. WASA has also agreed to implement most recommendations that the D.C.
Inspector General made in a January 2005 report to develop internal policies and procedures at WASA that would improve the coordination between EPA, WASA, and the D.C. Department of Health. Improved coordination, however, has not resolved all problems, and EPA and WASA officials remain concerned that drinking water WASA provides still exceeds the action level for lead of 15 parts per billion. Under the June 2004 consent order, WASA agreed to take several actions to improve its compliance with the Lead and Copper Rule and, in so doing, enhanced its coordination with EPA and the D.C. Department of Health. The order required WASA to improve its selection of sampling locations and reporting of water testing results to EPA; create a strategy to improve its public education efforts; physically replace an additional 1,615 lead service lines by the end of fiscal year 2006; develop a plan and a schedule to identify additional lead service lines; and, in collaboration with the D.C. Department of Health, develop a plan to set priorities for replacing lead service lines. According to staff in EPA's Region III, WASA appears to be on track to meet the terms of the order. Table 1 identifies some principal requirements of the order and notes the status of WASA's compliance as of January 18, 2005. WASA also agreed to implement 11 of the 12 recommendations contained in the D.C. Inspector General's January 2005 report. The D.C. Inspector General found that WASA had not developed or maintained internal policies or procedures for implementing requirements set forth in the Lead and Copper Rule, including those for selecting and reporting lead water sample test results. However, the D.C. Inspector General concluded that WASA's current initiatives on lead concentrations in the District's tap water were noteworthy; he also made 12 recommendations to improve WASA's annual monitoring, lead service line replacement, and communication. WASA agreed to all of the Inspector General's recommendations except one: to develop a memorandum of understanding (MOU) with the D.C. Department of Health that defines both agencies' roles and responsibilities, the expert advice each agency can provide in the areas of water quality management, and the frequency and manner of transmission of information between the agencies. WASA did not agree that an MOU was necessary to ensure effective cooperation; it noted that its relationship with the D.C. Department of Health has vastly improved, reflecting a more creative and flexible partnership, and that the range of substantive issues on which the two agencies must communicate is wide, diverse, and complex. While we agree that WASA's relationship with the D.C. Department of Health has improved, we nonetheless agree with the Inspector General's view that an MOU would serve to define the two agencies' roles and responsibilities and help improve their coordination and partnership. Despite improved coordination, the central problem remains: lead in D.C. drinking water is still over the EPA action level. In February 2004, EPA formed a Technical Expert Working Group made up of representatives from WASA; EPA; CDC; the Washington Aqueduct; Arlington and Falls Church, Virginia; the D.C. Department of Health; and industry consultants. Industry experts traced the likely cause of the increased lead levels to November 2000.
At that time, the Washington Aqueduct changed its secondary disinfectant treatment from free chlorine to chloramines to comply with a new EPA regulation that placed strict limits on disinfection by-products. This change in water treatment may have had the unintended consequence of making the corrosion control treatment that was in place no longer adequately protective. As a result, lead levels increased in water exposed to lead-containing plumbing and fixtures. The group recommended the introduction of orthophosphate to the drinking water supply because it concluded that this chemical would form a protective coating inside lead service lines and fixtures to prevent lead from leaching into drinking water. To assess the effect of orthophosphate on the water distribution system, in May 2004, EPA approved the Washington Aqueduct's request to apply the corrosion inhibitor to a portion of the District of Columbia drinking water distribution system, and the corrosion inhibitor was introduced in June 2004. This portion, called the 4th High Pressure Zone, is hydraulically isolated from the remainder of the system. In early August 2004, based on the results of the partial system test, EPA approved the Washington Aqueduct's request for broader use of the corrosion inhibitor, and on August 23, 2004, the inhibitor was introduced systemwide. On January 10, 2005, WASA submitted to EPA its latest tap water sampling results, covering tap water samples taken from July through December 2004. These results showed that the 90th percentile sample reached 59 ppb, still substantially over the 15 ppb action level for lead. However, EPA and WASA officials report that some reductions of lead levels occurred in the latter half of the monitoring period. WASA data show that 42 samples taken during July through September 2004 had a 90th percentile reading of 82 ppb, while 88 samples taken during October through December 2004 had a 90th percentile reading of 31 ppb. According to EPA, experts have said that it can take 6 months or more to begin seeing a drop in lead levels and a year or more for the orthophosphate treatment to reduce lead levels below the EPA action level. WASA is identifying those most at risk for exposure to lead in drinking water by updating its inventory of lead service lines. To reduce the exposure of District residents to lead in drinking water, WASA is accelerating its rate of lead service line replacement and providing priority replacement of lead service lines for populations particularly vulnerable to the health effects of lead. However, questions remain about the success of the lead service line replacement program, because WASA is replacing only part of the lead service line unless customers pay to have their portion replaced. WASA and EPA officials are focusing on lead service lines as the primary source of lead in drinking water in the District of Columbia. Locating these lines allows WASA to identify the people most likely to be exposed. The June 2004 consent order that WASA signed with EPA Region III requires WASA to update its baseline inventory of lead service lines each year. WASA must use this baseline inventory to calculate the 7 percent of lines it replaces each year. In September 2004, WASA revised its baseline inventory to 23,637 lead service lines and reported this number to EPA. However, at that time WASA did not know the composition of 31,380 service lines.
The order requires WASA to provide a strategy and timetable for identifying the composition of these unknown lines. During fiscal year 2005, WASA plans to determine the composition of 1,200 unknown lines by digging up or testing a segment of each line. Figure 1 shows the inventory of WASA's service lines as of October 1, 2004. To speed the process of identifying the composition of unknown lines, WASA is attempting to develop a methodology to identify the composition without physically digging up the line. WASA plans to statistically analyze line composition data from test pits dug in 2003 through 2005 along with known quantities about each excavated line: the date of service line construction, the water test result for lead, and the size of the service line. WASA hopes that these known quantities can be used to determine the composition of unknown lines; a simplified illustration of this kind of analysis appears below. WASA plans to complete this analysis by August 1, 2005. To reduce residents' exposure to lead in drinking water, WASA is accelerating its schedule for replacing lead service lines. WASA's Board of Directors decided to replace all lead service lines in public space in the District of Columbia by 2010. The total cost of this program is estimated at $300 million. In fiscal years 2002 through 2004, WASA replaced 2,229 lead service lines in public space, about 9 percent of the total known lead service line inventory. In its lead service line replacement program, WASA replaces the majority of lines on a block-by-block basis. However, to reduce exposure to lead in drinking water for those residents most vulnerable to lead's health effects, WASA agreed, as part of the consent order, to develop in consultation with the D.C. Department of Health a system for setting priorities for lead service line replacement and to replace 1,000 lead service lines by the end of fiscal year 2006 on a priority basis. For fiscal year 2005, WASA's first priority for replacement is homes with children younger than 6 who have elevated blood lead levels; its second priority is day-care centers; and its third priority is homes that are occupied by children younger than 6, or pregnant or nursing mothers. WASA identified members of this third group by sending a letter to all customers in its database who have a lead service line or a service line of unknown composition. Customers could return the letter to identify themselves as members of these at-risk groups, as appropriate, and WASA sorted customer responses to remove those who did not meet the criteria for priority replacement. WASA worked with the D.C. Department of Health to establish criteria for priority replacement, and EPA has approved the program. Table 2 shows the number of priority replacements WASA completed in fiscal year 2004 and plans to complete in fiscal year 2005. WASA is replacing lead service lines in public space—from the water main to the homeowners' property line. In the District of Columbia, homeowners own the portion of the service line that runs from the property line to the home. Homeowners may replace this portion of the line if they choose, but this replacement is not required. WASA can replace the private portion of a lead service line when it replaces its portion of the line. Figure 2 shows the configuration of a service line from the water main to a customer's home.
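The statistical approach described above, inferring the composition of unexcavated lines from attributes of lines whose composition is known, can be illustrated with a simple classifier. Everything in the sketch below is hypothetical: the features follow the three attributes the report names (construction date, lead test result, line size), but the data values and model choice are invented for illustration and are not drawn from WASA's actual analysis.

```python
# Hypothetical sketch of inferring unknown service line composition from
# attributes of excavated (known) lines. Data values are invented.
from sklearn.linear_model import LogisticRegression

# Features per excavated line: [year placed in service,
#                               tap water lead result (ppb),
#                               line diameter (inches)]
X_known = [
    [1925, 45.0, 0.75], [1938, 30.0, 0.75], [1952, 22.0, 0.75],
    [1961, 18.0, 1.00], [1970,  6.0, 1.00], [1984,  3.0, 1.00],
    [1992,  2.0, 1.50], [1999,  1.0, 1.50],
]
y_known = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = test pit confirmed a lead line

model = LogisticRegression(max_iter=1000).fit(X_known, y_known)

# Estimate the probability that an unexcavated line is lead.
unknown = [[1944, 28.0, 0.75]]
print(f"P(lead) = {model.predict_proba(unknown)[0][1]:.2f}")
```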
To reduce residents' exposure to lead in drinking water, WASA is accelerating its schedule for replacing lead service lines. WASA's Board of Directors decided to replace all lead service lines in public space in the District of Columbia by 2010. The total cost of this program is estimated at $300 million. In fiscal years 2002 through 2004, WASA replaced 2,229 lead service lines in public space, about 9 percent of the total known lead service line inventory. In its lead service line replacement program, WASA replaces the majority of lines on a block-by-block basis. However, to reduce exposure to lead in drinking water for those residents most vulnerable to lead's health effects, WASA agreed, as part of the consent order, to develop in consultation with the D.C. Department of Health a system for setting priorities for lead service line replacement and to replace 1,000 lead service lines on a priority basis by the end of fiscal year 2006. For fiscal year 2005, WASA's first priority for replacement is homes with children younger than 6 who have elevated blood lead levels; its second priority is day-care centers; and its third priority is homes occupied by children younger than 6 or by pregnant or nursing mothers. WASA identified members of this third group by sending a letter to all customers in its database who have a lead service line or a service line of unknown composition. Customers could return the letter to identify themselves as members of these at-risk groups, and WASA sorted customer responses to remove those who did not meet the criteria for priority replacement. WASA worked with the D.C. Department of Health to establish criteria for priority replacement, and EPA has approved the program. Table 2 shows the number of priority replacements WASA completed in fiscal year 2004 and plans to complete in fiscal year 2005. WASA is replacing lead service lines in public space—from the water main to the homeowner's property line. In the District of Columbia, homeowners own the portion of the service line that runs from the property line to the home. Homeowners may replace this portion of the line if they choose, but this replacement is not required. WASA can replace the private portion of a lead service line when it replaces its portion of the line. Figure 2 shows the configuration of a service line from the water main to a customer's home. Experts disagree about the effectiveness of removing only part of a lead service line. Studies that EPA cited in the Lead and Copper Rule suggest that long-term exposure to lead from drinking water decreases when a service line is partially replaced. However, after partial replacement of a lead service line, exposure to lead in drinking water is likely to increase in the short term because cutting or moving the pipe can dislodge lead particles and disturb any protective coating on the inside of the pipe. Some experts believe that lead exposure can increase after partial service line replacement because of galvanic corrosion where the dissimilar metals of the old and new pipes meet. A study at WASA showed that partial lead service line replacement significantly reduced average lead levels, but that flushing was necessary to remove lead immediately after replacement. At an EPA conference on lead service line replacement in October 2004, water industry officials and others stressed the importance of encouraging or mandating full replacement of lead service lines. As the consent order required, WASA has established a program to encourage homeowners to replace their portion of lead service lines. This program includes a low-interest loan program; grants of up to $5,000 for low-income residents, offered by the District of Columbia Department of Housing and Community Development; and a fixed-fee structure for line replacement of $100 per linear foot, plus $500 to connect through the wall of the home, to make pricing easier for homeowners to understand. WASA implemented this program in July 2004, and EPA approved the program on August 10, 2004. Information about these programs is included in the notice that homeowners receive at least 45 days before their lead service line is scheduled to be replaced. Thus far, few homeowners in the District of Columbia have replaced their portion of lead service lines. In fiscal years 2003 through 2004, only 2 percent of homeowners (48 of 2,217) replaced the private portion of their lead service line. WASA officials attribute the low rate of full line replacement to customers' cost concerns. An EPA Region III official told us it is too early to determine if the District of Columbia's program is increasing the number of customers who replace their portion of the service line, since the program went into place approximately 2 months before the end of fiscal year 2004. However, WASA officials told us that the number of full replacements has increased since the program was implemented—14 percent of customers (119 of 841) replaced the private portion of their lead service line between October 1, 2004, and January 28, 2005. EPA has asked WASA to report on the number of customers taking advantage of the various incentive programs in the 2005 annual lead service line replacement report. Madison, Wisconsin, provides an alternative example for maximizing full lead service line replacement. A 1997 study showed that lead service lines were the source of elevated lead levels in the city's water, and that fully replacing them could reduce lead levels to well below the action level. Madison cannot use orthophosphate corrosion control treatment because this treatment would degrade surface water quality in local lakes. In lieu of corrosion control treatment, the water utility is replacing all lead service lines in the city over 10 years, a total of approximately 6,000 service lines.
To ensure that lines are completely replaced, Madison passed an ordinance in 2000 requiring homeowners to replace their portion of the lead service line when the utility replaces its portion. The city reimburses homeowners for half of the cost they incur in replacing their portion of the line, up to a maximum of $1,000. Assistance is available for customers who cannot afford the replacement. A Madison Water Utility official told us that before the ordinance was passed, less than 1 percent of customers paid to have their portion of the lead service line replaced. Other water systems use innovative methods to educate their customers about lead in drinking water. These practices include using a variety of media to inform the public, forming partnerships with government agencies and community groups, and targeting educational materials to the audience most susceptible to lead exposure through drinking water. These practices tend to go well beyond the provisions of the Lead and Copper Rule, which require public notification language that is difficult to understand and do not require utilities to notify individual homeowners of the lead concentrations in their homes’ drinking water. WASA’s experience highlights the importance of conducting an effective public education program. In its June 2004 consent order, EPA found that WASA had committed only a few violations of the public education requirements of the Lead and Copper Rule. However, community groups and others have criticized WASA for failing to adequately convey information to its customers about lead in drinking water and for failing to communicate a sense of urgency in the materials provided. As we testified in July 2004, EPA acknowledges that it should have provided better oversight of WASA’s public education program. Other water systems we contacted have used innovative approaches to educate the public about lead in drinking water. For example, some systems used a variety of media to inform the public. Officials from the Massachusetts Water Resources Authority (MWRA) appear for interviews on local radio and television talk shows to spread information about lead in drinking water. The Portland (Oregon) Water Bureau provides funding for many lead education initiatives, including materials presented to new parents in hospitals; billboard, movie, and bus advertisements targeted to neighborhoods with older housing; and education materials produced by the Community Alliance of Tenants to educate renters on potential lead hazards. Each of these materials directs people to call a telephone hotline to get information about all types of lead hazards. This hotline is operated by the Multnomah County Health Department and funded by the Portland Water Bureau. Water industry experts at an EPA conference in September 2004 stressed the importance of partnerships, particularly with health officials, in educating the public about lead in drinking water. Some water systems have already formed partnerships to better educate the public and provide a unified message. Three examples follow: MWRA provides training workshops on drinking water issues, including lead in drinking water, for local health officials. These officials can then educate the public about drinking water issues when they arise. MWRA also sends the local health department the same drinking water data that it sends to the state drinking water regulator, so local health officials are well informed. 
The Portland Water Bureau participates in an integrated program to educate the public and reduce exposure to all sources of lead, including drinking water. The water bureau's partners in this program include the Multnomah County Health Department, the State Lead Poisoning Prevention Program, the Portland Bureau of Housing and Community Development, and community nonprofit agencies. The Lead and Copper Rule requires water systems that exceed the action level to provide written education materials to facilities and organizations that serve high-risk segments of the population, including people more susceptible to the adverse effects of lead and people at greater risk of exposure to lead in drinking water. Some water systems have gone beyond this basic requirement to better reach high-risk populations. For example, in January 2004, the Portland Water Bureau sent a targeted mailing of approximately 2,600 postcards to homes that were of an age most likely to contain lead solder and that it had identified as having a child 6 years old or younger. These postcards encouraged residents to get their water tested for lead, learn about childhood blood lead screening, and reduce lead hazards in their homes. Water bureau officials said that they obtained the information needed to target the mailing from a commercial marketing company and that the information was inexpensive and easy to obtain. The rule specifies that educational materials be delivered to Women, Infants, and Children (WIC) and Head Start programs, where available. Both Portland and MWRA have cultivated relationships with these programs. MWRA worked with local WIC officials to add information about lead in drinking water to WIC's postpartum program for new mothers, and to prepare an easy-to-understand brochure explaining how to avoid exposure to lead in drinking water. Portland funded efforts with Head Start to provide free blood lead testing and to present puppet shows teaching children how to avoid lead hazards. Table 3 shows how the Portland Water Bureau targets its lead education program to community groups. Some water systems also measure the impact of their public education programs. MWRA has conducted focus groups to judge the effectiveness of its public education program, and routinely refines the information presented about lead in drinking water. The Portland Water Bureau tracks calls received by its lead information hotline and surveys callers to determine their satisfaction with the program and the extent to which it changed their behavior. An official from St. Paul (Minnesota) Regional Water Services told us that the utility surveys its customers about water quality issues. During the time the utility was conducting public education about lead in drinking water, it surveyed customers each year to ask whether they believed they were receiving enough information about the quality of their water. Responding to concerns about the Lead and Copper Rule's public education requirements, EPA conducted a workshop in September 2004 at which representatives from the water industry and community groups discussed their views of the rule's requirements. Representatives from the water industry also told us they went beyond the rule's requirements to ensure the success of their public education programs. At the EPA workshop and in interviews, water industry officials, experts, and community groups identified the following problems: The public cannot easily understand the required public education language.
Representatives of several water utilities told us the required language was too long and the reading level too advanced for many customers to understand. One expert estimated that understanding the EPA language required at least an 11th grade reading level, while approximately half the adult population of the United States reads at an 8th grade level or lower. Water industry officials suggested customizing education materials about lead in drinking water for those who have limited reading ability. The rule does not require utilities to send results to homeowners whose water is sampled for lead compliance. EPA officials told us that many water systems do provide this information to customers, but in the past, WASA did not provide this information in a timely fashion. The consent order requires WASA to provide lead results to homeowners within 3 days of receiving the results from the laboratory, and encourages WASA to provide these data within 30 days of collecting the sample. Public notification under the rule is less timely than that required for other violations of the Safe Drinking Water Act. The rule requires a water system to notify the public within 60 days if it exceeds the action level for lead. Other violations of the Safe Drinking Water Act with the potential to cause serious adverse effects on human health require public notification within 30 days, including violations of MCLs and treatment techniques. EPA has not evaluated the effectiveness of the public education requirements of the rule since it was implemented in 1991. Water industry officials at the EPA workshop suggested several methods to evaluate the effectiveness of public outreach, including surveying the public to determine its knowledge of lead in drinking water issues and comparing the level of knowledge in areas where public education has and has not been conducted. These officials also suggested that EPA identify public education activities conducted by utilities around the country that are following EPA guidelines and doing additional voluntary education work to identify good practices. In response to elevated lead levels in the District of Columbia, EPA is conducting a national review of compliance and implementation of the Lead and Copper Rule, including its public education requirements. Additionally, EPA conducted the public education expert workshop to gain information to use in its deliberations about changing the Lead and Copper Rule and possibly its accompanying guidance documents and training. We support EPA's efforts in re-evaluating the public education requirements of the rule, but believe that EPA also needs to provide more practical assistance that water systems can use when educating their customers about lead in drinking water.
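The grade-level estimates quoted above were presumably derived from a standard readability formula, though the report does not say which one. The Flesch-Kincaid grade-level formula shown below is one common choice, offered only as an illustration of how mandated notification language might be scored against a reading-level target; the word, sentence, and syllable counts are invented.

```python
def flesch_kincaid_grade(words, sentences, syllables):
    # Standard Flesch-Kincaid grade-level formula
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A dense notice: long sentences and many polysyllabic words
print(round(flesch_kincaid_grade(words=400, sentences=18, syllables=640), 1))  # about 12th grade
# A plain-language rewrite: shorter sentences and simpler words
print(round(flesch_kincaid_grade(words=400, sentences=40, syllables=560), 1))  # about 5th grade
```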
Much is known about the health effects of lead exposure, particularly lead's impact on brain development and functioning in young children. However, according to experts we interviewed, limited studies have been conducted on the health effects of exposure to low levels of lead in drinking water. Officials in EPA's Office of Water and Office of Research and Development told us they are beginning to address certain information gaps about the health risks of lead in drinking water. However, the timetable for completing this effort is not clear. Health experts agree that lead is toxic to almost every organ system, and much research has documented its adverse health effects. While many body systems can be severely affected by high chronic and acute lead exposures, lead is dangerous in large part because moderate to low chronic exposure can result in adverse health effects. The threshold for harmful effects of lead remains unknown. Over the years, as new data have become available, CDC has revised its recommendations on the threshold of blood lead levels that should raise concern and trigger interventions. In 1975, CDC's blood lead level threshold of concern stood at 30 micrograms per deciliter. In 1991, CDC lowered the blood lead level of concern to 10 micrograms per deciliter. Research conducted since 1991 provides evidence of adverse effects at even lower levels—at less than 10 micrograms per deciliter among children younger than 6. Because of their behavior and physiology, children are more sensitive than adults to exposure to lead in a given environment. For example, children generally come into more contact with lead because they spend more time on the ground, where there may be lead-contaminated soil or dust. Mouthing and hand-to-mouth behaviors also increase the likelihood that children may ingest soil or dust. Physiologically, children take in more food and water per pound of body weight, and their absorption of lead is estimated to be 5 to 10 times greater than that of adults. Finally, children are more sensitive than adults to elevated blood lead levels because their organ systems, including the brain and nervous system, are still developing. This ongoing development increases the risk of lead's entry into the brain and nervous system, and can result in prolonged or permanent neurobehavioral disorders. In contrast, most adult exposures to lead are occupational and occur in lead-related industries, such as lead smelting, refining, and manufacturing. Adults exposed to lead can develop high blood pressure, anemia, and kidney damage. Lead poses a substantial threat to pregnant women and their developing fetuses because blood lead readily crosses the placenta. Pregnant women with elevated blood lead levels may have an increased chance of miscarriage, premature birth, and newborns with low birth weight or neurologic problems. CDC tracks children's blood lead levels in the United States through the National Health and Nutrition Examination Surveys and state and local surveillance data. These surveys found that the estimated prevalence of blood lead levels greater than or equal to 10 micrograms per deciliter among children aged 1 to 5 fell from 88 percent in the 1976 to 1980 survey to 2.2 percent in the 1999 to 2000 survey. Health experts generally attribute this decline to the elimination of leaded gasoline and lead solder from canned foods, and a ban on leaded paint used in housing and other consumer products. Data provided by the District of Columbia to CDC for 2001 show that, of an estimated 39,356 children younger than 6, 16,036 were tested for lead. Of those, 437, or 2.73 percent, had blood lead levels greater than or equal to 10 micrograms per deciliter. More recently, in response to the discovery of high lead levels in drinking water in the District of Columbia, CDC and the D.C. Department of Health studied blood lead levels of residents most at risk for lead exposure. This study was designed to determine the extent to which lead in drinking water was contributing to blood lead levels of District residents.
One portion of the study focused on residents of homes with known lead levels in drinking water greater than 300 ppb, much greater than the EPA action level of 15 ppb. Health officials attempted to contact nearly all residents of homes with lead concentrations at this level, and collected blood samples for lead analysis from residents who agreed to the procedure. Of the 201 residents tested, all were found to have blood lead levels less than CDC’s levels of concern for adults or children, as appropriate. Another portion of this study examined blood lead data collected by the District of Columbia Department of Health’s blood lead surveillance system. Results of blood lead tests conducted from January 1998 through December 2003 were compared for a nonprobability sample of homes with known lead service lines and homes with nonlead service lines. During 2000 through 2003, the period when lead levels in drinking water increased, the number of people with blood lead levels greater than 5 micrograms per deciliter decreased for the sample without lead service lines but did not decrease in a statistically significant way for the sample with lead service lines. In the District of Columbia, blood lead levels are generally greater in homes with lead service lines. In general, the older homes most likely to have lead service lines are also those most likely to have other lead hazards, such as lead in paint and dust. A good deal of research has been conducted on the health effects of lead associated with certain pathways of contamination, such as the ingestion of lead paint and the inhalation of dust contaminated with lead. According to a number of public health experts, drinking water contributes a relatively minor amount to overall lead exposure in comparison with other sources. However, the most relevant studies on the isolated health effects of lead in drinking water date back nearly 20 years—including the Glasgow Duplicate Diet Study on lead levels in children, upon which the Lead and Copper Rule is partially based. While lead in drinking water is rarely thought to be the sole cause of lead poisoning, it can significantly increase a person’s total lead exposure— particularly for infants who drink baby formula or concentrated juices that are mixed with water from homes with lead service lines or plumbing systems. For children with high levels of lead exposure from paint, soil, and dust, drinking water is thought to contribute a much lower proportion of total exposure. For residents of dwellings with lead solder or lead service lines, however, drinking water could be the primary source of exposure. As exposure declines from sources of lead other than drinking water, such as gasoline and soldered food cans, drinking water will account for a larger proportion of total intake. Thus, according to EPA’s Lead and Copper Rule, the total drinking water contribution to overall lead levels may range from as little as 5 percent to more than 50 percent of a child’s total lead exposure. According to recent medical literature and the public health experts we contacted, the key uncertainties about the effects of lead in drinking water requiring clarification include the incremental effects of lead-contaminated drinking water on people whose blood lead levels are already elevated from other sources of lead contamination and the potential health effects of exposure to low levels of lead. 
EPA has acknowledged the need to improve health risk information available to drinking water systems and local governments about lead in drinking water. According to officials from EPA's Office of Water, one way to improve this information would be to develop a health advisory for lead. EPA health advisories are written documents that provide information on the health effects, analytical methodology, and treatment technology that would be useful in dealing with the contamination of drinking water; they have been issued for many other water contaminants, such as Cryptosporidium (a waterborne microbe). The advisories serve as informal technical guidance to assist federal, state, and local officials responsible for protecting public health when contamination occurs. For example, a Cryptosporidium health advisory was prompted, in part, by a 1993 outbreak of the microbe in Milwaukee, Wisconsin, where an estimated 400,000 people became ill. Office of Water officials note that the agency currently does not have a health advisory for lead and believe the problems local District agencies had in communicating the health risks of lead in drinking water highlight the need for one. Office of Water officials also noted that a health advisory document for lead would be useful to other water systems and state and local officials in communicating risk if they identify problems with lead during monitoring under the Lead and Copper Rule. In 1985, EPA drafted a health advisory for lead but never issued it to the public. At present, EPA's Office of Water has drafted a plan to prepare a lead health advisory and have it reviewed by experts within EPA and by external peer reviewers. However, the anticipated completion date for the advisory has not been determined. To ensure that the health advisory for lead is up-to-date, the Office of Water also plans to produce a "white paper" that documents how research data were used in setting the action level for lead and updates that assessment using new data on lead exposure and uptake in the body. Office of Water officials told us that the white paper should provide sufficient information to allow health risk at the action level to be discussed in the lead health advisory. They told us that data used to develop the 15 ppb action level in the 1991 rule were based on a small group of studies published before 1989 and on early models of the agency's Integrated Exposure Uptake Biokinetic Model for Lead (IEUBK), which predicts blood lead concentrations for children exposed to different types of lead sources. The Office of Research and Development is currently developing an "all ages lead model" that supplements the IEUBK model and should allow for new predictions of fetal blood lead levels derived from maternal exposure levels. According to EPA, the agency plans to have the model peer reviewed first and any issues from the peer review addressed before the model is used in regulatory decision making. These predictions may be incorporated into the white paper being prepared by the Office of Water. However, a timetable for completing the updated model and the white paper has not been determined. Current draft plans for the health advisory and white paper neither discuss how these projects fit into a broader agency research agenda nor identify how they will be funded or whether they need to be coordinated with CDC or other research organizations.
In 2004, poor coordination among local District of Columbia agencies and EPA aggravated the problems they had in responding to elevated lead levels and communicating accurate and timely health risk information to affected District residents. Since that time, local agencies and EPA have improved their coordination. Nonetheless, these agencies still face considerable challenges in ensuring the safety of the District’s water supplies. For one thing, while lead levels have come down in recent months, they still remain well above the Lead and Copper Rule’s 15 ppb action level. In addition, only time will tell if or how quickly WASA’s ambitious lead service line replacement program will further lower lead levels in drinking water. The District’s experience has also exposed weaknesses in the Lead and Copper Rule’s public education requirements. EPA is collecting information about compliance with the rule and is also considering changes to the Lead and Copper Rule and its accompanying guidance documents and training. We support these efforts and believe the clear deficiencies of the rule’s public education requirements—vividly illustrated in the District of Columbia—call for action to assist water systems in educating their customers about lead. The District’s experience has also underscored gaps in available knowledge about health risks associated with lead-contaminated drinking water. In acknowledging these gaps, EPA has pointed to projects planned by its Office of Water and its Office of Research and Development as key steps to address the problem. However, the timetable for completing these projects is not clear, and it is also not clear how this work will fit into a broader research agenda or if this agenda will involve other key organizations such as CDC. To provide timely information to communities on how to improve communication of lead health risks, we recommend, as part of its comprehensive re-examination of the Lead and Copper Rule’s public education requirements, that the Administrator of EPA direct the Office of Water to identify and publish best practices that water systems are using to educate the public about lead in drinking water. To improve the health risk information on lead available to water systems and regulatory staff, we recommend that the Administrator of EPA develop a strategy for closing information gaps in the health effects of lead in drinking water that includes timelines, funding requirements, and any needed coordination with CDC and other research organizations. We provided a draft of this report to EPA for comment. In its March 14, 2005, letter (see app. II), EPA expressed appreciation for the information in the report, identified some of its recent and ongoing efforts to address the problems we identified, and indicated it will give full consideration to our recommendations. Of particular note, EPA agreed with our recommendation that the agency identify and publish best practices that water systems can use to educate their customers about lead in drinking water. EPA said it will work with its regions and water utility associations to identify best practices and disseminate them to a wide audience, and will work with stakeholders to change the mandatory language in its regulations to make sure it is relevant and understandable. The agency indicated neither agreement nor disagreement with our recommendation to develop a strategy for closing information gaps on the health risks of lead in drinking water. 
EPA noted instead it was awaiting revision of the agency’s exposure model for evaluating the effects of lead exposure from different media on blood lead levels. It also said it was “working to prepare a health advisory that would inform the discussion” and was developing a summary of toxicokinetic research published since 1991. EPA said these efforts should be completed later this year or early next year. We note that while EPA’s planned efforts to address information gaps in knowledge of health risks from lead in drinking water appear to be worthwhile activities, we continue to believe the agency should commit to the kinds of planning steps (such as budgeted resources and timetables) that will help to ensure its planned efforts are addressed in a timely manner and have their intended effect. We also continue to believe that EPA should coordinate its efforts with CDC and other parties to ensure that the most is achieved from all agencies’ collective efforts. EPA also provided technical comments and clarifications that have been incorporated, as appropriate. On February 23, 2005, we met with WASA officials to discuss the factual information we were planning to include in our draft report. At that time, WASA provided oral comments and technical suggestions. We subsequently provided the draft report to WASA for formal comment. WASA, however, did not comment on this draft. As agreed with your office, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; interested Members of Congress; the Acting Administrator, Environmental Protection Agency; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff need further information, please contact me at (202) 512-3841 or [email protected]. Individuals making key contributions to this report included Steve Elstein, Samantha Gross, Karen Keegan, Tim Minelli, and Carol Herrnstadt Shulman. To identify actions that key government entities are taking to improve coordination, we reviewed key documents, such as the consent decrees between the District of Columbia Water and Sewer Authority (WASA) and the Environmental Protection Agency (EPA) and testimony by the involved agencies, that identified steps each agency agreed to take to improve coordination, efficiency, and accountability. We then met with officials of these entities and gathered documentation from them to gauge the progress of planned changes. Additionally, we reviewed reports written by various groups about lead in drinking water in the District of Columbia, including reports by the District of Columbia Inspector General, the D.C. Appleseed Center for Law and Justice, and the law firm of Covington and Burling. Finally, to gain perspective on the issue of coordination, we interviewed officials from other water systems and their federal and state regulatory agencies and consulted with industry groups in the drinking water delivery field. To identify the extent to which WASA and others are gathering information to determine which adult and child populations are at greatest risk of exposure to lead, we reviewed WASA’s efforts to locate lead service lines. 
We also reviewed the plans that WASA has submitted to EPA to replace lead service lines and materials describing WASA’s program to encourage homeowners to fully replace lead service lines. We interviewed WASA and EPA staff about the progress of the lead service line identification and replacement programs, interviewed officials at other water systems to discuss lead service line replacement, and reviewed studies on partial lead service line replacement. To determine how other drinking water systems that have exceeded the action level for lead conducted public education and outreach, we met with parties knowledgeable about the Lead and Copper Rule, including EPA headquarters and regional staff and relevant industry groups, in part to find water systems with particularly innovative and effective public education and outreach programs. From this group, we focused on water systems in large cities with diverse populations that had exceeded the action level for lead since 2000, according to EPA data. We then interviewed officials from these water systems and reviewed documents to learn about their public education efforts, how they target their efforts, and how they measure success. We also spoke to officials from government and nongovernment entities that partner with these water systems in their education programs. To learn about public education under the Lead and Copper Rule, we attended an EPA workshop where water system managers, environmental and consumer groups, and other experts shared their opinions on best practices in the industry and EPA’s current policies. We also reviewed reports and public testimony pertaining to public education in the District of Columbia and elsewhere. To evaluate the state of research on lead exposure, we interviewed public health officials and academic researchers that representatives of government and nongovernmental organizations in the fields of drinking water and public health identified as experts on lead. We interviewed these experts to get their perspective on lead’s health effects, particularly the health effects of ingestion of low levels of lead and lead in drinking water. We also discussed data gaps on the health effects of lead, the research efforts planned and under way to fill these gaps, and alternative strategies that might better ensure that these gaps are addressed efficiently and effectively. These experts also helped us identify the medical and public health literature we reviewed on the health effects of lead exposure, particularly through drinking water. To learn about efforts to locate and monitor the blood lead levels of individuals exposed to elevated levels of lead in drinking water in the District, we examined a published study and interviewed officials at the District of Columbia Department of Health and the Centers for Disease Control and Prevention. Finally, we interviewed EPA officials and reviewed EPA strategic plans and other documentation to learn about EPA’s plans to address key information gaps on the health effects of lead exposure.
Media reports on elevated lead in the District of Columbia's drinking water raised concern about how local and federal agencies are carrying out their responsibilities. The Lead and Copper Rule requires water systems to protect drinking water from lead. The U.S. Army Corps of Engineers' Washington Aqueduct treats and sells water to the District Water and Sewer Authority (WASA), which delivers it to District residents. The Environmental Protection Agency's (EPA) Region III Office oversees these agencies. GAO examined (1) what agencies implementing the rule in the District are doing to improve their coordination and reduce lead levels, (2) the extent to which WASA and other agencies are identifying populations at greatest risk of exposure to lead in drinking water and reducing their exposure, (3) how other drinking water systems that exceed EPA's action level for lead conduct public education, and (4) the state of research on lead exposure and how it applies to drinking water. WASA and other government agencies have improved their coordination, but significant challenges remain. According to EPA officials, WASA has thus far met the terms of a June 2004 consent order by enhancing its coordination with EPA and the D.C. Department of Health. For example, WASA developed a plan to improve its public education efforts and collaborated with the department to set priorities for replacing lead service lines. EPA expects the August 2004 addition of a corrosion inhibitor to eventually reduce lead in drinking water, though it may take more than one year for full improvements to be observed. Tap water test results reported in January 2005 show that D.C. drinking water still exceeds the standard for lead. WASA is identifying those customers most at risk from exposure to lead in drinking water and reducing their exposure. WASA is focusing on lead service lines as the primary source of lead in drinking water. It is updating its inventory of lead service lines, accelerating its rate of service line replacement, and providing priority replacement for customers most vulnerable to lead's health effects. However, questions remain about the success of the replacement program because, by law, WASA can only pay to replace the portion of the service line that it owns. Homeowners may pay to replace their portion of the service line, but few homeowners chose to do so in 2003 and 2004. Other water systems use innovative methods to educate their customers and to judge the effectiveness of their efforts. These practices include using a variety of media to inform the public, forming partnerships with government and nonprofit agencies, and targeting and adapting information to the audiences most susceptible to lead exposure through drinking water. Many of these practices go well beyond the requirements of the Lead and Copper Rule. In this connection, water industry representatives and others noted several shortcomings with the rule's public education provisions, including confusing language and the lack of a requirement to notify homeowners of the specific lead levels in their drinking water. Additionally, EPA has not evaluated water systems' public education efforts on lead in drinking water since the rule was established more than a decade ago. Much is known about the health effects of lead exposure, particularly its impact on brain development and functioning in young children. However, limited studies have been conducted on the health effects of exposure to low levels of lead in drinking water. 
EPA plans to prepare a health advisory document to help utilities explain the risks of lead exposure to the public, and a paper summarizing lead research conducted since the Lead and Copper Rule was published in 1991. However, the timetable for these projects is not clear, and it is also not clear how this work will fit into a broader research agenda, or if this effort needs to involve other key organizations, such as the Centers for Disease Control and Prevention.
For the past several decades, computer systems have typically used two digits to represent the year, such as "98" for 1998, in order to conserve electronic data storage and reduce operating costs. In this format, however, 2000 is indistinguishable from 1900 because both are represented as "00." As a result, if not modified, systems or applications that use dates or perform date- or time-sensitive calculations may generate incorrect results beyond 1999. SSA has been anticipating the change of century since 1989, initiating an early response to the potential crisis. It made significant early progress in assessing and renovating mission-critical mainframe systems—those necessary to prevent the disruption of benefits—and has been a leader among federal agencies. Yet as our report of last October indicated, three key risks remained, mainly stemming from the large degree to which SSA interfaces with other entities in the sharing of information. One major risk concerned Year 2000 compliance of the 54 state Disability Determination Services (DDS) that provide vital support to the agency in administering SSA's disability programs. The second major risk concerned data exchanges, ensuring that information obtained from outside sources—such as other federal agencies, state agencies, and private businesses—was not "corrupted" by data being passed from systems that were not Year 2000 compliant. SSA exchanges data with thousands of such sources. Third, such risks were compounded by the lack of contingency plans to ensure business continuity in the event of systems failure. Our report made several specific recommendations to mitigate these risks. These included (1) expeditious completion of the assessment of mission-critical systems at state DDS offices and the use of those results to establish specific plans of action, (2) stronger oversight by SSA of DDS Year 2000 activities, (3) discussion of the status of DDS Year 2000 activities in SSA's quarterly reports to the Office of Management and Budget (OMB), (4) expeditious completion of SSA's Year 2000 compliance coordination with all data exchange partners, and (5) development of specific contingency plans that articulate clear strategies for ensuring the continuity of core business functions. SSA agreed with all of our recommendations, and actions to complete them are underway. We understand that the states are in various stages of addressing the Year 2000 problem, but note that SSA has begun to monitor these activities; among other things, it is requiring biweekly status reports from the DDSs. Further, as of this week, the agency planned to have a contingency plan available at the end of the month.
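The two-digit-year ambiguity described at the opening of this statement is easy to illustrate. The sketch below shows how date arithmetic on two-digit years fails across the century boundary, and demonstrates "windowing," one widely used remediation short of expanding date fields; it is a generic illustration, not SSA's actual code.

```python
# With years stored as two digits, 2000 ("00") is indistinguishable from 1900
def years_until_renewal(current_yy, renewal_yy):
    return renewal_yy - current_yy

print(years_until_renewal(98, 0))  # -98: a year-2000 renewal looks 98 years overdue

# Windowing: read two-digit years below a pivot as 20xx, the rest as 19xx
def expand(yy, pivot=50):
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand(0) - expand(98))  # 2: correct once the century is made explicit
```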
The resources that SSA plans to invest in acquiring IWS/LAN are enormous: Over 7 years the agency plans to spend about $1 billion during phase I to replace its present computer terminals with "intelligent" workstations and local area networks. As of March 1, SSA had completed installation of about 30,000 IWSs and 800 LANs, generally meeting or exceeding its phase I schedule. The basic intelligent workstation that SSA is procuring includes a (1) 15-inch color display monitor, (2) 100-megahertz Pentium workstation with 32 megabytes (MB) of random access memory, (3) 1.2-gigabyte hard (fixed) disk drive, and (4) 16-bit network card with adaptation cable. Preliminary testing has indicated that the IWS/LAN workstation random access memory will need to be upgraded from 32 MB to at least 64 MB. Last year SSA's contractor, Unisys Corporation, submitted a proposal to upgrade to a processing speed higher than 100 megahertz at additional cost. Unisys noted that it was having difficulty in obtaining 100-megahertz workstations. Although personal computers available in today's market are about three times this speed, SSA stated that the 100-megahertz processing speed does meet its current needs. The agency is, however, continuing to discuss this issue with Unisys. As the expected time period for implementation of IWS/LAN will span the change of century, it is obviously important that all components be Year 2000 compliant. SSA's contract with Unisys does not, however, contain such a requirement. Moreover, SSA has acknowledged, and we have validated, that some of the earlier workstations that it acquired are not Year 2000 compliant. However, SSA maintains—and we have confirmed—that the operating system it has selected for IWS/LAN, Windows NT, corrects the particular Year 2000-related problem. SSA has also said that it is now testing all new hardware and software, including equipment substitutions proposed by Unisys, to ensure Year 2000 compliance before site installation. Phase II is intended to build upon acquisition of the initial IWS/LAN infrastructure, adding new hardware and software—such as database engines, scanners, and bar code readers—to support future process redesign initiatives. Contract award for phase II is planned for fiscal year 1999, with site installations between fiscal years 1999 and 2001. We have not identified any significant problems in SSA's installation of IWS/LAN equipment at its field offices to date, and the agency has taken steps to minimize adverse impact on service to the public while installation takes place. Some state DDSs, however, have recently raised concerns about lack of control over their networks and inadequate response time on IWS/LAN service calls, resulting in some disruption to their operations. SSA currently maintains central control. Under this arrangement, problems with local equipment must be handled by SSA's contractor, even though many DDSs feel they have sufficient technical staff to do the job. Because of this issue, states have said that they want SSA to pilot test IWS/LAN in one or more DDS offices to evaluate options that would allow states more flexibility in managing their networks. Florida, in fact, refused to accept more IWS/LAN terminals until this issue is resolved. SSA is now working with the DDSs to identify alternatives for providing the states with some degree of management control. Turning to managing the acquisition of information technology resources as an investment, SSA has—consistent with the Clinger-Cohen Act of 1996 and OMB guidance—followed several essential practices with IWS/LAN. This includes assessing costs, benefits, and risks, along with monitoring progress against competing priorities, projected costs, schedules, and resource availability. What SSA has not established, however, are critical practices for measuring IWS/LAN's contribution toward improving mission performance. While it does have baseline data and measures that could be used to assess the project's impact on performance, it lacks specific target goals and a process by which overall IWS/LAN impact on program performance can be gauged. Further, while OMB guidelines call for post-implementation evaluations to be completed, SSA does not plan to conduct them.
In a September 1994 report, we noted that SSA had initiated action to identify cost and performance goals for IWS/LAN. SSA identified six categories of performance measures that could be used to track the impact of IWS/LAN technology on service delivery goals, and had planned to establish target productivity gains for each measure upon award of the IWS/LAN contract. At the conclusion of our review, however, SSA had not established targeted goals or a process for using performance measures to assess IWS/LAN’s impact on agency productivity improvements. According to officials, the agency has no plans to use these measures in this way because it believes the results of earlier pilots sufficiently demonstrated that savings will be achieved with each IWS/LAN installation, and because the measures had been developed in response to a General Services Administration (GSA) procurement requirement. Since GSA no longer performs this role, SSA sees these actions as no longer necessary. Yet without specific goals, processes, and performance measurements, it will be difficult to assess whether IWS/LAN improves service to the public. Further, the Clinger-Cohen Act requires agencies to develop performance measures to assess how well information technology supports their programs. Knowing how well such technology improvements are actually working will be critical, given the expected jump in SSA’s workload into the next century. The number of disability beneficiaries alone is expected to increase substantially between calendar years 1997 and 2005—from an estimated 6.2 million to over 9.6 million. Concurrent with phase I installation is development of the first major programmatic software application—the Reengineered Disability System (RDS)—to be installed on the IWS/LAN infrastructure. It is intended to support SSA disability claims processing under a new client/server environment. Pilot testing of RDS software to evaluate actual costs and benefits of the system and identify IWS/LAN phase II equipment needs began last August. However, performance and technical problems encountered during the RDS pilot have resulted in a planned 9-month delay—to July 1998—in implementing the pilot system in the first state, Virginia. This will likely cause corresponding delays in SSA’s schedule for acquiring and implementing IWS/LAN phase II equipment, and further delays in national implementation of RDS. How software is developed is another critical consideration; whether the modernized processes will function as intended and achieve the desired gains in productivity will depend in large measure on the quality of the software. Yet software development is widely seen as one of the riskiest areas of systems development. SSA has recognized weaknesses in its own capability to develop software, and is improving its processes and methods. This comes at a critical time, since the agency is beginning development of its new generation of software to operate on the IWS/LAN to support the redesigned work processes of a client/server environment. Significant actions that SSA has initiated include (1) launching a formal software process improvement program, (2) acquiring assistance from a nationally recognized research and development center in assessing its strengths and weaknesses and in assisting with improvement, and (3) establishing management groups to oversee software process improvement activities. 
Key elements of the software improvement program, however, are still lacking—elements without which progress and success cannot be measured. These are specific, quantifiable goals and baseline data to use in assessing whether those goals have been attained. Until such features are available, SSA will lack assurance that its improvement efforts will result in the consistent and cost-effective production of high-quality software. Our report recommends that as part of its recently initiated pilot projects, SSA develop and implement plans that articulate a strategy and time frames for developing baseline data, identifying specific goals, and monitoring progress toward achieving those goals. We are encouraged by SSA's response, which included agreement and a description of steps it had begun to carry out these recommendations. For over 10 years, SSA has been providing, on request, a Personal Earnings and Benefit Estimate Statement (PEBES). The statement includes a yearly record of earnings, estimates of Social Security taxes paid, and various benefits estimates. Beginning in fiscal year 1995, such statements were sent annually to all eligible U.S. workers aged 60 and over; beginning October 1, 1999, the statements are to be sent to all eligible workers 25 and over—an estimated 123 million people. The public has generally found these statements to be useful in financial planning. In an effort to provide "world-class service" and be as responsive as possible to the public, SSA in March 1997 initiated on-line dissemination of PEBES to individuals via the Internet. The agency felt that using the Internet in this way would ensure that client data would be safeguarded and confidentiality preserved. Within a month, however, press reports of privacy concerns circulated, sparking widespread fear that the privacy of this information could not be guaranteed. SSA plans many initiatives using the Internet to provide electronic service delivery to its clients. Accordingly, our testimony of last May before the Subcommittee on Social Security focused on Internet information security in general, describing its risks and approaches to making it more secure. Because the Internet is relatively insecure, using it as a vehicle for transmitting sensitive information—such as Social Security information—is a decision requiring careful consideration. It is a question of balancing greater convenience against increased risk—not only that information would be divulged to those who should not have access to it, but also that the database itself could be compromised. For most organizations, a prudent approach to information security is three-pronged, including the ability to protect against security breaches at an appropriate level, detect successful breaches, and react quickly in order to track and prosecute offenders. The Internet security issue remains a daunting one, and SSA—like other federal agencies—will have to rely on commercial solutions and expert opinion; this is, however, an area in which there is no clear consensus. Shortly before our May testimony, the Acting Commissioner suspended on-line PEBES availability, promising a reexamination of the service that would include public forums around the country. After analyzing the results of those forums, the Acting Commissioner announced last September that a modified version of the on-line PEBES system would be available by the end of 1997. The new Commissioner, however, has placed implementation of the new system on hold.
SSA has hired a private contractor to assess the risk of the modified system; we see this as an important, welcome step in determining the vulnerabilities involved in the use of the Internet. In summary, it is clear that SSA has made progress in dealing with its information technology challenges; it is equally clear, however, that such challenges will continue to face the agency, especially as it transitions to a new processing environment while concurrently dealing with the coming change of century. Because SSA is a prime face of the government to virtually every American citizen, the stakes in how well the agency meets these continuing challenges are high. This concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittees may have at this time.
Pursuant to a congressional request, GAO discussed the information technology challenges facing the Social Security Administration and its recently appointed commissioner. GAO noted that: (1) SSA made significant early progress in assessing and renovating mission-critical mainframe systems--those necessary to prevent the disruption of benefits--and has been a leader among federal agencies; (2) yet as GAO's report of last October indicated, three key risks remained, mainly stemming from the large degree to which SSA interfaces with other entities in the sharing of information; (3) one major risk concerned year 2000 compliance of the 54 state Disability Determination Services (DDS) that provide vital support to the agency in administering SSA's disability programs; (4) the second major risk concerned data exchanges, ensuring that information obtained from outside sources--such as other federal agencies, state agencies, and private businesses--was not corrupted by data being passed from systems that were not year 2000 compliant; (5) SSA exchanges data with thousands of such sources; (6) third, such risks were compounded by the lack of contingency plans to ensure business continuity in the event of systems failure; (7) the resources that SSA plans to invest in acquiring Intelligent Workstation/Local Area Network (IWS/LAN) are enormous; (8) over 7 years the agency plans to spend about $1 billion during phase I to replace its present computer terminals with intelligent workstations and local area networks; (9) as of March 1, SSA had completed installation of about 30,000 IWSs and 800 LANs, generally meeting or exceeding its phase I schedule; (10) GAO has not identified any significant problems in SSA's installation of IWS/LAN equipment at its field offices to date, and the agency has taken steps to minimize adverse impact on service to the public while installation takes place; (11) at the conclusion of GAO's review, however, SSA had not established targeted goals or a process for using performance measures to assess IWS/LAN's impact on agency productivity improvements; (12) SSA has recognized weaknesses in its own capability to develop software, and is improving its processes and methods; and (13) SSA plans many initiatives using the Internet to provide electronic service delivery to its clients.
While many estates are kept open for legitimate reasons, we found that FSA field offices do not systematically determine the eligibility of all estates kept open for more than 2 years, as regulations require, and when they do conduct eligibility determinations, the quality of the determinations varies. Without performing annual determinations, an essential management control, FSA cannot identify estates being kept open primarily to receive these payments and cannot be assured that the payments are proper. Generally, under the 1987 Act, once a person dies, farm program payments may continue to that person's estate under certain conditions. For most farm program payments, USDA regulations allow an estate to receive payments for the first 2 years after the death of the individual if the estate meets certain eligibility requirements for active engagement in farming. Following these 2 years, the estate can continue to receive program payments if it meets the active engagement in farming requirement and the local field office determines that the estate is not being kept open primarily to continue receiving program payments. Estates are commonly kept open for longer than 2 years because of, among other things, asset distribution and probate complications, and tax and debt obligations. However, FSA must annually determine that the estate is still active and that obtaining farm program payments is not the primary reason it remains open. Our review of FSA case file documents found the following. First, we found FSA did not consistently make the required annual determinations. Only 39 of the 181 estates we reviewed received annual eligibility determinations for each year they were kept open beyond the initial 2 years FSA automatically allows, although we found shortcomings with these determinations, as discussed below. In addition, 69 of the 181 estates had at least one annual determination between 1999 and 2005, but not with the frequency required. Indeed, the longer an estate was kept open, the less likely it was to receive all required determinations. For example, only 2 of the 36 estates requiring a determination every year over the 7-year period, 1999 through 2005, received all seven required determinations. FSA did not conduct any program eligibility determinations for 73, or 40 percent, of the 181 estates that required a determination from 1999 through 2005. Because FSA did not conduct the required determinations, the extent to which these estates remained open for reasons other than obtaining program payments is not known. Sixteen of these 73 estates received more than $200,000 in farm program payments, and 4 received more than $500,000, during this period. In addition, 22 of the 73 estates had received no eligibility determinations during the 7-year period we reviewed and had been open and receiving payments for more than 10 years. In one case, we found that the estate had been open since 1973. The following estates received farm program payments but did not receive FSA eligibility determinations for the period we reviewed: A North Dakota estate received farm program payments totaling $741,000 from 1999 through 2003. An Alabama estate—open since 1981—received payments totaling $567,000 from 1999 through 2005. Two estates in Georgia—open since 1989 and 1996, respectively—received payments totaling more than $330,000 each from 1999 through 2005. A New Mexico estate, open since 1991, received $320,000 from 1999 through 2005.
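The management control whose absence is described above amounts to a simple annual reconciliation: for each estate open more than 2 years, confirm that an eligibility determination is on file for every subsequent year under review. The sketch below illustrates such a check with invented records; it does not reflect FSA's actual data or systems.

```python
# Estate id -> (year opened, years with a documented determination)
estates = {
    "ND-1": (1997, set()),
    "GA-1": (1989, {2001, 2004}),
}

def missing_determinations(opened, determined, review_years=range(1999, 2006)):
    # Determinations are assumed due each review year after the automatic
    # 2-year period following the year the estate was opened.
    required = [y for y in review_years if y > opened + 2]
    return [y for y in required if y not in determined]

for estate, (opened, determined) in estates.items():
    gaps = missing_determinations(opened, determined)
    if gaps:
        print(estate, "is missing determinations for:", gaps)
```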
Second, even when FSA conducted at least one eligibility determination, we found shortcomings. FSA sometimes approved eligibility for payments when the estate had provided insufficient information—that is, either no information or vague information. For example, in 20 of the 108 estates that received at least one eligibility determination, the minutes of FSA county committee meetings indicated approval of eligibility for payments to these estates, but the associated files did not contain any documents that explained why the estate remained active. FSA also approved eligibility on the basis of insufficient explanations for keeping the estate open. In five cases, executors explained that they did not want to close the estate but did not explain why. In a sixth case, documentation stated that the estate was remaining active upon the advice of its lawyers and accountants, but did not explain why. Some FSA field offices approved program payments to groups of estates kept open after 2 years without any apparent determination. In one case in Georgia, minutes of an FSA county committee meeting listed 107 estates as eligible for payments by stating that the county committee approved all estates open over 2 years. Two of the estates on this list of 107 were part of the sample that we reviewed in detail. In addition, another 10 estates in our sample, from nine different FSA field offices, were also approved for payments without any indication that even a cursory determination had been conducted. Third, the extent to which FSA field offices make eligibility determinations varies from state to state, which suggests that FSA is not consistently implementing its eligibility rules. Overall, FSA field offices in 16 of the 26 states we reviewed made less than one-half of the required determinations for their estates from 1999 to 2005. The percentage of estates reviewed by FSA ranged from 0 to 100 percent in the states we reviewed. Eligibility determinations could also uncover other problems. Under the three-entity rule, individuals receiving program payments may not hold a substantial beneficial interest in more than two entities also receiving payments. However, a beneficiary of an Arkansas estate we reviewed received farm program payments through the estate in 2005, as well as through three other entities, allowing the beneficiary to receive payments beyond what the three-entity rule would have allowed. FSA was unaware of this situation until we brought it to officials’ attention, and FSA has begun taking steps to recover any improper payments. Had FSA conducted any eligibility determinations for this estate during the period, it might have determined that the estate was not eligible for these payments, preventing the beneficiary from receiving what amounted to a payment through a fourth entity. We informed FSA of the problems we uncovered during the course of our review. According to FSA field officials, a lack of sufficient personnel and time, and competing priorities for carrying out farm programs explain, in part, why many determinations were either not conducted or not conducted thoroughly. Nevertheless, officials told us that they would investigate these cases for potential receipt of improper payments and would start collection proceedings if they found improper payments. 
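To illustrate how a three-entity-rule check might work in practice, the following sketch flags individuals who hold a substantial beneficial interest in more than two entities that also receive payments. It is a hypothetical illustration only—the names and data layout are invented for this example and do not describe FSA's actual systems.

from collections import defaultdict

# Hypothetical (individual, entity) pairs: the individual holds a
# substantial beneficial interest in an entity that received payments.
interests = [
    ("beneficiary-1", "estate-A"),
    ("beneficiary-1", "partnership-B"),
    ("beneficiary-1", "corporation-C"),
    ("beneficiary-1", "trust-D"),
    ("beneficiary-2", "estate-E"),
]

entities_by_person = defaultdict(set)
for person, entity in interests:
    entities_by_person[person].add(entity)

# Flag anyone with interests in more than two paid entities.
for person, entities in sorted(entities_by_person.items()):
    if len(entities) > 2:
        print(f"{person}: interests in {len(entities)} paid entities "
              f"(limit is 2): {sorted(entities)}")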
FSA cannot be assured that millions of dollars in farm program payments it made to thousands of deceased individuals from fiscal years 1999 through 2005 were proper because it does not have appropriate management controls, such as computer matching, to verify that it is not making payments to deceased individuals. In particular, FSA is not matching recipients listed in its payment databases with individuals listed as deceased in the Social Security Administration’s Death Master File. In addition, complex farming operations, such as corporations or general partnerships with embedded entities, make it difficult for FSA to prevent improper payments to deceased individuals. FSA paid $1.1 billion in farm program payments in the names of 172,801 deceased individuals—either as individuals or as members of entities—from fiscal years 1999 through 2005, according to our matching of FSA’s payment databases with the Social Security Administration’s Death Master File. Of the $1.1 billion in farm payments, 40 percent went to individuals who had been dead for 3 or more years, and 19 percent went to individuals who had been dead for 7 or more years. Figure 1 shows the number of years in which FSA made farm program payments after an individual had died and the value of those payments. We identified several instances in which FSA’s lack of management controls resulted in improper payments to deceased individuals. For example, FSA provided more than $400,000 in farm program payments from 1999 through 2005 to an Illinois farming operation on the basis of the ownership interest of an individual who had died in 1995. According to FSA’s records, the farming operation consisted of about 1,900 cropland acres producing mostly corn and soybeans. It was organized as a corporation with four shareholders, with the deceased individual owning a 40.3-percent interest in the entity. However, we found that the deceased individual had resided in Florida. Another member of this farming operation, who resided in Illinois and had signature authority for the operation, updated the operating plan most recently in 2004 but failed to notify FSA of the individual’s death. The farming operation therefore continued to qualify for farm program payments on behalf of the deceased individual. As noted earlier, FSA requires farming operations to certify that they will notify FSA of any change in their operation and to provide true and correct information. According to USDA regulations, failure to do so may result in forfeiture of payments and an assessment of a penalty. FSA recognized this problem in December 2006 when the children of the deceased individual contacted the FSA field office to obtain signature authority for the operation. FSA has begun proceedings to collect the improper payments. USDA recognizes that its farm programs have management control weaknesses, making them vulnerable to significant improper payments. In its FY 2006 Performance and Accountability Report to the Office of Management and Budget, USDA reported that poor management controls led to improper payments to some farmers, in part because of incorrect or missing paperwork. In addition, as part of its reporting of improper payments information, USDA identified six FSA programs susceptible to significant risk of improper payments, with estimated improper payments totaling over $2.8 billion in fiscal year 2006, as shown in table 1. 
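The computer matching described above—comparing a payment database against the Social Security Administration's Death Master File—can be sketched in a few lines. This is a minimal illustration under assumed file and field names (ssn, payment_date, death_date); it is not FSA's or GAO's actual matching methodology.

import csv
from datetime import date

def load_death_dates(path):
    # Build a lookup of SSN -> recorded date of death from a death file.
    deaths = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            deaths[row["ssn"]] = date.fromisoformat(row["death_date"])
    return deaths

def flag_post_death_payments(path, deaths):
    # Yield payments dated after the recipient's recorded date of death.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            died = deaths.get(row["ssn"])
            if died is None:
                continue
            paid = date.fromisoformat(row["payment_date"])
            if paid > died:
                yield row["ssn"], paid, died, (paid - died).days // 365

if __name__ == "__main__":
    deaths = load_death_dates("death_master_file.csv")
    for ssn, paid, died, years in flag_post_death_payments("payments.csv", deaths):
        print(f"Payment dated {paid} to SSN ending {ssn[-4:]}: "
              f"recipient died {died} ({years} full year(s) earlier)")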
Farm program payments made to deceased individuals indirectly—that is, as members of farming entities—represent a disproportionately high share of post-death payments. Specifically, payments to deceased individuals through entities accounted for $648 million—or 58 percent of the $1.1 billion in payments made to all deceased individuals from 1999 through 2005. In contrast, payments to all individuals through entities accounted for $35.6 billion—or 27 percent of the $130 billion in farm program payments FSA provided from 1999 through 2005. The complex nature of some types of farming entities, in particular corporations and general partnerships, increases the potential for improper payments. For example, a significant portion of farm program payments went to deceased individuals who were members of corporations and general partnerships. Deceased individuals identified as members of corporations and general partnerships received nearly three-quarters of the $648 million that went to deceased individuals in all entities. The remaining one-quarter of payments went to deceased individuals who were members of other types of entities, including estates, joint ventures, limited partnerships, and trusts. The deceased individuals who received farm program payments through entities were most often members of corporations and general partnerships. Specifically, of the 39,834 deceased individuals who received farm program payments through entities, about 57 percent were listed in FSA’s databases as members of corporations or general partnerships. Furthermore, of the 172,801 deceased individuals identified as receiving farm program payments, 5,081 received more than one payment because (1) they were a member of more than one entity or (2) they received payments as an individual and were a member of one or more entities. According to FSA field officials, complex farming operations, such as corporations and general partnerships with embedded entities, make it difficult for FSA to prevent making improper payments to deceased individuals. In particular, in many large farming operations, one individual often holds signature authority for the entire farming operation, which may include multiple members or entities. This individual may be the only contact FSA has with the operation; therefore, FSA cannot always know that each member of the operation is represented accurately by the signing individual, for two key reasons. First, FSA relies on the farming operation to self-certify that the information provided is accurate and that the operation will inform FSA of any operating plan changes, which would include the death of an operation’s member. Such notification would provide USDA with current information to determine the eligibility of the operation to receive the payments. Second, FSA has no management controls, such as computer matching of its payment databases with the Social Security Administration’s Death Master File, to detect when an ongoing farming operation has failed to report the death of a member. FSA has a formidable task—ensuring that billions of dollars in program payments are made only to estates and individuals that are eligible to receive them. The shortcomings we have identified underscore the need for improved oversight of federal farm programs. Such oversight can help to ensure that program funds are spent as economically, efficiently, and effectively as possible, and that they benefit those engaged in farming as intended. 
In our report, we recommended that USDA conduct all required annual estate eligibility determinations, implement management controls to verify that an individual receiving program payments has not died, and determine if improper payments have been made to deceased individuals or to entities that failed to disclose the death of a member, and if so, recover the appropriate amounts. USDA agreed with these recommendations and has already begun actions to implement them. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Lisa Shames, Director, Natural Resources and Environment, (202) 512-3841 or [email protected]. Key contributors to this testimony were James R. Jones, Jr., Assistant Director; Thomas M. Cook; and Carol Herrnstadt Shulman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Farmers receive about $20 billion annually in federal farm program payments, which go to individuals and "entities," including corporations, partnerships, and estates. Under certain conditions, estates may receive payments for the first 2 years after an individual's death. For later years, the U.S. Department of Agriculture (USDA) must determine that the estate is not being kept open primarily to receive farm program payments. This testimony is based on GAO's report, Federal Farm Programs: USDA Needs to Strengthen Controls to Prevent Improper Payments to Estates and Deceased Individuals (GAO-07-818, July 9, 2007). GAO discusses the extent to which USDA (1) follows its regulations that are intended to provide reasonable assurance that farm program payments go only to eligible estates and (2) makes improper payments to deceased individuals. USDA has made farm program payments to estates more than 2 years after recipients died, without determining, as its regulations require, whether the estates were kept open to receive these payments. As a result, USDA cannot be assured that farm payments are not going to estates kept open primarily to obtain these payments. From 1999 through 2005, USDA did not conduct any of the required eligibility determinations for 73, or 40 percent, of the 181 estates GAO reviewed. Sixteen of these 73 estates had each received more than $200,000 in farm payments, and 4 had each received more than $500,000. Only 39 of the 181 estates received all annual determinations as required. Even when USDA conducted determinations, GAO found shortcomings. For example, some USDA field offices approved groups of estates for payments without reviewing each estate individually or without a documented explanation for keeping the estate open. USDA also cannot be assured that it is not making improper payments to deceased individuals. For 1999 through 2005, USDA paid $1.1 billion in farm payments in the names of 172,801 deceased individuals (either as an individual recipient or as a member of an entity). Of this total, 40 percent went to those who had been dead for 3 or more years, and 19 percent to those dead for 7 or more years. Most of these payments were made to deceased individuals indirectly (i.e., as members of farming entities). For example, over one-half of the $1.1 billion in payments went through entities from 1999 through 2005. In one case, USDA paid a member of an entity--deceased since 1995--over $400,000 in payments for 1999 through 2005. USDA relies on a farming operation's self-certification that the information it provides USDA is accurate; operations are also required to notify USDA of any changes, such as the death of a member. Such notification would provide USDA with current information to determine the eligibility of the operation to receive payments. The complex nature of some farming operations--such as entities embedded within other entities--can make it difficult for USDA to avoid making payments to deceased individuals.
Following the terrorist attacks on September 11, 2001, the President signed a major disaster declaration for the state of New York under the authority of the Stafford Act. The presidential declaration allowed the state of New York to apply for federal assistance to help recover from the disaster. FEMA was responsible for coordinating the federal response to the September 11 terrorist attacks and providing assistance through a variety of programs, including the CCP. The CCP was authorized in section 416 of the Stafford Act to help alleviate the psychological distress caused or aggravated by disasters declared eligible for federal assistance by the President. Through the CCP, FEMA released federal grant awards to supplement the state of New York’s ability to respond to the psychological distress caused by the September 11 terrorist attacks by providing short-term crisis counseling services to victims and training for crisis counselors. FEMA relied on SAMHSA to provide expertise related to crisis counseling and public education for Project Liberty. FEMA assigns SAMHSA its responsibilities for the CCP through an annual interagency agreement. For Project Liberty, these responsibilities included, among other things, providing technical assistance, monitoring the progress of programs conducted under the CCP, and performing program oversight. Within SAMHSA, the Center for Mental Health Services (CMHS) carried out these responsibilities for Project Liberty. CMHS received support from SAMHSA’s Division of Grants Management, which provides grant oversight. The New York State Office of Mental Health (NYS OMH) established Project Liberty under the CCP to offer crisis counseling and public education services throughout the five boroughs of New York City and 10 surrounding counties free of charge to anyone affected by the World Trade Center disaster and its aftermath. The areas served by Project Liberty are shown in the shaded areas of figure 1. The state of New York’s primary role was to administer, oversee, and guide Project Liberty’s program design, implementation, and evaluation, and to pay service providers, but not to provide services itself. New York City and the surrounding counties contracted with over 200 service providers and were responsible for overseeing day-to-day activities. Figure 2 shows the organizational structure of Project Liberty at the federal, state, and local levels. Under the CCP, Project Liberty’s goal was to serve New York City and the 10 surrounding counties by assisting those affected by the September 11 terrorist attacks to recover from their psychological reactions and regain their predisaster level of functioning. The CCP supports services that are short-term interventions with individuals and groups experiencing psychological reactions to a presidentially declared disaster and its aftermath. Crisis counseling services were primarily delivered to disaster survivors through outreach, or face-to-face contact with survivors in familiar settings (e.g., neighborhoods, churches, community centers, and schools). Although the CCP does not support long-term, formal mental health services such as medications, office-based therapy, diagnostic services, psychiatric treatment, or substance abuse treatment, FEMA approved an enhanced services program for Project Liberty. This enhanced services program allowed for an expansion of services, including enhanced screening methods; a broader array of brief counseling approaches; and additional training, technical assistance, and supervision to a set of service providers. 
These enhanced services were intended to address the needs of individuals who continued to experience trauma symptoms and functional impairment after initial crisis counseling but did not need long-term mental health services. Project Liberty was funded through two separate, but related, grant programs: the ISP and RSP. The ISP grant was designed to fund Project Liberty for the first 60 days following the disaster declaration. Because there was a continuing need for crisis counseling services, the ISP was extended to last about 9 months, until the RSP began. The RSP grant was designed to provide funding for an additional 9 months of crisis counseling services but was extended to last about 2½ years. Figure 3 shows key milestones for Project Liberty. For the approved ISP application, FEMA made funds available directly to the state. Under the RSP, after approval, funds were transferred from FEMA to SAMHSA, which awarded the grant to the state of New York through SAMHSA’s grants management process. The state of New York, in turn, disbursed funds to the service providers and local governments through the Research Foundation for Mental Hygiene, Inc. (RFMH), a not-for-profit corporation affiliated with the state of New York that assists with financial management of federal and other grants awarded to NYS OMH. Figure 4 shows the flow of funds for Project Liberty’s ISP and RSP. Service providers were required to submit claims and supporting documentation to receive reimbursement for expenses incurred to provide services. As shown in figure 5, these claims were to have multiple levels of review to determine whether the expenses claimed were allowable under the CCP’s fiscal guidelines. This review structure, which placed primary responsibility for reviewing claims on the local government units, was based on the state of New York’s existing grant management policies. Additional controls for Project Liberty included (1) NYS OMH site visits to service providers in New York City and surrounding counties; (2) closeout audits by independent auditors of certain New York City service providers to test whether claims were documented and allowable; and (3) annual audits of New York City and surrounding counties conducted under the Single Audit Act, which requires independent auditors to provide an opinion on whether the financial statements are fairly presented, a report on internal control related to the major programs, and a report on compliance with key laws, regulations, and the provisions of the grant agreements. Our publication, Standards for Internal Control in the Federal Government, provides a road map for entities to establish control for all aspects of their operations and a basis against which entities can evaluate their control structures. The five components of internal control are as follows: Control environment. Creating a culture of accountability within the entire organization—program offices, financial services, and regional offices—by establishing a positive and supportive attitude toward the achievement of established program outcomes. Risk assessment. Identifying and analyzing relevant problems that might prevent the program from achieving its objectives. Developing processes that can be used to form a basis for measuring actual or potential effects of these problems and manage their risks. Control activities. 
Establishing and implementing oversight processes to address risk areas and help ensure that management’s decisions— especially about how to measure and manage risks—are carried out and program objectives are met. Information and communication. Using and sharing relevant, reliable, and timely information on program-specific and general financial risks. Such information surfaces as a result of the processes—or control activities—used to measure and address risks. Monitoring. Tracking improvement initiatives over time and identifying additional actions needed to further improve program efficiency and effectiveness. SAMHSA and FEMA were responsible for providing oversight to ensure that the state of New York had a reasonable level of controls in place. Although FEMA retained responsibility for providing leadership and direction for Project Liberty, it assigned primary responsibility to SAMHSA for oversight and monitoring through an interagency agreement. Approximately $121 million, more than three-quarters of the $154.9 million in federal funds provided to Project Liberty, was reported as expended as of September 30, 2004, leaving a remaining balance of $33.9 million. About $32 million of the $33.9 million pertains to unresolved expense claims of the New York City Department of Education (NYC DOEd). According to NYS OMH, NYC DOEd had not been reimbursed for the Project Liberty expenses it incurred throughout the program because NYC DOEd had not been able to provide support for these expenses that met the CCP documentation standards for reimbursement under federal grants. NYS OMH began considering alternative indirect forms of evidence, including internal control summary memos prepared by NYC DOEd, to begin paying NYC DOEd’s expense claims. As of March 31, 2005, NYS OMH had accepted alternative forms of supporting evidence to pay $5.2 million of NYC DOEd expense claims; however, this type of alternative evidence provides only limited assurance of the propriety of the claimed amounts. NYS OMH was not sure when and how the remaining NYC DOEd expense claims would be resolved. For the period September 11, 2001, through September 30, 2004, Project Liberty reported that it had expended all of the $22.8 million ISP grant and about $98.2 million of the $132.1 million RSP grant, for total reported expenditures of approximately $121 million, leaving a remaining balance of $33.9 million. Although crisis counseling services had been phased out as of December 31, 2004, Project Liberty will continue to use the remaining grant funds to process claims for reimbursement of program-related expenses incurred through December 31, 2004, and to cover administrative expenses during the closeout period, which, at the end of our fieldwork, was scheduled to end on May 30, 2005. Table 1 and figure 6 show the timing and amount of expenditures reported by Project Liberty for the ISP and RSP grants by quarter through September 30, 2004, compared to the total CCP grant awards for Project Liberty. According to NYS OMH officials, the expenditures reported by Project Liberty from September 11, 2001, through September 30, 2004, included expenses incurred as well as amounts advanced to service providers. During the RSP, Project Liberty made advances to 109 service providers, for a total of about $25.8 million. As of September 30, 2004, the outstanding advance balance was $5.8 million; however, according to an NYS OMH official, the balance had been reduced to $1.2 million as of March 31, 2005. 
The vast majority of remaining Project Liberty funds related to unresolved expense claims of NYC DOEd. As of March 31, 2005, NYS OMH officials told us that NYC DOEd had submitted claims for a portion of the $32 million that was budgeted to NYC DOEd to provide crisis counseling services to New York City school children, and planned to ultimately submit claims for the full amount. NYS OMH and the New York City Department of Health and Mental Hygiene (NYC DOHMH) had not approved the majority of NYC DOEd claims for reimbursement incurred during the RSP because NYC DOEd had not provided support for these expenses that met the CCP documentation standards for reimbursement under federal grants. These standards require that the expenditure of grant funds be supported by detailed documentation, such as canceled checks, paid bills, time and attendance records, and contract and subgrant award documents. According to NYC DOEd officials, they could not meet the documentation standards established by NYS OMH because (1) NYC DOEd reorganized on July 1, 2003, which coincided with the delivery of crisis counseling services under the Project Liberty grant, resulting in significant loss of staff and institutional knowledge and therefore a lost or diminished ability to retrieve supporting documentation, and (2) NYC DOEd’s complex financial systems cannot produce the type of transaction-specific documentation required by NYS OMH, which makes the process of retrieving supporting documentation unwieldy and administratively burdensome. A SAMHSA official told us SAMHSA was aware of issues involving the supporting documentation for the NYC DOEd expense claims; however, because officials viewed it as a grantee issue, they had limited involvement with NYS OMH’s efforts to resolve these issues. NYS OMH decided to consider alternative evidence, including supplemental supporting documentation in the form of internal control summary memos prepared by NYC DOEd that describe the controls over payments for personnel, other-than-personnel, and community-based organization expenses. Personnel expenses include NYC DOEd workers, while the other-than-personnel expenses include other costs incurred directly by NYC DOEd. The community-based organization expenses are those incurred by other service providers on behalf of NYC DOEd. Although NYC DOEd’s Chief Financial Officer has signed an attestation stating that the controls described in the summary memos for personnel and other-than-personnel expenses were in place and working during Project Liberty, the level of assurance provided by these internal control summary memos is limited for several reasons. First and foremost, the memos do not provide the type of supporting documentation necessary to verify the validity of the claimed expenses as required by the federal documentation standards. Second, the memos are not certified by an external source, such as an independent auditor. Third, the memos were prepared solely to support NYC DOEd’s Project Liberty expenses and may not represent written policies and procedures that existed during the time the claimed expenditures were incurred. Finally, the memos were prepared toward the end of the program by officials who did not, in all cases, have firsthand knowledge of the controls that existed during the program. 
As of March 31, 2005, NYS OMH and NYC DOHMH had reviewed and accepted internal control summary memos that describe the controls over payments for personnel and other-than-personnel expenses, and NYS OMH had used these memos and other alternative forms of evidence to reimburse NYC DOEd for $5.2 million in expense claims. These other forms of evidence included observations of services being provided during site visits, the existence of encounter logs evidencing that some services had been provided, and general familiarity with service providers. NYS OMH officials were not sure when they would complete the review of the memo covering the controls over payments for community-based organization expenses and how this memo, along with other alternative forms of evidence, would be used to resolve the remaining $26.8 million in NYC DOEd expense claims. As part of its approval process for expense claims, NYS OMH relied upon NYC DOHMH to certify that the claims submitted were valid and met the CCP documentation requirements. However, because NYC DOEd did not provide the required supporting documentation, NYC DOHMH could not perform the same level of review as it did for the claims of the other Project Liberty service providers. Further, although NYC DOHMH contracted with independent auditors to perform audits of expense claims of certain service providers for Project Liberty, there were no audits performed of NYC DOEd claims, which are expected to total approximately $32 million. At the end of our audit fieldwork, it was not clear when and how the remaining expense claims would be resolved. However, if the internal control summary memos and other alternative evidence continue to be the primary supporting documentation for $32 million in NYC DOEd expense claims, the federal government will have only limited assurance that these payments are an appropriate use of Project Liberty grant funds. FEMA’s process for determining funding is designed to be implemented quickly after a state requests federal assistance to recover from a presidentially declared disaster. The state of New York’s grant applications for Project Liberty were developed during the initial dynamic stages of the recovery effort when damage reports and response plans were subject to frequent change. The budgets submitted with the grant applications were revised by the grantee to satisfy certain conditions of grant award. However, we found that although the budgets were developed using estimates established during the initial stages of the disaster, FEMA and SAMHSA never required the state of New York to formally submit revised budget requests to reflect new information and significant changes to the program that occurred as the needs of the affected population became better identified. As a result, FEMA and SAMHSA did not have realistic budget information that could be used to effectively assess how responsible city and state officials planned to spend Project Liberty grant funds. The grant applications that the state of New York submitted to FEMA for Project Liberty were prepared with assistance from FEMA and SAMHSA and included a needs assessment, plan of services, and budget. The needs assessment, which was based on a formula developed by SAMHSA, was the state’s estimate of the number of people who would need crisis counseling. 
The plan of services described the state’s plan for treating the identified population, including segments of the population needing special services or outreach methods such as counseling and training in various languages. The budget was developed based on the estimated cost to treat the population identified in the needs assessment through the program outlined in the plan of services. FEMA and SAMHSA provided the state of New York the flexibility to submit grant applications that reflected its identified and estimated needs, which were based on information available at the time. In preparing the budget, the state of New York relied on SAMHSA’s Budget Estimating and Reporting Tool, which was designed to assist states in developing budgets consistent with FEMA guidelines. The state of New York took two different approaches in constructing the ISP and RSP budgets for Project Liberty. The ISP budget used estimates of administrative costs and a simple direct services cost calculation. The direct services costs were based on the estimated number of people needing crisis counseling services, the estimated average length of treatment each person would need, the estimated hourly rate for crisis counselors, and the estimated length of the ISP. The RSP budget, on the other hand, was prepared by the state of New York based on estimates provided by NYS OMH, each of the New York City boroughs, and the 10 surrounding counties eligible for CCP grant funding. Once the state of New York submitted its ISP and RSP grant applications, FEMA had processes in place to review and approve them. Although the processes differed, both shared common elements. The first step for both applications was a technical review conducted by the FEMA regional office with jurisdiction over the state of New York to ensure that the applications had a direct link to the September 11 terrorist attacks. Once this technical review was completed, the applications were sent to FEMA headquarters and to SAMHSA for review and comment. In addition, the RSP was reviewed by a panel of mental health professionals who had experience with CCP grants. The ISP and RSP review processes also differed in that FEMA’s regional office had final decision authority for the ISP application while FEMA headquarters had final decision authority for the RSP application. Figure 7 shows the application processes for the ISP and RSP. After the reviews conducted by FEMA and SAMHSA were completed, FEMA awarded the state of New York $22.7 million for the ISP on September 24, 2001, with subsequent amendments bringing the ISP total to $22.8 million. In addition, FEMA awarded the state of New York $132.1 million for the RSP immediately after the ISP ended on June 14, 2002. Because FEMA’s process for determining funding is designed to be implemented quickly after presidential disaster declarations and official loss numbers were not known at the time the Project Liberty applications were prepared, the state of New York used estimates of the number of people who would need crisis counseling services, the length of the program, and the services that would be provided. However, FEMA and SAMHSA never required the budgets to be modified to reflect new information or significant changes to the program. The estimates used by the state of New York to develop its initial needs assessment, or number of people it believed would need crisis counseling services, included several risk factors and loss categories. 
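The needs assessment arithmetic can be sketched briefly: the estimated number of people needing crisis counseling is the sum, over loss categories, of the number of people in each category times that category's risk factor. The categories and factors below are hypothetical placeholders, not the actual inputs used by the state of New York.

# Hypothetical loss categories: (people affected, fraction expected to
# need crisis counseling). These are placeholders for illustration only.
loss_categories = {
    "bereaved family members": (15_000, 0.90),
    "displaced residents": (25_000, 0.50),
    "affected workers": (400_000, 0.30),
}

# Estimated need is the risk-weighted sum across all loss categories.
estimated_need = sum(count * risk for count, risk in loss_categories.values())
print(f"Estimated people needing crisis counseling: {estimated_need:,.0f}")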
In keeping with existing CCP policy, FEMA and SAMHSA encouraged the state of New York to modify the needs assessment formula by adjusting the loss categories of affected persons and the risk factors for each of those loss categories to better reflect the situation in New York. The state of New York also estimated the number of direct victims in each loss category because official numbers were not available. For example, the official number of deaths was not known until more than 2 years after the disaster. As a result of these estimates, the needs assessment for the state of New York’s ISP application determined that 2.3 million people would need crisis counseling services as a result of the terrorist attacks on September 11, 2001. With the RSP application, the needs assessment formula was modified to estimate the pervasive reactions to the disaster and to update the loss category numbers, such as the number of people dead, missing, or hospitalized. These modifications increased the estimate of the people who would need crisis counseling to 3.4 million. We found that based on the approved budgets for the ISP and RSP, Project Liberty estimated that it would need $154.9 million to provide crisis counseling and public education to the estimated 3.4 million people and training for Project Liberty staff who would be delivering these services, at a cost of approximately $46 per person. In its report for the period ending September 30, 2004, Project Liberty estimated that it had provided crisis counseling to 1.5 million people at a cost of $121 million, or approximately $83 per person. Another estimate used in preparing the grant applications was the length of time needed to carry out the services identified in the plan of services. The state of New York used the maximum length of service provision allowed by FEMA regulations in its ISP and RSP applications, 60 days and 9 months, respectively. However, crisis counseling services were actually provided for approximately 9 months under the ISP and over 30 months under the RSP. In addition, the state of New York initially understood that crisis counseling and public education services offered by Project Liberty would be limited to the services normally allowed by the CCP, such as short-term individual and group crisis counseling, community outreach, and training for crisis counselors. However, in August 2002, Project Liberty was authorized to adjust the program to include enhanced services and began providing these services in May 2003. Other significant changes, which were not reflected in Project Liberty’s budget, included a reallocation from New York City’s budget to NYC DOEd, which increased NYC DOEd’s budget from $8.9 million to $40 million, and subsequent reductions of that budget to $32 million. Despite these major changes in the program, FEMA and SAMHSA did not require, and Project Liberty did not prepare, adjusted budgets to reflect the revised plans for meeting the needs of the victims of September 11. Therefore, New York State and City officials did not have realistic budget information to use as a tool to manage program funds, and FEMA and SAMHSA were not in a position to effectively assess the planned use of the funds. While SAMHSA provided oversight of Project Liberty’s delivery of services, it provided only limited oversight of financial information reported by Project Liberty about the cost of those services. 
SAMHSA received periodic financial reports but did not perform basic analyses of expenditures to obtain a specific understanding of how Project Liberty was using federal funds. In addition, as discussed above, budget information was outdated and therefore an ineffective tool to monitor actual expenditures. SAMHSA’s limited level of oversight over Project Liberty’s financial information was driven in part by its assessment that the program was not high risk, but this assessment did not fully consider the magnitude, complexity, and unique nature of the program and was not revisited even after significant program changes occurred. As a result, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that funds were used efficiently and effectively in addressing the needs of those affected by the September 11 terrorist attacks. SAMHSA’s oversight for Project Liberty included review of service delivery information and identification of unusual items included in Project Liberty’s program reports, eight site visits, and routine communication with NYS OMH and FEMA. These oversight activities helped SAMHSA gain assurance that NYS OMH was delivering appropriate services. However, SAMHSA’s oversight of these services did not directly link with, and therefore did not provide assurance related to, financial information reported by Project Liberty. In addition to requiring Project Liberty to submit budgets to show how it planned to use federal funds, FEMA regulations also required Project Liberty to periodically submit financial reports to show how funds were actually spent. Required financial reports included quarterly expenditure reports, a final accounting of funds, and a final voucher. SAMHSA officials told us they did some high-level review of the financial information provided to determine how quickly the program was using grant funds and when the grant funds should be made available to NYS OMH. However, they did not perform basic analyses of expenditures to obtain a specific understanding of how Project Liberty was using federal funds. We found that SAMHSA did not use financial information submitted by Project Liberty to conduct basic analytical reviews of how funds were being spent and whether this spending was consistent with the budgeted program expenditures. Table 2 illustrates a basic analysis we performed of Project Liberty’s reported and budgeted expenditures for the period June 14, 2002, through September 30, 2004, which identified significant differences by category between reported expenditures and budget. Notwithstanding the fact that budgets were not updated for major program changes, several of these differences should have raised questions about whether Project Liberty was using federal funds within allowable categories and within its approved budget. For example, the Project Liberty personnel budget was over $93 million; however, as of September 30, 2004, over 26 months into the program that was initially planned for completion in 9 months, it had reported personnel expenditures of only about $26 million, for a difference of about $67 million. A SAMHSA official said that because of the way Project Liberty reported its expenditures, SAMHSA officials could not track its financial reports to its budget. As a result, we found that SAMHSA was not aware of the significant variations between Project Liberty’s reported expenditures and budget and did not make inquiries of Project Liberty officials to obtain an understanding of why these variations were occurring. 
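The kind of basic budget-to-actual analysis described above can be sketched as follows. Apart from the personnel figures cited in the text, the categories, amounts, and 25 percent variance threshold are hypothetical illustrations, not Project Liberty's actual data.

# Compare budgeted and reported expenditures by category and flag large
# variances. Only the personnel figures come from the text; the rest are
# hypothetical examples.
budget = {"personnel": 93_000_000, "travel": 2_500_000, "evaluation": 4_000_000}
reported = {"personnel": 26_000_000, "travel": 3_100_000, "evaluation": 1_200_000}

def variance_report(budget, reported, threshold=0.25):
    # Print categories whose reported spending deviates from budget by
    # more than the given fraction of the budgeted amount.
    for category in sorted(set(budget) | set(reported)):
        planned = budget.get(category, 0)
        actual = reported.get(category, 0)
        diff = actual - planned
        if planned and abs(diff) / planned > threshold:
            print(f"{category}: budgeted ${planned:,}, reported ${actual:,}, "
                  f"variance {diff / planned:+.0%}")

variance_report(budget, reported)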
Some of the differences between reported and budgeted expenditures may have resulted from the fact that Project Liberty was not required to formally adjust the initial program budget to reflect significant changes. However, the differences may also have raised questions about whether SAMHSA’s understanding of how the program was planning to spend funds was consistent with actual spending patterns. Comparisons between Project Liberty’s reported budget and expenditures could have helped SAMHSA better assess the status of the program to allow it to take effective action to ensure that Project Liberty was using federal funds to provide the most value for victims of the September 11 terrorist attacks. The differences between Project Liberty’s reported budget and expenditures may also have been caused by inconsistencies in financial information submitted by Project Liberty. FEMA and SAMHSA did not provide detailed guidance on how to classify CCP expenditures but instead left Project Liberty to interpret how expenditures should be classified. We found that Project Liberty expenditures were not always consistently reported to FEMA and SAMHSA. For example, Project Liberty did not consistently classify evaluation expenditures. If an NYS OMH employee was evaluating the program, the expenditure was classified as personnel, but if the work was contracted to someone outside of NYS OMH, the expenditure was classified as evaluation. As a result, SAMHSA could not reliably use Project Liberty’s financial reports to determine how much it cost to evaluate the program. By obtaining a better understanding of how federal funds were spent by Project Liberty, SAMHSA would have improved its ability to determine whether funds were used most efficiently and effectively in carrying out the objectives of the program. SAMHSA’s limited oversight of Project Liberty’s financial information was driven in part by its own assessment that the program was not high risk. SAMHSA’s oversight of Project Liberty included an initial assessment of the risk associated with the grantee. SAMHSA applied risk factors identified in HHS regulations regarding grants, including financial management issues, such as financial stability and experience in handling federal grants, to RFMH, the fiscal agent for NYS OMH responsible for making payments to service providers. For example, SAMHSA reviewed the result of RFMH’s fiscal year 2001 financial audit that was required by the Single Audit Act and found that RFMH received an unqualified audit opinion while handling a total of about $62 million in federal funds. SAMHSA concluded that RFMH had a strong track record for handling federal funds and classified RFMH as not high risk. Based in part on this risk assessment, SAMHSA officials told us staff with financial backgrounds were not actively involved in the oversight of Project Liberty. However, we found that SAMHSA’s risk assessment only considered risks associated with RFMH and did not consider other potential risks associated with the Project Liberty grant. For example, the assessment did not consider all significant interactions in the complex federal, state, and local government environment that existed for Project Liberty; the amount of the RSP grant award, which was the largest RSP grant ever made by FEMA; or the geographic complexities of the program, including the size of the area affected and the diversity of the community being served. 
In addition, SAMHSA did not revisit its initial risk assessment even after the program encountered significant changes and challenges, including the design of the first-ever enhanced services program and the documentation issues with NYC DOEd expenses, which have yet to be resolved. As a result, SAMHSA’s level of oversight was not in line with the challenges and complexities that increased the risks associated with Project Liberty. Based in part on its risk assessment process, SAMHSA’s oversight of Project Liberty was primarily carried out by its programmatic staff, who focused on activities that did not directly link to the financial information being reported by NYS OMH. Without useful financial information, including updated budgets, and without analyses of the financial information Project Liberty was reporting, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that grant funds were effectively used to address the needs of those affected by the September 11 terrorist attacks. Both the state of New York and the federal government have taken steps to assess how Project Liberty delivered services. NYS OMH is conducting its own assessment of Project Liberty and partnered with the New York Academy of Medicine (NYAM) to obtain information from telephone surveys. SAMHSA contracted with the National Center for Post-Traumatic Stress Disorder (NCPTSD), a center within the Department of Veterans Affairs, to conduct a case study of New York’s response to the terrorist attacks on September 11, with a primary focus on Project Liberty. Both NYS OMH’s and NCPTSD’s overall assessments of the program were ongoing as of March 2005. FEMA plans to consider lessons learned from NYS OMH and NCPTSD when conducting its own internal review of the CCP. NYS OMH is conducting an evaluation of Project Liberty, for which the grant includes designated funding. This nonstatistical evaluation consists of several components, including analysis of data collected by service providers documenting services delivered through encounter data forms, recipient feedback through written questionnaires and telephone surveys, provider feedback through written reports and staff surveys, and other initiatives. The data collected by service providers were the primary source used to assess the services delivered by the program. Based on these data, NYS OMH preliminarily found that Project Liberty had reached a large number of people affected by the September 11 terrorist attacks and that it was successful in reaching many diverse communities. NYS OMH reported that 95 percent of providers who responded to its surveys rated the overall quality of services provided as good or excellent. NYS OMH also reported that the majority of respondents to its recipient surveys indicated that they have returned to their predisaster mental health condition, a goal of Project Liberty. However, according to NYS OMH, the recipient surveys were made available beginning in July 2003 to organizations providing crisis counseling for distribution to individuals receiving services and therefore may not be representative of all Project Liberty recipients. NYS OMH did not report the number of providers who received surveys, and it reported a low response rate for the recipient surveys. Because the number of surveys offered to providers was not disclosed and because of the low response rates for the recipient surveys, we were unable to determine the level of coverage provided by these surveys. 
NYS OMH also partnered with NYAM, a not-for-profit organization dedicated to enhancing public health through research, education, public policy, and advocacy. NYAM conducted nonstatistical telephone surveys in 2001 and 2002 of New Yorkers to assess the magnitude and duration of the mental health effects of the terrorist attacks. NYS OMH worked with NYAM to assess the reach and recognition of Project Liberty by adding questions to NYAM's ongoing September 11 telephone surveys. NYAM and NYS OMH reported that 24 percent of the respondents interviewed were aware of Project Liberty and, among respondents who had heard of the program, 67 percent had a good impression of the program. However, because the sampling methodology for the NYAM phone surveys was not disclosed and because of low response rates, we were unable to determine the survey coverage. Based on these evaluation activities, as well as their own experience with Project Liberty, NYS OMH officials have begun to identify lessons to be learned. For example, they found that emergency mental health plans and resources in place prior to September 11 were insufficient to fully respond to the mental health impact of the terrorist attacks. Much of the infrastructure needed to implement Project Liberty, such as data collection procedures and public education materials, had to be developed in the immediate aftermath of the terrorist attacks. In addition, NYS OMH found that the services covered by the CCP were not sufficient to meet the mental health needs of the minority of individuals who developed severe and persistent symptoms that substantially interfered with day-to-day functioning. Although the state of New York was given permission to develop and implement an enhanced services program to meet the needs of the more severely affected individuals, similar intensive interventions are not currently routinely included as part of the FEMA CCP. NYS OMH officials told us that when their evaluation is completed, they expect that they will have comprehensively identified best practices and obstacles encountered and that they will make recommendations to FEMA and SAMHSA for actions needed to better organize a mental health response to future disasters funded by the CCP. SAMHSA entered into an interagency agreement with NCPTSD, a center within the Department of Veterans Affairs, to conduct a case study of New York’s response to the September 11 terrorist attacks. The primary purpose of the NCPTSD case study was to identify lessons to be learned from New York’s experience that could be useful to other communities that might have to respond to major disasters, including acts of terrorism. As part of its study, NCPTSD interviewed 103 individuals, including service providers and management from 50 public and private provider organizations in New York City and the surrounding counties. NCPTSD used a qualitative methodology to analyze the data to develop findings and recommendations. According to SAMHSA, the NCPTSD report is expected to be issued as soon as all stakeholders’ comments have been received and considered. FEMA officials told us they plan to consider lessons learned from the NCPTSD and NYS OMH assessments of Project Liberty through FEMA’s internal review of the CCP that was ongoing as of March 2005. This internal review is being conducted in partnership with SAMHSA and, according to FEMA, will consider whether aspects of FEMA’s CCP, including its regulations and guidance, need to be improved. 
For example, FEMA plans to work with SAMHSA to consider the extent to which the enhanced services should be included as a permanent part of the CCP. FEMA officials told us that the internal review will have to be completed in conjunction with their primary work responding to disasters; therefore, they have not established a timetable to complete this review. Given that Project Liberty was awarded the largest RSP grant in the history of FEMA’s CCP and that FEMA provided funding to the state of New York to evaluate Project Liberty, the timely assessment of lessons learned from this program would be beneficial to future CCPs. FEMA and SAMHSA’s limited oversight of the planned and actual spending of Project Liberty impeded their ability to monitor whether the grant funds were being used in the most efficient and effective way to meet the needs of those affected by the terrorist attacks of September 11, 2001. Further, until recently, FEMA and SAMHSA had limited involvement in efforts to resolve issues surrounding the outstanding NYC DOEd expense claims; additional oversight in this area could help bring appropriate and timely resolution to these issues. FEMA will have an opportunity to address these oversight issues, as well as lessons learned identified by NYS OMH and NCPTSD, as part of its ongoing internal review of its CCP. In order to address the issues identified in our report, we recommend that the Secretary of Homeland Security direct the Under Secretary of Emergency Preparedness and Response to take the following eight actions: To help ensure proper and timely expenditure of the remaining Project Liberty funds, FEMA should work with SAMHSA to provide assistance to New York City and State officials in appropriately resolving issues surrounding the NYC DOEd expense reimbursements and determine whether an independent review of the propriety of the use of funds for payments to the NYC DOEd is needed. To strengthen federal financial oversight of future CCP grants, FEMA should work with SAMHSA to require the recipients of CCP grants to submit updated budgets to reflect approved changes to the program; revise the current risk assessment process to comprehensively identify and assess risks associated with CCP grants; establish a process to update the risk assessment for significant program changes; consider developing formal requirements for consistent classification of expense data; and develop formal procedures to perform more detailed analyses of financial reports, including comparing actual expenditures and budgets to identify variations and obtain an understanding of the reasons for any unusual variations. To help ensure that the lessons learned from Project Liberty will be used to help improve future programs funded by the CCP, FEMA should establish a clear time frame to complete its internal review of the CCP as expeditiously as possible. We received written comments on a draft of this report from DHS, which generally concurred with our recommendations but expressed reservations regarding our assessment of the adequacy of FEMA and SAMHSA oversight. SAMHSA, in a separate comment letter to FEMA, did not object to our recommendations but did take issue with our assessment of its oversight, particularly given the unprecedented circumstances that led to the establishment of Project Liberty. SAMHSA also provided additional information on the NYC DOEd claim issue. DHS’s comment letter (reprinted in appendix II) incorporates by reference SAMHSA’s letter (reprinted in appendix III). 
We also received technical comments from NYS OMH, NYC DOHMH, and NYC DOEd on excerpts of the report, which we incorporated as appropriate. DHS stated that our report should give more weight to the unprecedented conditions that led to Project Liberty, and that it was these unique circumstances that led to our findings and were the basis for our recommendations. It further stated that our recommendations primarily relate to the use of grant funds by NYC DOEd, and that no similar issues were identified with respect to the use of funds by other program subgrantees. Our report clearly acknowledges the unique and unprecedented circumstances that led to the establishment of Project Liberty. These unique circumstances were largely the basis for our conclusion that Project Liberty required a high level of federal oversight. A number of red flags signaled the need for a heightened federal role, including the following: the RSP grant was the largest such grant ever made by FEMA; the program, initially designed to last about a year, is now over 3½ years old and still ongoing, with an extension to September 30, 2005, under consideration; and reimbursements for approximately $32 million, representing over 20 percent of the total federal funds awarded, remained unresolved as of May 2005. The fact that the level of federal oversight was not commensurate with the unprecedented circumstances surrounding Project Liberty was what led us to our findings and recommendations in this area. Five of our eight recommendations relate to strengthening federal financial oversight of future CCP grants; two of the recommendations specifically address the NYC DOEd use of grant funds; and the remaining recommendation calls for ensuring that lessons learned from Project Liberty will be used to improve future programs funded by the CCP. Thus, DHS was incorrect in stating that our recommendations primarily relate to the NYC DOEd issues. As to FEMA’s statement that we did not identify any other issues about the use of grant funds, the scope of our work included determining the extent to which Project Liberty expended grant funds and whether the federal government had adequate financial oversight of Project Liberty. Our work did not address whether payments made, including those made by NYC DOEd, were a valid use of federal resources. We have no basis for reaching the conclusion suggested by FEMA. SAMHSA’s letter also discussed the unprecedented conditions surrounding Project Liberty and, as discussed below, strongly disagreed with our assessment that SAMHSA’s financial oversight was limited. SAMHSA also stated that during a recent site visit, NYC DOEd’s Chief Financial Officer indicated that documentation was available to support claims as necessary. SAMHSA further stated that it will be recommending that NYS OMH conduct an independent audit of these claims as one of the conditions of approving an extension of the grant to September 30, 2005. During our fieldwork, we were consistently told by NYC DOHMH officials that NYC DOEd had not been able to produce documentation for the majority of expenses it incurred on behalf of Project Liberty that met the documentation standards for reimbursement under federal grants. Given ongoing questions about the existence of documentation supporting NYC DOEd claims, the sufficiency of this documentation, or both, we agree that an independent audit is appropriate. SAMHSA stated that overall, the federal oversight of Project Liberty was appropriate, reasonable, and responsive to state and local needs. 
SAMHSA outlined several factors to support this statement, including that it received financial data from the state of New York on a routine basis and monitored the allocability, allowability, and reasonableness of project expenditures. While SAMHSA acknowledged that project budget data were not updated comprehensively, it stated that it did request and receive updated budget information in several instances, particularly in conjunction with project extensions. SAMHSA further stated that we had not cited any problems with program expenditures but instead seemed to focus on differences between classification of budgeted and reported expenditures. SAMHSA acknowledged that the budget was not prepared in the same format as reported expenditures, and stated that inconsistent categorization of expense accounts was largely the reason for the classification discrepancies we highlighted in our report. Our conclusion that SAMHSA's oversight of Project Liberty's financial information was limited was based in large part on the fact that SAMHSA did not have a basis to reliably monitor how Project Liberty was using federal funds, since, as SAMHSA acknowledged, it did not have updated budget information and the reported expenditure data were not accurate due to classification discrepancies. At the time of our review, SAMHSA was not aware of these discrepancies because it had not been conducting basic analyses, such as comparisons between Project Liberty's budgeted and reported expenses. Further, no staff with financial backgrounds were involved with the oversight of Project Liberty expenditures. SAMHSA's limited oversight was based in part on the fact that it did not deem the project high risk from a financial standpoint, despite the complex federal, state, and local environment; the fact that this was by far the largest RSP grant ever awarded by FEMA; the size and diversity of the community being served; and the overall challenging and changing circumstances of September 11. Overall, we found that SAMHSA's level of oversight was not in line with these challenges and complexities associated with Project Liberty. SAMHSA went on to state that, in its opinion, classification of Project Liberty as high risk simply based on total estimated project expenditures would be inappropriate. It further noted that there is no regulatory mechanism allowing SAMHSA to assess risk based on a complex federal, state, and local government environment, as described in our report. We did not suggest that the risk classification should be based simply on total estimated project expenditures. As discussed above, our report clearly delineates a number of different risk factors that should have been considered in the risk classification. Further, we disagree that the current regulatory mechanism would not allow SAMHSA to consider these risk factors in making its risk assessment of Project Liberty. While current regulations do not require SAMHSA to consider programmatic factors in its risk assessment, they do not prevent SAMHSA from considering risk factors other than those delineated in its regulations in its overall assessment of the program and its operations. As noted by SAMHSA in its written comments, Project Liberty was by far the largest and most complex effort in the 30-year history of the CCP and presented unique and unprecedented challenges for government authorities at all levels.
We believe these challenges should have been key factors in SAMHSA's risk assessment and should have triggered heightened financial oversight of Project Liberty. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to appropriate House and Senate committees; the Secretary of Homeland Security; the Under Secretary of Homeland Security for Emergency Preparedness and Response; the Administrator, Substance Abuse and Mental Health Services Administration; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8341 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV.
To determine the extent to which Project Liberty spent the Immediate Services Program (ISP) and Regular Services Program (RSP) grant funds received from the Federal Emergency Management Agency (FEMA), we did the following:
Reviewed various documents, including quarterly RSP expenditure reports for the first (June 15, 2002, through September 14, 2002) through the ninth (June 15, 2004, through September 30, 2004) quarters; a detailed listing of the outstanding advance balances as of September 30, 2004, obtained from the Research Foundation for Mental Hygiene, Inc. (RFMH); a summary of expense claims submitted by the New York City Department of Education (NYC DOEd) as of March 2005; internal control summaries prepared by NYC DOEd for its personnel and other-than-personnel expenses; a draft internal control summary prepared by NYC DOEd for its community-based expenses; Crisis Counseling Assistance and Training Program (CCP) guidance on appropriate uses of grant funds; and FEMA and Department of Health and Human Services (HHS) regulations pertaining to the CCP.
Interviewed officials from FEMA's headquarters and finance office, the Substance Abuse and Mental Health Services Administration's (SAMHSA) Center for Mental Health Services (CMHS), the New York State Office of Mental Health (NYS OMH), RFMH, the New York City Department of Health and Mental Hygiene (NYC DOHMH), and NYC DOEd.
Determined that the total expenditures data obtained from RFMH and Project Liberty's quarterly expenditure reports were sufficiently reliable for the purposes of this report by the following:
Obtaining and reviewing a copy of the independent auditor's report of RFMH's financial statements for fiscal years ending March 31, 2004 and 2003, and Report on Compliance and on Internal Control Over Financial Reporting Based on an Audit of Financial Statements Performed in Accordance with Government Auditing Standards as of March 31, 2004. We determined that RFMH received a clean opinion on its fiscal year 2004 and 2003 financial statements. In addition, the auditor concluded that its tests of RFMH's compliance with certain provisions of laws, regulations, contracts, and grants did not disclose any instances of noncompliance that are required to be reported under Government Auditing Standards.
Finally, the auditor's consideration of internal control over RFMH's financial reporting did not identify any matters involving the internal control over financial reporting and its operation that it considered to be material weaknesses.
Analyzing a database obtained from RFMH of the payments made on behalf of Project Liberty for the ISP and RSP from the first payment made on September 25, 2001, through September 30, 2004, including advances made to service providers.
Determining that the amount of the payments included in the database was consistent with the total reported expenditures in the ISP final report and the RSP quarterly expenditure reports that were prepared by Project Liberty and submitted to SAMHSA and FEMA.
Comparing the Project Liberty expenditures as reported by RFMH to drawdowns reported by SAMHSA on Project Liberty's RSP grant award.
Obtaining a written certification of data completeness from the Managing Director of RFMH that the expenditures reported in the database were complete and accurate for all payments made for, or on behalf of, Project Liberty for the ISP and the RSP through September 30, 2004.
Reviewing Single Audit Act reports for fiscal years 2003 and 2002 of New York City and surrounding counties.
To determine whether the federal government had an effective process in place to determine the amount of funds to provide to Project Liberty, we interviewed officials from FEMA headquarters, SAMHSA's CMHS, NYS OMH, and RFMH; reviewed various documents, including the state of New York's ISP and RSP grant applications, the ISP and RSP grant awards, and federal guidance for the CCP, including the Robert T. Stafford Disaster Relief and Emergency Assistance Act and FEMA and HHS regulations; and reviewed correspondence between officials from FEMA, SAMHSA's CMHS, NYS OMH, and RFMH.
To assess federal oversight over Project Liberty's expenditures, we obtained an understanding of CCP oversight roles and responsibilities by reviewing FEMA and HHS regulations, FEMA and SAMHSA's fiscal year 2004 interagency agreement, CCP fiscal guidelines, HHS's grants management manual, summary documents of the CCP's oversight structure prepared by SAMHSA, and GAO reports; reviewed available documentation of oversight performed for Project Liberty, including Project Liberty's financial reports and documentation of site visits conducted by FEMA and SAMHSA; analyzed Project Liberty's financial reports and compared them to initial budgets; designed our work to assess the effectiveness of federal oversight and therefore considered but did not assess the controls over Project Liberty payments implemented at the state and local levels; interviewed officials from FEMA headquarters and the FEMA regional office that serves New York, SAMHSA's CMHS, SAMHSA's Division of Grants Management, NYS OMH, and RFMH to identify policies and procedures for overseeing Project Liberty; and reviewed and used Standards for Internal Control in the Federal Government as criteria.
To identify the steps that have been taken by the federal government in partnership with the state of New York to assess Project Liberty, we reviewed documentation of assessments performed, including a draft of the National Center for Post-Traumatic Stress Disorder case study of Project Liberty, NYS OMH summaries of survey results, an article written by the Deputy Commissioner of NYS OMH on lessons learned about the mental health consequences of the September 11 terrorist attacks, articles published by the New York Academy of Medicine, and documentation from FEMA related to its internal review of the CCP; reviewed various documents related to Project Liberty, including the grant applications and the response to conditions of the grant award set out by FEMA and SAMHSA; reviewed GAO and FEMA Office of Inspector General reports to determine whether the CCP was evaluated; and interviewed officials from FEMA headquarters, SAMHSA's CMHS, and NYS OMH. We requested written comments on a draft of this report from the Secretary of Homeland Security. We received written comments from DHS. The DHS comments (reprinted in app. II) incorporate by reference a letter from SAMHSA to FEMA commenting on the draft (reprinted in app. III). We also provided excerpts of a draft for technical comment to NYS OMH, NYC DOHMH, and NYC DOEd. NYS OMH technical comments and the coordinated NYC DOHMH and NYC DOEd technical comments are incorporated as appropriate. We performed our work from July 2004 through March 2005 in accordance with generally accepted government auditing standards.
GAO contact: Linda Calbom, (202) 512-8341.
Staff acknowledgments: Robert Owens (Assistant Director), Donald Neff (Auditor-in-Charge), Lisa Crye, Edward Tanaka, and Brooke Whittaker made key contributions to this report.
To help alleviate the psychological distress caused by the September 11, 2001, attacks, the Federal Emergency Management Agency (FEMA) awarded the state of New York two grants totaling $154.9 million to provide crisis counseling and public education. Because of questions about whether the program, called Project Liberty, had spent all the funds it received from the federal government, GAO was asked to determine (1) the extent to which the program expended the funds awarded from the federal government, (2) whether the federal government had an effective process in place to determine the amount of funds to provide to the program, (3) whether the federal government had adequate financial oversight of the program, and (4) steps taken by the federal government and New York State to assess the program's effectiveness. For the period September 11, 2001, through September 30, 2004, Project Liberty reported that it had expended approximately $121 million, or about three-quarters of the $154.9 million in grants awarded by FEMA, leaving a remaining balance of $33.9 million. The majority of the remaining balance, approximately $32 million, related to unresolved issues involving the adequacy of supporting documentation for the New York City Department of Education's (NYC DOEd) expense claims. As of March 31, 2005, city and state officials told GAO they had accepted alternative forms of supporting evidence related to $5.2 million in NYC DOEd expenses; however, this alternative evidence provides only limited assurance of the propriety of the claimed amounts. It is unclear whether similar alternative sources of evidence will be accepted for the remaining $26.8 million in NYC DOEd expense claims. FEMA assisted state officials in developing estimated funding needs for Project Liberty immediately after the terrorist attacks. By necessity, these initial budgets were developed using estimates established during the initial stages of the disaster. However, FEMA never required Project Liberty to prepare adjusted budgets to reflect new information or subsequent changes to the program. As a result, FEMA did not have realistic budget information to assess how city and state officials were planning to spend Project Liberty grant funds. FEMA assigned primary responsibility for oversight and monitoring to the Substance Abuse and Mental Health Services Administration (SAMHSA) through an interagency agreement. Although SAMHSA had procedures in place to monitor Project Liberty's delivery of services, it performed only limited monitoring of financial information reported by Project Liberty about the cost of those services. For example, while SAMHSA received periodic financial reports from Project Liberty, it did not perform basic analyses of expenditures in order to obtain a specific understanding of how the grant funds were being used and, as noted above, did not have updated budget information to gauge how actual spending compared to budgets. As a result, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that funds were being used efficiently and effectively in addressing the needs of those affected by the September 11 attacks. Both the state of New York and the federal government have taken steps to assess how Project Liberty delivered services. These assessments were ongoing as of March 2005. FEMA plans to consider lessons learned from Project Liberty when conducting its own internal review of the crisis counseling program.
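The figures above reconcile arithmetically. The following minimal sketch (ours, purely for illustration, using the rounded amounts cited in this report) walks through the computation:

```python
# Reconciliation of Project Liberty grant funds, using the rounded
# totals cited in this report (amounts in millions of dollars).
awarded = 154.9             # ISP and RSP grants awarded by FEMA
expended = 121.0            # reported expenditures through September 30, 2004
unresolved_doed = 32.0      # NYC DOEd claims with unresolved documentation
accepted_alternative = 5.2  # claims supported by accepted alternative evidence

remaining = awarded - expended
print(f"Remaining balance: ${remaining:.1f} million")                 # 33.9
print(f"Unresolved share of award: {unresolved_doed / awarded:.0%}")  # over 20 percent
print(f"NYC DOEd claims still open: ${unresolved_doed - accepted_alternative:.1f} million")  # 26.8
```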
The United States extends unilateral tariff reductions to over 130 developing countries through one general trade preference program (GSP) and three regional programs—CBI, ATPA, and AGOA (see table 1). The preference programs are tools that the U.S. government uses to assist countries in the developing world. At the United Nations Conference on Trade and Development (UNCTAD) in 1964, developing countries asserted that one of the major impediments to their accelerated economic growth and development was their inability to compete with developed countries in the international trading system; the developing countries argued that preferential tariffs would allow them to increase exports and foreign exchange earnings necessary to diversify their economies and reduce dependence on foreign aid. The rationale for trade preferences was that poorer countries need to develop industrial capacity for manufacturing in order to move away from dependence on imports and production of traditional commodities that could be subject to declining prices in the long term. It was argued that poorer countries also needed to retain some protection for a time to develop their "infant industries," but that increases in exports would be necessary to help countries capture economies of scale in production and earn foreign exchange. In addition, it was evident that some provision for the elimination of preferences once the industries were firmly established was necessary. The argument was that trade preferences should be temporary, introduced for a period of no less than 10 years with respect to any given industry in any developing country. At the end of the 10-year period, preferences would be withdrawn unless it could be shown that special circumstances warranted their continuation. At the second UNCTAD conference in New Delhi in 1968, the United States joined other participants in supporting a resolution to establish a mutually acceptable system of preferences. In order to permit the implementation of the generalized preferences, in June 1971 the developed countries, including the United States, were granted a 10-year waiver from their obligations under the global trading system, now embodied in the World Trade Organization (WTO), to trade on a most-favored-nation (MFN) basis. Following the granting of this waiver, developed countries created their GSP programs, and Congress enacted the U.S. GSP program in January 1975. The United States maintained that GSP was a temporary program to advance trade liberalization in the developing world, but it recognized the need to address the legal basis for granting these preferences in anticipation of the expiration of the waiver in 1981. An agreement was reached at the 1979 conclusion of the Tokyo Round of Multilateral Trade Negotiations, known as the "Enabling Clause," which has no expiration date and replaces the waiver. Because the Enabling Clause applies to preference regimes that are "generalized, non-reciprocal, and non-discriminatory," separate waivers have been sought for U.S. regional preference programs. The GSP program seeks to accelerate economic growth and development in developing countries by providing access to the U.S. market. GSP establishes a basic level of product coverage common to all the preference programs. Over the years, Congress has also enacted a series of regional trade preference programs that have evolved to address U.S. foreign policy objectives beyond the shared general objective of promoting economic development.
The regional programs expand on GSP to cover additional products that are not covered by GSP, including some apparel, footwear, and certain leather-related products. While regional programs may generally have more liberal conditions for product entry than GSP, these differences are more likely to affect products for which countries cannot receive GSP benefits (e.g., textiles and apparel). CBI was created to promote economic and political stability in the Central America and Caribbean region, to diversify exports, and to expand trade between those countries and the United States. ATPA was established to combat drug production and trafficking by providing sustainable economic alternatives to beneficiary countries in the Andean region of South America. AGOA was set up to facilitate Sub-Saharan Africa's integration into the global economy. The regional preference programs have some eligibility criteria that overlap with GSP, but the regional programs also set forth additional eligibility criteria that are not part of the GSP statute. In order to be eligible for AGOA, a country must also be eligible for GSP. In addition, all preference programs contain certain common eligibility requirements, such as having national policies to ensure workers' rights and protect intellectual property. Regional program beneficiary countries are subject to more extensive eligibility criteria than GSP beneficiary countries. For example, ATPA requires cooperation with U.S. counternarcotics and antiterrorism efforts, and AGOA requires that countries be making progress toward political pluralism and not commit gross violations of human rights. Eight agencies have key roles in administering U.S. trade preference programs. Led by USTR, they include the Departments of Agriculture, Commerce, Homeland Security, Labor, State, and Treasury, as well as ITC. USTR utilizes an interagency mechanism, the Trade Policy Staff Committee (TPSC), and its associated subcommittees to consult and coordinate with these and other agencies, such as USAID. This year, ATPA, CBTPA, and GSP expire, and Congress will need to explore the option of renewing these programs. At the same time, legislative proposals to provide additional, targeted benefits for the poorest countries are pending. In addition to examining the benefits trade preference programs provide, Congress will need to consider concerns raised by beneficiary and other developing countries, industry groups, and economic experts surrounding these programs. Such concerns include the potential for diversion of trade from other countries that these programs can cause; the complexity, scope of coverage, certainty, and conditionality of these programs; and the potential opposition to multilateral and bilateral import liberalization that preference programs can create. The overall effects of trade preference programs on the U.S. economy are small, but preference programs have direct effects on U.S. businesses, consumers, and the federal budget. Effects on U.S. industries and individual businesses vary; some have shared-production arrangements with preference beneficiaries, while a few U.S. industries compete with imports benefiting from preferences. U.S. consumers have benefited from lower prices resulting from duty-free imports under trade preference programs, while the U.S. Treasury has collected less revenue because tariffs on those imports are foregone. In addition, preference programs serve as a tool to advance U.S. foreign policy objectives. Imports under preference programs represent a small share of total U.S. imports.
As shown in table 2, U.S. preference imports across all programs accounted for about 5 percent of U.S. imports in 2006. In general, studies of the effects of preference programs on the U.S. economy find that the overall impact is small. For example, the ITC consistently finds in its biennial reports on ATPA and CBI that the impact of imports from these programs on the U.S. economy is minor. In the most recent ITC reports on ATPA and CBI, ITC reported again that the overall effect of imports from these programs on the U.S. economy continued to be negligible, representing only 0.09 percent and 0.10 percent, respectively, of the U.S. gross domestic product in 2005. Similarly, in January 2008, the Congressional Research Service concluded that the overall effects of GSP on the U.S. economy are relatively small and that the rate of increase of imports entering under GSP in the past 10 years is relatively flat, indicating that there may be little impact on the U.S. market as a whole by extending these preferences. Some U.S. industries and individual businesses have shared-production arrangements with foreign producers that depend heavily on duty-free preference benefits. Over the last two decades, U.S. producers of apparel have come to rely on "outward processing arrangements." In such arrangements, U.S. factories focus on relatively capital-intensive operations, such as fabric production. Fabrics and components are then shipped to CBI, ATPA, or AGOA countries, where factories conduct the relatively labor-intensive business of assembling the finished garments. In addition, U.S. manufacturers and importers benefit from the lower cost of consumer goods and raw materials imported under preference programs, such as jewelry, leather, and aluminum imported through GSP. Furthermore, U.S. manufacturers also rely on and benefit from intermediate goods from preference beneficiary countries. For example, Brazil is a major user of GSP. In 2006, 10 percent of all nonfuel imports to the United States from all preference programs came from Brazil. Much of what Brazil ships to the United States under GSP are intermediate goods produced by U.S.-affiliated multinational companies. Once exported to the United States, these goods are further processed or incorporated into U.S.-manufactured goods such as cars and power generators. Given the importance of these intermediate goods to domestic manufacturers, the Congressional Research Service reported that an expiration or modification of GSP would directly affect them, at least in the short term. Smaller U.S. businesses that regularly import inputs under a preference program may be especially affected by a lapse or expiration of the program because they rely on GSP's duty savings to compete with much larger companies, and they are less able to adjust to increased costs. A wide range of U.S. companies submitted official comments to USTR on several countries during an overall review of GSP in 2006. For example, concerning GSP imports from Thailand, U.S. companies' comments were overwhelmingly positive and supported continued preferential treatment for imports that included items such as jewelry, bottle-grade polyethylene terephthalate resin, motor vehicle tires, microwave ovens, ophthalmic lenses, televisions, cookware, golf equipment, and tuna. On the other hand, certain other U.S. industries compete with imports benefiting from preferences. For example, ITC estimates that U.S.
methanol producers may have experienced displacement of between 5.2 percent and 10.1 percent of production, valued at $27.6 million to $54.2 million in 2006, because of methanol imports from CBI countries. ITC also found that U.S. asparagus, fresh cut roses, chrysanthemums, carnations, and anthuriums may have experienced displacement of more than 5 percent of the value of production in 2005 because of imports that receive ATPA preferences. However, product coverage of the preference programs is dynamic, based on statutory provisions. Based on thresholds added by the legislation passed by Congress in December 2006 when it extended the GSP program, the President removed GSP duty-free treatment for methanol from Venezuela. U.S. consumers benefit to the extent that tariff savings result in lower prices on final products, as well as from the lower costs of intermediate goods. U.S. importers of goods who import duty-free components, parts, or materials under GSP maintain that the preference results in lower costs for these intermediate goods that, in turn, can be passed on to consumers. In a May 1, 2006, letter to the House Ways and Means and Senate Finance committees, a coalition of importers and retailers stated that if GSP were allowed to expire or its benefits were reduced, it “would impose a costly hardship on not only beneficiary countries but their American customers as well.” As part of biennial reviews of CBI and ATPA, ITC assessed the effects of these programs on the U.S. economy, industries, and consumers. Following are illustrative (not comprehensive) single-year examples extracted from the most recent ITC reports on CBI and ATPA, highlighting products where U.S. consumers benefited: ITC found that, in 2006, knitted cotton T-shirts provided the largest gain in consumer surplus ($63.7 million to $68.5 million) resulting exclusively from CBI tariff preferences. The price U.S. consumers would have paid for imports of such T-shirts from CBI countries would have been 12 percent higher without CBI. Men’s and boys’ woven cotton trousers or shorts provided the second-largest gain in consumer surplus ($56.7 million to $62.3 million). Without CBI, the import price of such woven cotton trousers or shorts from CBI countries would have been 15 percent higher. ITC found that, in 2005, men’s or boys’ knitted shirts provided the largest gain in consumer surplus ($30 million to $34 million) from lower prices and higher consumption resulting exclusively from ATPA tariff preferences. In December 2006, the Congressional Budget Office (CBO) issued cost estimates associated with the extension of GSP, ATPA, and AGOA and the enactment of HOPE under the Tax Relief and Health Care Act of 2006, including the loss of tariff revenues that would otherwise accrue to the U.S. Treasury. In the multiyear review, CBO came to the following conclusions: Changes to the GSP program will result in an estimated reduction in revenues of $297 million in 2007 and of $992 million over the 2007 to 2009 period. This estimated reduction of revenue is due to the extension of GSP to December 31, 2008, and the new provisions concerning competitive need limit waivers. In addition, CBO estimated in its “Budget Outlook” for fiscal years 2007 to 2016 that revenue losses would amount to about $3.1 billion if GSP were extended to 2011. The extension of ATPA to June 30, 2007, was estimated to result in a decrease in revenues of $25 million in 2007. 
The most recent ATPA extension to December 31, 2008, will result in $119 million in reduced revenues in 2008 and 2009, according to a February 2008 CBO cost estimate. AGOA will result in an estimated reduction in revenues of about $2 million in 2007, $127 million over the 2007 to 2011 period, and $180 million over the 2007 to 2016 period. The enactment of HOPE will result in an estimated reduction of $4 million in 2007, and $28 million over the 2007 to 2011 period. Without econometric analysis, it may be difficult to determine whether, absent preferences, the same volume of goods would still be exported to the United States. If exports would cease or decline without the preferences, less tariff revenue would actually be foregone. Preference programs have been used to advance U.S. foreign policy goals in areas such as intellectual property protection, labor, and human rights, as well as on broader market-oriented and democratic governance reforms. Some supporters of GSP and other nonreciprocal preferences believe that the country practice criteria that developing countries must meet if they are to qualify for GSP provide the United States with political leverage that can be used to support U.S. foreign and commercial interests. Periodic and petition-initiated reviews under the programs provide the United States the opportunity to engage with governments and motivate policy change. As we noted in our previous report, these reviews serve to encourage beneficiary countries to comply with country eligibility criteria, such as the extent to which the country is providing adequate and effective protection of intellectual property rights (IPR), taking steps to afford internationally recognized worker rights, and implementing its commitments to eliminate the worst forms of child labor. For example, GSP has annual reviews of country and product eligibility, based on petitions (requests) filed with USTR concerning GSP beneficiary countries and products by U.S. industry groups, governments, and nongovernmental organizations (NGOs) such as labor unions. According to USTR, the United States works with a beneficiary country during a country practice review before removing it from eligibility. Our review of agency records and meetings with officials and interest groups indicate that the leverage associated with preferences creates an opportunity to secure improvements in IPR and labor protections. Regional trade preference programs also serve important foreign policy interests. For example, ATPA complements counternarcotics efforts by providing opportunities for legal crops to be exported to the U.S. market, thus encouraging farmers to shift away from coca and opium poppy production. Similar to GSP, ATPA also has an annual review of country eligibility practices, based on petitions filed against beneficiary countries by the public; this review has not resulted in the withdrawal or suspension of benefits from any ATPA country. In assessing the effects of trade preferences on beneficiary country development, economists note that preferences are just one element of a complex economic development process and that isolating their direct impact is difficult. However, there is fairly wide agreement among economists that expanding trade promotes growth and development. If trade preferences lead to increased exports, and export earnings are used to expand industrialization and promote a more diverse economy, then preferences can contribute to the economic development of beneficiary countries.
To shed light on the question of whether U.S. trade preference programs are helping countries develop, we look at the fundamental link between the programs and the trade activity of beneficiary countries, focusing on three key elements: (1) the extent and nature of the new opportunities provided under U.S. preference programs, (2) whether countries are fully using the available opportunities, and (3) whether U.S. imports from beneficiaries have grown and diversified. We also report countries' perspectives on the benefits they derive from U.S. preferences, based on fieldwork. Overall, we find that U.S. trade preference programs have contributed to increased and more diverse trade for many developing country partners. To assess the opportunities provided to beneficiary countries by U.S. preference programs, we examined the scope of programs' coverage by beneficiary and product, the size of tariff cuts (or margins of preference), and some eligibility conditions that can affect the ability of beneficiaries to access program opportunities. Overall, we found that the opportunities for beneficiaries to export under preferences have expanded, but still have gaps (see detailed data and further discussion in app. III). As detailed in appendix III, product coverage, as measured by tariff lines eligible for duty-free treatment, is extensive for most U.S. preference programs, products, and partners. In 1996, the number of duty-free tariff lines offered under GSP was expanded to provide additional benefits to beneficiary least-developed countries (LDC). Enactment of the regional programs continued this expansion. But, as figure 1 shows, notable gaps remain in tariff lines available for duty-free import under preference programs, particularly in agricultural and textile and apparel products. Moreover, in examining coverage by beneficiary countries' trade with the United States in 2006, using the ratio of eligible to dutiable imports for each partner, we find wide variation in coverage even within programs. Our analysis finds that (1) countries eligible for only GSP or GSP-LDC have the least coverage of partners' dutiable imports—approximately 25 percent; (2) regional programs and GSP-LDC have much higher coverage of partners' dutiable imports; and (3) country variations in coverage are wide. For example, 35 GSP or GSP-LDC beneficiaries, including Lebanon, Paraguay, Somalia, and Zimbabwe, have high coverage rates, exceeding 75 percent of the value of their dutiable imports. Yet 54 GSP or GSP-LDC beneficiaries, such as Bangladesh, Egypt, Pakistan, and Uzbekistan, have low coverage rates (less than 25 percent of dutiable imports). (These coverage ratios, along with the utilization rates examined below, are illustrated in the sketch that follows.) The expansion of U.S. program coverage since 1996 appears to have increased the benefit of U.S. preferences by adding some key products under GSP-LDC and the regional programs that otherwise would face relatively high U.S. tariffs. A recent effort to quantify margins of preference (the difference between the preference rate and the otherwise-applicable tariff rate) across all U.S. preference programs, including GSP, by staff economists at ITC and the World Bank finds that preference margins are relatively high for apparel products, as well as certain agricultural goods (melons, cut flowers, frozen orange juice, raw cane sugar, and asparagus); they tend to be relatively low for other products and fairly uniform among programs. Conditions on product entry are also a significant factor affecting opportunities and trade under U.S. preference programs.
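The coverage ratio above, and the utilization rate discussed in the next section, are simple trade-flow shares. A minimal sketch of the two computations, with hypothetical country data (the function names and figures are ours, for illustration only):

```python
# Two ratios used in assessing preference programs, with hypothetical data.
# "Dutiable" imports exclude goods already duty-free for all countries on
# an MFN basis.
#
# coverage rate    = preference-eligible imports / dutiable imports
# utilization rate = imports actually entering under a preference /
#                    preference-eligible imports

def coverage_rate(eligible: float, dutiable: float) -> float:
    return eligible / dutiable

def utilization_rate(claimed: float, eligible: float) -> float:
    return claimed / eligible

# Hypothetical beneficiary (values in $ millions): $400 dutiable imports,
# $88 eligible for duty-free entry, $66 actually entered under the program.
dutiable, eligible, claimed = 400.0, 88.0, 66.0
print(f"coverage:    {coverage_rate(eligible, dutiable):.0%}")    # 22% -> low coverage (<25 percent)
print(f"utilization: {utilization_rate(claimed, eligible):.0%}")  # 75% -> fairly high utilization
```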
While the data on coverage and margins of preference suggest a degree of success in improving the benefits of U.S. preference programs, in general, recent assessments of the literature express some skepticism as to whether trade preferences, and GSP in particular, have had more than a very modest impact on the export performance, and hence the development, of eligible countries. In discussing factors that underlie the performance of preference programs, researchers Ozden and Reinhardt, for example, not only indicate that GSP often fails to cover products in which beneficiary countries have the greatest comparative advantage, such as agricultural products, but also cite administrative features of the programs—notably, export ceilings and rules of origin—as key constraints on benefits. Nevertheless, conformity with such requirements can be vital to ensuring that benefits flow to the intended country—that is, the designated beneficiary country or countries, rather than countries that are ineligible for preferences. Two specific conditions—"competitive need limitations" and "rules of origin"—illustrate how administrative implementation of statutory provisions, although addressing important policy considerations, may affect the ability of beneficiary countries to fully access the opportunities otherwise offered by U.S. preference programs. GSP places export ceilings, or competitive need limitations (CNL), on eligible products from GSP beneficiaries that exceed specified value and import market share thresholds (LDCs and AGOA beneficiaries are exempt). Rules of origin for U.S. trade preference programs typically specify a minimum percentage of value added to the entering product that must come from the beneficiary country. However, more complex rules apply to some products, notably textiles and apparel. Our fieldwork revealed examples where complex rules-of-origin requirements appear to be complicating preference trade, for example, in Haiti and in Ghana. On the other hand, liberalizing quotas and rules of origin have been the principal means by which the regional programs have been liberalized or made more likely to permit imports in recent years, particularly for apparel products. The effectiveness of trade preference programs in expanding trade also depends on beneficiaries' actual use of the preference opportunities offered. The utilization rate indicates the extent to which beneficiaries are taking advantage of the opportunities offered. Our analysis shown in appendix III finds that U.S. preference programs have fairly high utilization rates, but utilization varies by program and beneficiary. Although utilization of the regional preference programs is higher than utilization of GSP, to some extent this lower utilization of GSP reflects the fact that countries with access to both GSP and a regional program often opt to use the regional program. Our analysis of utilization across programs by beneficiary country finds substantial variation. For example, under AGOA, a number of countries, such as Nigeria, Angola, Chad, and Gabon, have high utilization rates, but 12 of the 38 AGOA-eligible countries did not export under the program. The improved opportunities for market access provided by U.S. preference programs appear to have contributed to the rapid growth in U.S. imports from developing countries in recent years. The total dollar value of U.S. imports from both developed and developing countries has steadily grown since 1992, but developing countries have witnessed much faster growth since 2000.
The developing countries' share of total U.S. imports has increased, while the developed (high-income) countries' share has declined. The overall gains by developing countries are mostly attributable to middle-income developing countries. The share of low-income countries and LDCs remains small. Turning to preference imports specifically, we also find that preference programs have generally contributed to the increasing shares of developing countries in U.S. imports, particularly imports from low-income developing countries. However, imports under U.S. preference programs accounted for only about 5 percent of total U.S. imports in 2006. Total U.S. preference imports grew from $20 billion in 1992 to $92 billion in 2006. Most of this growth in U.S. imports from preference countries has taken place since 2000, when preference imports grew faster than overall U.S. imports. Whereas total U.S. preference imports grew at an annual rate of 0.5 percent from 1992 to 1996, the growth quickened to an annual rate of 8 percent from 1996 to 2000 and 19 percent since 2000, which also suggests an expansionary effect of program changes that increased product coverage and liberalized rules of origin for LDCs under GSP in 1996 and for African countries under AGOA in 2000. While U.S. preference imports remain concentrated in a few countries, overall the poorer countries' share of preference imports has risen recently. As can be seen from figure 2, the top 5 suppliers under preference programs in 2006 accounted for 58 percent of preference imports, and the top 10 suppliers accounted for 77 percent of preference imports. Among the top 10 suppliers, two countries—Nigeria and India—are low-income, and six countries—Angola, Ecuador, Colombia, Thailand, Peru, and the Dominican Republic—are lower middle-income countries. The top 25 preference beneficiaries accounted for over 95 percent of U.S. preference imports. Nevertheless, as figure 3 shows, the poorest countries have been more successful in increasing their shares in total U.S. imports under preferences than they have been in increasing their share of overall U.S. imports. The year 2000 marks the beginning of gains in preference imports for low-income countries and declines in the share of middle-income developing countries. By 2006, imports from low-income countries had risen to 38 percent of U.S. preference imports. Within the middle-income grouping, the share of upper middle-income countries has generally declined since 1992, while that of lower middle-income countries rose, then moderated; in 1996, lower middle-income countries' share surpassed that of the upper middle-income countries. The share of U.S. preference imports from the least-developed countries was 17 percent in 2006, versus nearly zero until 1996—the year of major revisions in GSP. While our analysis shows that the LDCs' share of U.S. preference imports has risen, the extent of their trade and reliance on preferences (as measured by the share of preference imports in total imports) varies considerably. Three LDCs—all oil exporters—rank among the leading suppliers of total imports into the United States under preference programs (Angola, Chad, and Equatorial Guinea), as shown in table 3. Other LDC exporters to the United States, such as Lesotho, Madagascar, and Haiti, are also extensive users of preference programs and have the opportunity to export apparel under AGOA or an expanded CBI.
In contrast, several of the top 10 LDC exporters, such as Bangladesh, Cambodia, Liberia, Niger, Nepal, and Guinea, do not have the opportunity to export textiles and apparel under GSP and do not rely on preferences to support their exports to the United States. Overall, 34 of the 46 eligible LDCs barely used preference programs for their exports to the United States. The growth in imports from developing countries is accompanied by significant changes in the product mix of U.S. imports from preference-eligible countries. Notably, the rapid rise in fuel imports since 1996 is the defining feature of U.S. imports under preference programs. Fuels were less than 1 percent of U.S. imports from preference countries in 1996 but, in 2006, accounted for nearly 60 percent of U.S. preference imports from preference-eligible countries. Figure 4 also highlights the importance of apparel in the growth of U.S. preference imports up to 2005. After the phase-out of global quotas on textiles and apparel in 2005 and the entry into force of the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR) for several CBI nations during 2006, however, these imports under preference programs declined somewhat. In 2006, fuels comprised 94 percent of all imports under AGOA and nearly 70 percent of ATPA/ATPDEA imports, but only 27 percent each of GSP and CBI/CBTPA imports. Apparel imports represent about 6 percent of total preference imports but account for over 30 percent of U.S. imports under CBI, 10 percent of ATPA imports, and just 3 percent of AGOA imports (see app. V). Figure 5 further breaks down trends in nonfuel, nonapparel imports under preference programs. Notably, after 1993—when the North American Free Trade Agreement (NAFTA) was implemented, Mexico lost GSP eligibility, and the United States put into effect global agreements to eliminate tariffs in certain sectors such as electronics and information technology—imports under preferences of machinery and electronics, initially the largest product category, declined, but they increased somewhat after 2000. Four product areas show increases. The year 2000 changes in U.S. preference programs (the implementation of AGOA and CBTPA and enhancements in ATPA) appear to have contributed to growing imports of agriculture; textiles, leather, and footwear; glassware, precious metals and stones, and jewelry; and chemicals, plastic, wood, and paper. An important goal of trade preferences concerns helping developing countries diversify the range of products that they produce and export. Our analysis shows that total U.S. imports from all preference-eligible countries remain quite concentrated when countries are grouped by their preference program eligibility. However, when viewed over time, imports from preference-eligible countries appear to have become somewhat more diversified since 1992. Our analysis of diversification of total U.S. imports from preference-eligible countries is shown in figure 6. Using a widely used measure of trade and commodity concentration, we constructed an index that takes a value of 0 when products are extremely concentrated and a value of 1 when products are most diversified. Consequently, a high value of this index indicates a relatively diversified import/export product mix. In figure 6, the relative level of diversification among the programs is indicated by the height of the line, and the change in the level of diversification over time is shown by the trend in the line from 1992 to 2006. (An illustrative computation of such an index appears in the sketch below.)
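The report describes the endpoints of this index but not its exact formula. A widely used concentration measure of this kind is the Herfindahl-Hirschmann index of product shares; the sketch below is ours, with the normalization an assumption for illustration:

```python
# Hypothetical sketch of a 0-to-1 diversification index built from the
# Herfindahl-Hirschmann index (HHI) of product shares. The exact formula
# behind figure 6 is not spelled out in the report, so the normalization
# here is an assumption for illustration.

def diversification_index(import_values: list[float]) -> float:
    total = sum(import_values)
    shares = [v / total for v in import_values]
    hhi = sum(s * s for s in shares)   # ranges from 1/n (even spread) to 1 (single product)
    n = len(import_values)
    if n == 1:
        return 0.0                     # a single product is maximally concentrated
    return (1 - hhi) / (1 - 1 / n)     # 0 = extremely concentrated, 1 = most diversified

# A fuel-dominated import profile versus an evenly spread one ($ millions):
print(round(diversification_index([940, 30, 20, 10]), 2))     # about 0.15: highly concentrated
print(round(diversification_index([250, 250, 250, 250]), 2))  # 1.0: fully diversified
```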
Looking first at the diversification level of each program, we see that U.S. imports from those countries that qualify for GSP only, and from those that ship goods to the United States under CBI, have the most diverse profile. Conversely, imports from countries eligible for AGOA, GSP-LDC, and ATPA show a relatively less diverse profile. This finding is broadly consistent with the concentration of imports under these preference programs in fuels and apparel products. Second, looking at the trend in the diversification index over time, we find that all country groups, except CBI, which was already the most diversified, show a modest increase in diversification over time. The highest rate of increase in diversification (as measured by the rate of increase of the lines in fig. 6) is noticeable for imports from countries eligible only for GSP. AGOA countries, which are the least diversified, have shown relatively little change over time. It is also important to note that measuring diversification at such a high level of aggregation still allows for significant diversification within each broad product group. A key factor that can determine the impact of trade preference programs on economic development is the ability of developing countries to take advantage of global trading opportunities. The existence of a preferential tariff is of little use in countries without the ability to produce goods desired by importers, at competitive prices. This ability to produce and trade competitively on world markets, which is termed "trade capacity," is generally related to having the appropriate economic conditions and institutions that help to attract investment and enhance efficiency. Yet many developing countries' lack of trade capacity prevents them from taking full advantage of opportunities to export goods and services. The lack of trade capacity is due to inadequate economic, legal, and governmental infrastructure. Poor networks of roads, small and outdated ports, inadequate supplies of energy and other utilities, rigid financial institutions, inefficient or corrupt customs bureaus, and poorly educated citizens are some of the many obstacles that can make production and exporting difficult and more costly. For example, in Haiti, an apparel manufacturer located in a government-owned industrial park told us it did not have reliable public sources of electricity or water and therefore had to pay for backup electricity generators and trucked-in water to operate its factories. In addition, entrepreneurs in developing countries may have little access to information about markets and export standards or to affordable financing that would enable them to set up a successful export business. Even countries that have developed industries to produce items with strong global markets, with or without the assistance of preferences, may need to improve their trade capacity. For example, mineral commodities such as oil, or agricultural products such as sugar and soybeans, are an important source of export income for many developing countries. However, developing a greater diversity of export industries requires new skills, technologies, and investment. While the impact of trade preferences on the development of beneficiary countries remains a subject of debate among economists and other analysts, our fieldwork in several beneficiary countries indicates the diverse range of countries being served, and most countries emphasized their view that U.S.
trade preferences are important to their trade and development objectives. The countries include several whose efforts to use U.S. preferences are at nascent stages and several that have achieved notable success. We chose to visit Haiti and Ghana because they are among the poorest beneficiaries and ones where mechanisms to take advantage of recently expanded benefits under newer preference programs—HOPE and AGOA—are being put in place. Overall, the people we met in Haiti and Ghana expected that their countries will increase their use of the preferential access to the U.S. market, but they urged continued U.S. commitment and patience. Following are illustrative observations from our fieldwork in these countries: In-country officials and business representatives in Haiti see preferences as a much-needed engine for creating jobs in the short term, attracting investment in the medium term, and fostering growth over the longer term. Haitian officials recognize that Haiti must confront the daunting challenges of repairing its damaged infrastructure and international image and improving security in order to be able to effectively take advantage of the opportunities offered by the HOPE program. Haiti's base of entrepreneurs with experience in the apparel industry and its geographic proximity to the United States are assets that may help the country use the new access provided by HOPE and thereby convince Congress to reenact it in 2011. Ghanaian authorities have put in place policy reforms and are pursuing trade promotion initiatives to encourage the private sector to take advantage of export opportunities provided under AGOA. Authorities noted that hosting the annual AGOA forum among government, private sector, and civil society participants increased the program's visibility in the country. However, many of the Ghanaian business people we met were still in the initial stages of exporting to the United States. Additionally, Ghana National Chamber of Commerce officials told us many potential beneficiaries of the program, particularly agricultural producers, are still unfamiliar with the full range of opportunities available under AGOA and see the program as being primarily targeted to the textile and apparel industry. Like Haiti, Ghana lacks such essential capacity as a reliable energy supply and cost-competitive transportation. Yet both governments were mobilizing and were receiving considerable on-site and other resources from various U.S. government and multilateral agencies to develop customs and port facilities and to navigate U.S. rules and requirements. We picked Brazil and Turkey to visit because these countries have successfully used U.S. trade preferences to export a diverse range of relatively sophisticated manufactured goods. The two countries were also of interest because both Brazil and Turkey rely on their own government and government-affiliated business associations to promote awareness of GSP, with limited assistance from U.S. agencies such as USAID. Both expressed a continued need for preferences, even though their overall economies are growing and they are among the leading developing country users of U.S. preferences. Following are illustrative observations from our fieldwork in these countries: The government and private sector officials we met in Brazil emphasized that GSP benefits both nations.
Information provided to GAO shows that more than 90 percent of the value of what Brazil ships to the United States under GSP consists of raw materials and intermediate or capital goods, some produced by U.S.-affiliated multinationals. Upon arrival in the United States, these intermediate goods are destined for further processing or incorporation into U.S.-manufactured goods such as cars and power generators. Officials at Brazil's commerce and development agencies have stepped up efforts to promote awareness and use of GSP, seeing it as a valuable tool for helping the country's poorest regions and boosting participation by smaller businesses in export markets. An analysis by Brazil's Commerce Ministry shows that Brazil has had more success in exporting manufactured goods under GSP and that more than 80 percent of the products Brazil exports to the United States under GSP would otherwise face relatively low tariffs (MFN tariffs set at or below 5 percent). Yet the loss of such privileges through competitive need limitation (CNL) decisions has caused actual or likely business contraction and layoffs at two companies on GAO's schedule of visits (in the automotive parts and copper wire industries). The people we met said such preferences are particularly important now as they face intense competition from China, which has displaced them in traditional industries such as leather footwear (which is excluded by statute from the GSP program). Ironically, China's rise has also coincided with a run-up in demand and prices for Brazil's commodities, boosting the country's total exports but disadvantaging its manufactured goods because the Brazilian currency has appreciated. Turkey also has been buffeted by rising commodity prices in sectors such as jewelry. It has been successful in exporting a diverse range of manufactures to the United States under GSP, ranging from stone slabs to steel, and says continuing to do so is vital to its competitiveness. As in Brazil, the Turkish business representatives we met with said that profit margins are so thin in the highly competitive U.S. market they serve that even small preference margins make the difference between being able to sell and being forced to exit entirely. Indeed, Turkey wishes to widen the list of eligible products (e.g., to include hazelnuts) and expressed concerns over losing GSP access for products such as jewelry and marble that officials indicate have exceeded, or are likely to exceed, CNLs. They attributed exceeding CNLs in part to rising commodity prices, levels of aggregation in the U.S. tariff schedule that are too high for certain products, and the related issue of importer use of broader versus more specific categories to enter goods to avoid complications in customs classification and clearance. Colombia and Kazakhstan were selected for their high use of preferences, as well as their involvement in ongoing liberalization: Colombia, through a free trade agreement with the United States, and Kazakhstan, as a result of its efforts to join the WTO. Following are illustrative observations from our fieldwork in these countries: Colombia dominates the ATPA program, and exports to the United States accounted for 20 percent of Colombia's overall exports in 2006. Relying on ATPA for more than half (54 percent) of its exports to the United States that year, Colombia has attained success in steadily increasing its exports in all but 2 years since the program's inception in 1991, particularly since the program was expanded in 2002.
Yet the range of products Colombia exports under preferences is considerably narrower than that supplied by Brazil or Turkey. To diversify away from coca production and spur participation in international trade, Colombia has pursued improved security, political stabilization, and economic diversification in the years since Plan Colombia was implemented in 2000 and the Andean Trade Preference Act was expanded in 2002. The Department of State and USTR credit Colombia's efforts and these programs, as well as strong internal and external demand, with revitalizing Colombia's economy. Colombian business sector spokesmen and government officials with whom we met generally underscored the important role trade preferences have played in allowing certain sectors, notably cut flowers, to compete in the U.S. market; however, they also noted that their country needs to move beyond trade preferences. In March 2007, Colombia's trade minister publicly stated that his country has effectively exhausted the utility of U.S. trade preferences and is eager to consummate a comprehensive free trade agreement with the United States. Such an agreement would not only assure continued preferential access to the U.S. market for Colombia's exports, on which it depends, but also provide additional access and involve reciprocal liberalization and rule-of-law changes in such areas as investment and IPR that may help the country attract additional investment and innovation. Kazakhstan's resource-driven economy is also booming, based largely on its vast oil, gas, and mineral reserves, which together make up about two-thirds of its economic output. Its exports to the United States reached $1 billion in 2006, of which half entered under GSP preferences. The country's development goals include managing its mineral wealth, integrating into the world economy, and diversifying its exports. Despite its goal of becoming a hub for East-West business, Kazakhstan faces many challenges associated with the legacy of the Soviet era, such as legal structures that make business formation and trade difficult and a business mentality of dependence on government subsidies. Geographically, Kazakhstan is challenged in trading with the United States, although opportunities for integrating regionally with the European Union are great. The major goal of Kazakhstan's trade policy at present is WTO accession. Awareness of and interest in the U.S. GSP program were rather limited. In fact, exports of several major products reached CNL limits, and the country did not seek a waiver for its producers. The major GSP export in 2006, copper cathodes, appears to have been a one-time event prompted by factors other than GSP preferences (the normal or MFN tariff rate on this product is just 1 percent). A major producer of the country's leading preference export told us he sells the commodity at world prices and does not depend on preferences or focus on the U.S. market, due to strong demand and transportation linkages elsewhere. Preference programs balance two key trade-offs. First, programs offer duty-free access to the U.S. market to increase beneficiary trade, to the extent that it does not harm U.S. industries. Product exclusions, country graduation, and product import limits are tools to make this trade-off, although their use has raised concerns that nonbeneficiary countries may gain U.S. market share from a beneficiary's loss of preferences.
Second, policymakers face a trade-off between longer or permanent program duration, which may encourage investment, and shorter renewal periods, which may provide leverage to achieve other policy goals. Finally, the preference programs balance these trade-offs against a backdrop of increasing global trade liberalization. Although multilateral trade liberalization is a primary U.S. trade objective and would be beneficial to most developing countries, liberalization dilutes the marginal value of the preferences to beneficiaries. This may affect their willingness to participate in reciprocal trade liberalization. However, economic studies suggest that the negative effects of preference erosion are outweighed by other factors, most notably the benefits for developing countries associated with open markets. A basic policy trade-off is the extent to which preference programs benefit businesses in beneficiary countries versus those in the United States. As described in appendix III, U.S. preference programs provide duty-free treatment for a little over half of the 10,500 U.S. tariff lines, in addition to those that are already duty-free on an MFN basis for all countries. But they also exclude many other products from duty-free status, including some that developing countries are capable of producing and exporting. The extent of product exclusions, therefore, may directly affect the ability of some developing countries to use and benefit from the preferences. Some product exclusions were established in preference legislation to protect sensitive U.S. industries from import competition. The GSP statute, for example, prohibits various "import-sensitive" categories of products from being designated as eligible. These include most textiles, apparel, watches, footwear, handbags, luggage, flat goods, work gloves, and leather apparel; import-sensitive electronics, steel, and glass products; and "any other articles which the President determines to be import-sensitive in the context of the Generalized System of Preferences." In addition, agricultural products subject to a tariff-rate quota are not eligible under GSP for duty-free treatment if such imports exceed the in-quota quantity. The regional preference programs exclude some of these products as well. U.S. tariffs on a number of these excluded products tend to be high. The GSP statute provides some discretion for the administration to determine which items within some of these product categories are not import-sensitive. Specifically, for electronic, steel, and manufactured and semimanufactured glass products, USTR and ITC officials told us that the President may determine which of these items are eligible for GSP benefits, based on advice from the ITC about import sensitivity. The administration has at times self-initiated such a determination for individual products, but the officials told us it has reexamined eligibility for large numbers of products only within the context of extending new benefits to subsets of countries, namely for LDCs in 1996 and for AGOA suppliers in 2000. More often, it makes determinations for individual products based on petitions filed by interested parties. There is no discretion for administrative product additions for the other product categories specifically excluded by law from GSP eligibility.
However, the statutory language for each of these other product categories is based on business conditions as of specific dates—January 1, 1994, for textiles and apparel; June 30, 1989, for watches; and January 1, 1995, for footwear, handbags, luggage, flat goods, work gloves, and leather apparel. We note that U.S. industries have changed in the intervening years, and these statutory provisions may not be up to date. For example, in comments to USTR on the GSP program in 2006, the Footwear Distributors and Retailers of America stated that imports now account for 99 percent of U.S. footwear sales and urged that the footwear exclusion be removed from the GSP legislation. According to USTR officials, the initial GSP statute provided that the President could not designate as eligible those "textile and apparel articles which are subject to textile agreements." Certain handcrafted wall hangings, clothing, and other hand-loomed articles were not covered by the Multi-Fiber Arrangement. In the late 1970s, the agencies administering GSP sought to provide commercial opportunities for handicraft producers of nonimport-sensitive items in interested beneficiary countries. Based on an interagency review, the President determined in 1981 that U.S. imports of certain wall hangings, pillow covers, and carpets and textile floor coverings that had been certified as handmade by the beneficiary country could enter under GSP. USTR officials told us that since that time 15 GSP beneficiaries have entered into such certified textile handicraft agreements; however, by 2007, all but two of the items originally covered by the presidential determination had become MFN duty-free. As noted above, no textile and apparel items can be added to GSP eligibility if they were not on the GSP-eligible tariff list as of January 1, 1994. Studies indicate that even when GSP product exclusions have been liberalized within the context of GSP for LDCs or the regional programs, remaining limits on product eligibility can affect the ability of beneficiary countries to use and benefit from U.S. preference programs. One recent study examined the expansion of tariff lines under AGOA. In agriculture, the study noted, AGOA appears to have liberalized nearly all products, although a substantial portion of agricultural tariff lines are still subject to tariff-rate quotas and, as a result, are not, in effect, fully liberalized. Products not fully liberalized include certain meat products, a large number of dairy products, many sugar products, chocolate, a range of prepared food products, certain tobacco products, and groundnuts (peanuts), the latter being of particular importance to some African countries. The study noted that, in manufacturing, AGOA liberalized additional tariff lines, but the increase is most notable for those countries granted apparel benefits. According to the study, key products that remain excluded are textile products, certain glass products, and certain headwear. A related trade-off involves deciding which developing countries can enjoy additional preferential benefits for products excluded for most preference recipients. One controversy concerns a few LDCs in Asia that are not included in the U.S. regional preference programs, although they are eligible for GSP-LDC benefits. Two of these countries—Bangladesh and Cambodia—have become major producers and exporters of apparel to the United States and have complained about the lack of duty-free access to this country for their goods.
For example, Cambodian trade and industry officials argue that it is not fair that many LDCs enjoy preferential access to the U.S. apparel market through the regional preference programs, while Cambodia does not. In comments filed with USTR on possible U.S. proposals at WTO to provide duty-free, quota-free access to least-developed countries, some African and other beneficiary countries, as well as certain U.S. industries, have opposed the idea. African private sector spokesmen have raised concerns that giving preferential access to Bangladesh and Cambodia for apparel might endanger the nascent African apparel export industry that has grown up under AGOA, while other non-LDC developing countries have expressed similar concerns about their own industries. U.S. textile manufacturers have also protested that the possible expansion of apparel benefits to these countries would threaten their textile sales to Latin American clothing producers under the regional preference programs and free trade agreements. However, numerous U.S. importing industries, such as retail groups, are strongly in favor of these proposals. Over the 30-year life of the GSP program, questions about which countries should benefit and how more benefits could be directed to poorer countries have been raised repeatedly. The concerns relate to the original intention that preference programs would confer temporary trade advantages on developing countries, which would eventually become unnecessary as the countries became more competitive. The GSP program has mechanisms to limit duty-free benefits by "graduating" countries that are no longer considered to need preferential treatment, based on income and competitiveness criteria. The U.S. government has used two approaches to graduation: outright removal of a country from GSP eligibility, and the more gradual approach of ending duty-free access for individual products from a country. Once a country's economy reaches a "high income" level, as indicated by World Bank measures of gross national income per capita, the statute governing GSP requires that the country be graduated from this program. Fifteen countries have been graduated since 1995 on that basis, including, most recently, Antigua and Barbuda, Bahrain, and Barbados in January 2006. Since 1995, nine other countries at high and upper-middle income levels were removed from GSP eligibility because they joined the European Union—most recently, Bulgaria and Romania in December 2006. Program regulations also allow the United States to remove a country from GSP after a review has found it to be "sufficiently developed or competitive." Four countries or customs territories were graduated on this basis in 1989—Singapore, South Korea, Taiwan, and Hong Kong. Under the regional programs, there are no mechanisms to graduate countries that have reached a more advanced level of development. However, in the last 2 years, five Central American/Caribbean countries were removed from GSP and CBI/CBTPA when they entered free trade agreements with the United States. More commonly, the United States uses import ceilings—CNLs—to end GSP duty-free status for individual products from individual countries if imports reach a certain level. The rationale given by USTR for these limits is that they indicate a country has become a "sufficiently competitive" exporter of the product and that ending preferential benefits in such a case may allow other GSP-eligible countries to expand their access to the U.S. market.
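To make the CNL mechanism concrete, the following is a minimal sketch of how such a screen might be applied to one country's annual imports of a single article. The 50 percent share test reflects the statutory trigger; the 2006 dollar ceiling shown follows the statutory formula of $75 million for 1996 plus $5 million per calendar year, and both the figure and the helper function are our own illustrative assumptions rather than an official implementation.

```python
# Minimal sketch of a GSP competitive need limitation (CNL) screen.
# The 50-percent share test reflects the statute; the 2006 dollar
# ceiling below is an assumption based on the statutory formula of
# $75 million for 1996, rising $5 million per calendar year.

DOLLAR_CEILING_2006 = 125_000_000  # assumed 2006 value of the statutory ceiling
SHARE_CEILING = 0.50               # 50 percent of total U.S. imports of the article

def exceeds_cnl(country_imports: float, total_us_imports: float) -> bool:
    """Return True if one country's imports of one article trip either CNL trigger."""
    if country_imports > DOLLAR_CEILING_2006:
        return True
    if total_us_imports > 0 and country_imports / total_us_imports >= SHARE_CEILING:
        return True
    return False

# Hypothetical figures for illustration only.
print(exceeds_cnl(130e6, 400e6))  # True: dollar ceiling exceeded
print(exceeds_cnl(60e6, 100e6))   # True: 60 percent share exceeds the 50 percent test
print(exceeds_cnl(40e6, 400e6))   # False: neither trigger tripped
```

In practice, such a screen is only a starting point; waivers, de minimis exceptions, and redesignations can alter the outcome, as the waiver discussion below indicates.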
The value of trade from GSP beneficiaries that is ineligible for duty-free entry because of the CNL ceiling is substantial. We identified $13 billion in imports in 2006 that could not enter duty-free under GSP due to CNL exclusions—over one-third of the trade from GSP beneficiaries potentially subject to the CNL ceiling. Although the intent of country and product graduation is to redistribute preference benefits more widely among beneficiary countries, some U.S. and country officials with whom we met observed that GSP beneficiary countries will not necessarily benefit from another country's loss of preference benefits. The benefits cannot be "transferred" directly from one country to another; rather, preferences are a marginal advantage that can make a country's product competitive only if other factors make it nearly competitive. In fact, the loss of a tariff preference to a given country may give an advantage to a country that is not a beneficiary of U.S. trade preference programs. In the countries we visited, we repeatedly heard concerns that China, or sometimes other countries, would be most likely to gain U.S. imports as a result of a beneficiary's loss of preferences. As part of an overall review of the GSP program in 2005 and 2006, USTR officials reviewed trade and development indicators for large users of the GSP program to determine whether they could be considered sufficiently competitive in terms of trade in eligible products and, therefore, should no longer be designated as GSP beneficiaries. USTR officials said there are inherent tensions between the program's statutory economic development and export competitiveness goals. They noted that some of the beneficiaries USTR reviewed were very competitive in certain industries but nevertheless had large numbers of poor people. Agency officials told us that it was important to conduct the overall review in a manner consistent with U.S. WTO obligations under the GATT's Enabling Clause, which enables developed WTO members to give differential and more favorable treatment to developing countries. Efforts to target benefits to the poorest countries have resulted in the removal of preferences from products important to some U.S. businesses. In 2007, the President revoked eight CNL waivers as a result of legislation passed in December 2006. Consequently, over $3.7 billion of trade in 2006 from six GSP beneficiaries lost duty-free treatment. Members of the business community and members of Congress raised concerns that the revocation of these waivers would harm U.S. business interests while failing to provide more opportunities to poorer beneficiaries. A bill regarding sanctions on Burmese gems, which passed the House of Representatives in December 2007, included a GSP provision that would have reinstated the CNL waivers for gold jewelry from Thailand and India and would have required the President to review the other revoked waivers. The bill also would have provided for the President to reinstate the other waivers unless ITC determined that the loss of a waiver would neither reduce the current level of U.S. imports of the article from the beneficiary nor benefit countries that are not part of GSP. Policymakers also face a trade-off in setting the duration of preferential benefits in authorizing legislation. Preference beneficiaries and U.S. businesses that import from them agree that longer and more predictable renewal periods for program benefits are desirable. However, some U.S.
officials believe that periodic program expirations can be useful as leverage to encourage countries to act in accordance with U.S. interests. Private sector and foreign government representatives have complained that short program renewal periods discourage longer-term productive investments that might be made to take advantage of preferences, such as factories or agribusiness ventures. They would like to see preference programs become permanent or have a longer duration. The private sector Coalition for GSP (Coalition) cites the frequent lapses in GSP between 1993 and 2001, with authorization periods ranging from 10 to 27 months (and gaps between expiration and legislative renewal of 1 to 15 months), as hindering long-term investment in beneficiary countries. Both USTR and the Coalition have attributed the relatively greater growth in GSP use after 2002 to the stability provided by a 5-year program reauthorization at that time. Business people say that predictable program rules and a longer program renewal period are important to making business plans and investment decisions in developing countries with confidence when those decisions are based on preference benefits. For example, officials in the Colombian flower industry told us that ATPA's short time frame and frequent renewals made it difficult to attract investment needed to enable them to compete with other international cut-flower producers. They said investors need certainty about preference benefits for at least 10 years to amortize and project return on investment. Members of Congress have recognized this argument with respect to Africa and, in December 2006, Congress renewed AGOA's third-country fabric provisions until 2012; AGOA's general provisions had previously been renewed until 2015. On the other hand, short-term program renewals give Congress more opportunities to respond to changing events and political priorities. Threatening to let benefits lapse can be used as a way to pressure countries to act on an issue. While acknowledging the need for U.S. vigilance in pursuit of its commercial interests, officials at USTR and Labor told us short-term program renewal can have other adverse consequences, such as creating uncertainty for investors and importers interested in using the program. From their perspective, the discretion the administration exercises over continuation of program benefits offers sufficient leverage to achieve policy goals, based on the country's desire to maintain benefits and the possibility of removing benefits administratively through reviews of country conformity with eligibility requirements. Nevertheless, a recent instance involving ATPA has provided U.S. officials an opportunity to engage with beneficiary countries in the context of program expiration. ATPA was extended for 6 months in December 2006, again for 8 months in June 2007, and for 10 more months on February 29, 2008. These short renewal periods reflected interest in hastening congressional consideration of the free trade agreements with Peru and Colombia and concern about policies adopted by Bolivia and Ecuador that have negatively affected foreign investors. After the most recent ATPA extension, the administration said the extension would provide time to implement the Peru free trade agreement and for Congress to pass the Colombia free trade agreement. The administration also said it expected to see significant progress with respect to Bolivia and Ecuador's treatment of foreign investors.
Global and bilateral trade liberalization is a primary U.S. trade policy objective, based on the premise that increased trade flows will support economic growth for the United States and other countries. However, international movement toward lowering tariffs and other trade barriers has an unavoidable effect on the marginal value of trade preferences to beneficiaries. Because of this, beneficiary countries' desire to keep their preferential advantages may generate some internal resistance to multilateral liberalization. As some countries make unilateral decisions to liberalize their national trade policies, and as others enter into bilateral and regional trade agreements that result in lower tariffs among trading partners, countries that rely on preferential margins find the advantages they gain from preferences fading away. The erosion of the value of trade preferences poses yet another trade-off. All of the preference programs include provisions to encourage countries to move into reciprocal and liberalized trading relationships. Indeed, a number of former preference program beneficiaries have gone on to conclude free trade agreements with the United States, and some have joined the ranks of newly industrialized nations. However, members of Congress and some administration officials have raised concerns that some preference beneficiaries are placing their interests in trade preference programs above the broader interest in multilateral liberalization, which the United States has traditionally advocated. They note that, in an effort to maintain their preference benefits, some beneficiary countries have created roadblocks at WTO in the Doha Round of negotiations. This was confirmed by U.S. agency officials we interviewed. The assurance of continued preferential access to the U.S. market has, at times, created a disincentive to negotiation of reciprocal free trade agreements. For example, officials at Commerce and Labor told us that the extension of AGOA preferences during the negotiations toward a free trade agreement with members of the Southern African Customs Union may have contributed to the suspension of those negotiations, since those countries had already been granted broad access to the U.S. market. In the past, spokesmen for countries that benefit from trade preferences have told us that any agreement reached under the Doha framework must, at a minimum, provide a significant transition period to allow beneficiary countries to adjust to the loss of preferences. Additionally, they questioned whether it is even fair to expect certain countries, such as small-island states, to survive without some trade preference arrangements under any deal that may be reached through WTO negotiations. As we have noted in previous reports, economic studies predict that global trade liberalization, such as might be achieved in a new WTO agreement from the Doha negotiations, would generally benefit most developing countries. Moreover, with regard to preference erosion and its impact on developing countries, some research has suggested that the negative effects of preference erosion may be outweighed by other factors—in particular, the benefits generated by more open trade on the part of developing countries.
For example, one recent study estimates that while a small number of countries, particularly those that currently receive very large benefits under existing preference schemes, could experience a loss of market access, most countries would benefit from the expanded market access due to reduced tariffs under the Doha Round. Another recent study of the impact of preference erosion on development in the CBI countries notes that preference erosion occurred steadily over the 15 years of the study (1984 to 1998). While preference erosion was shown to have a small negative impact on investment and growth in some countries in the CBI region over the period studied, this effect may have been outweighed by the positive effects of increased utilization of preferences. In addition, the author finds the countries' own trade reforms (openness) may have had a larger impact on development than the trade preferences did. Trade preference programs have proliferated over time, but Congress has not considered U.S. trade preferences as a whole. In response to statutory requirements, agencies pursue different approaches to monitoring compliance with the various criteria set for programs, resulting in a lack of systematic review. There are other differences in key aspects of the preference programs, such as the use of trade capacity building in conjunction with opportunities provided under trade preference programs, which is currently most prominent in AGOA. Finally, distinct approaches to reporting and examining the programs limit the United States' ability to determine the extent to which U.S. trade preferences foster development in beneficiary countries. Over the years, Congress has set up a number of trade preference programs to meet the overall goal of development, as well as specific regional objectives. As a result, U.S. trade preferences have evolved into an increasingly complex array of programs, with many countries participating in more than one of these programs (see fig. 7). Congress generally considers these programs separately, partly because the programs have disparate termination dates, and Congress has focused on issues pertaining to individual programs when they have come up for renewal. Proposals from the administration and members of Congress suggest further additions to the preference programs are possible. Of the 137 countries and territories eligible for preference programs, as of January 1, 2007, 78 benefit from more than one (see fig. 8). The reason that many countries benefit from more than one program is that the regional preference programs have been added, as noted above, to further various U.S. foreign policy objectives. The regional programs in effect expand the preferences offered by GSP, but they result in overlap, with various combinations of program eligibility for certain countries. Thus, of the 48 countries to which the President may grant AGOA eligibility, 39 are eligible for AGOA, while 47 are eligible for GSP. The African country of Equatorial Guinea, for example, is ineligible for AGOA, but eligible for GSP, and it exported approximately $1.6 billion in fuel products to the United States under that program in 2006. The 59 countries eligible for only the basic GSP program, such as Argentina or Egypt, are neither LDCs nor part of a regional preference scheme. For ATPA and CBI beneficiary countries, importers may enter products eligible for both the regional program and GSP under either one.
Those importing goods from the Andean or Caribbean areas tend to use ATPA or CBI instead of GSP, due to the more liberal rules of origin and expanded product coverage for these programs. To a certain extent, this has mitigated the uncertainty associated with GSP program lapses. While there is overlap in various aspects of trade preference programs, each program is currently considered separately by Congress based on its distinct timetable and expiration date. Typically, when Congress has considered these programs for renewal, the focus has been on particular issues relevant to specific programs, such as counternarcotics cooperation efforts in the case of ATPA, or phasing out benefits for advanced developing countries in the case of GSP. The oversight difficulties associated with this array of preference programs and distinct timetables are compounded by different statutory review and reporting requirements for agencies. As explained in detail in the next section, in practice, these entail distinct administrative structures and approaches that leave gaps in assessment and use of tools known to be necessary to help developing countries participate in trade. Congressional deliberations have not provided for cross-programmatic consideration or oversight. However, key congressional leaders appear to want to use this year's coincidence of expiration dates for ATPA, CBI, and GSP to look more systematically at preference programs and how they can be updated and improved. Two different approaches—a petition process and periodic reviews—have evolved to monitor compliance with criteria set for various trade preference programs. USTR officials explained that the mechanisms for monitoring compliance with the criteria under specific programs reflect the relevant statutory requirements for each. We observed advantages associated with each approach, but individual program reviews appear disconnected and result in gaps. The petition process under GSP and ATPA offers certain benefits over the periodic reviews of all beneficiary countries that take place under AGOA and CBI. These regional programs' periodic reviews, on the other hand, provide an opportunity to engage beneficiary countries on areas of concern in a more consistent manner. Table 4 illustrates key administrative aspects of the trade preference programs, including the type of reviews followed to determine compliance. GSP reviews of product and country practice petitions have the advantage of adapting the programs to changing market conditions and the concerns of businesses, foreign governments, and others. Most petitions originate outside the government, and agency officials as well as NGO and private sector representatives cited the value of the petition process in bringing forward concerns related to intellectual property rights and workers' rights. The process also brings to bear the knowledge of NGOs and others about problems in these areas and helps the government pursue credible cases. Private sector and labor representatives also said that they appreciated the petition process because it compels a formal decision from the government on the merits of a complaint and draws public attention to an issue. The process allows U.S. petitioners to seek and obtain resolution of trade-related concerns. For example, from 2001 through 2006, USTR conducted an investigation on copyright piracy and enforcement in Brazil in response to a petition filed under GSP by a coalition of seven trade associations concerned about IPR violations in that country.
The investigation resulted in an agreement between the U.S. and Brazilian governments, hailed by the petitioner, to increase antipiracy raids in well-known marketplaces, establish antipiracy task forces at the state and local level in Brazil, and enhance deterrence through criminal prosecutions, among other actions. However, a petition-driven process also can result in a long time passing between reviews of country compliance with the criteria for participation. From 2001 to 2006, when the number of GSP beneficiaries ranged from 146 to 132, USTR considered petitions against 32 countries. While some of these nations are reviewed under the regional preference programs, approximately three-quarters of the countries eligible only for GSP were not examined at all for their conformity with eligibility criteria from 2001 through 2006. Long periods of time passed between overall reviews of GSP as well. As mentioned earlier, USTR initiated an overall review of the GSP program in October 2005. USTR completed the last general review of the program more than 18 years earlier, in January 1987. A U.S. official told us that some of the countries reviewed frequently are not necessarily those that perform the worst relative to the criteria for participation, but rather those countries of most concern to particular groups, such as businesses or NGOs. In this sense, U.S. government resources may be unduly invested in performing repeated reviews of a country that is of particular concern to a given interest group, while other countries with potential problems receive substantially less scrutiny. A second weakness is that the petition-driven review fails to systematically incorporate other U.S. efforts in areas such as IPR protection and efforts to counter trafficking in persons. The centerpiece of U.S. policy efforts to strengthen IPR protection is the annual Special 301 process. USTR cites the GSP process as a key part of its mission to promote IPR protection overseas. Moreover, GAO reviewed the 2006 Special 301 report and found that over half of the 48 countries cited by USTR for concerns with respect to the provision of adequate and effective protection of IPR in 2006 were U.S. preference program beneficiaries. However, USTR did not accept any new petitions to review beneficiaries against the IPR criteria for participation in 2006. USTR officials observed to us that the placement of a country on the Watch List or Priority Watch List did not constitute a U.S. government finding that the country failed to provide adequate and effective IPR protection. Rather, placement of a country on these lists indicates that particular problems exist in the country with respect to IPR protection, enforcement, or market access for persons relying on intellectual property. Additionally, industry officials told us that the administration has been reluctant to threaten removal of countries from GSP for lack of compliance with IPR protection in recent years, calling into question whether the leverage provided by the trade preferences is put to effective use. While it is possible that the administration may choose not to remove countries as a result of Special 301 designations, the lack of review, under the GSP provisions, of any of the 26 countries cited makes it appear that no linkage exists between these issues. U.S.
efforts to combat trafficking in persons are another area where criteria for participation in trade preference programs may have some bearing, although USTR officials noted that there is not a specific link between the preference program criteria and the Trafficking Victims Protection Act of 2000. Both State and the Department of Justice cite Labor's Findings on the Worst Forms of Child Labor as among the U.S. government's efforts to combat trafficking in persons. State issues an annual report that analyzes and ranks foreign governments' compliance with minimum standards to eliminate trafficking in persons. State also prepares an annual report that discusses the status of internationally recognized worker rights within each GSP beneficiary. Twenty-seven of the 48 countries on the Tier 2 Watch List or in Tier 3 in the June 2007 Trafficking in Persons report are preference beneficiaries. In congressional hearings, members and a witness have cited concerns that countries in Tiers 2 and 3 receive trade benefits. Preference beneficiaries on the Tier 2 watch list include Argentina, Armenia, South Africa, Ukraine, and India, and beneficiaries on the Tier 3 list include Algeria, Equatorial Guinea, Uzbekistan, and Venezuela. At times, concerns in some of these countries may have been addressed through the regional programs. For example, the country reports contained in the 2007 Comprehensive Report on U.S. Trade and Investment Policy Toward Sub-Saharan Africa and Implementation of the African Growth and Opportunity Act cite concerns in beneficiary countries with respect to child labor and trafficking in persons, showing consideration of these issues in the eligibility determinations. For other countries, such as Venezuela, Algeria, and Uzbekistan, the U.S. government has not received any petitions in the last 5 years to initiate an examination of performance against any of the GSP eligibility criteria related to trafficking in persons. Consequently, these countries have not been reviewed against those criteria for participation. As noted above, it is possible that the administration might choose not to remove countries as a result of such reviews, but given the lack of official reviews, it appears that no linkage between these issues exists. The periodic reviews under the regional programs offer more timely and consistent evaluations of country performance against the criteria for participation. Among the regional programs, AGOA has the most intensive evaluation of country performance against the criteria for participation. AGOA requires the President to determine annually whether Sub-Saharan African countries are, or remain, eligible for the program. GAO found that, between 2001 and 2007, the President terminated eligibility four times and conferred eligibility eight times. Between 2001 and 2006, one country was removed and reinstated for GSP, and another country was reinstated after being removed in 1990. No country lost eligibility under the ATPA or CBI programs. The key difference between the AGOA review and the CBI and ATPA reviews is that only AGOA requires a periodic determination as to whether a country should remain a beneficiary. A USTR official testified that AGOA's annual review process has resulted in improved country performance under the eligibility criteria.
In July 2007, a senior USTR official testified before the Subcommittee on Africa and Global Health of the House Foreign Affairs Committee that the President had removed, or threatened to remove, AGOA beneficiaries that did not meet the criteria for participation. This official noted that some of these countries had taken action to meet the criteria, and countries such as Liberia and Mauritania, which had been ineligible, were now eligible. However, U.S. officials also commented that the AGOA review is extremely time-consuming and demands a considerable investment of staff resources, since each beneficiary country must be reviewed on its performance on a range of criteria, such as respect for the rule of law and poverty reduction efforts. Moreover, these reviews must be updated on an annual basis. Despite more regular and comprehensive reviews, 11 countries that are in regional programs were later the subject of GSP complaints in the 2001 to 2006 period. In several cases, the petition-based examination associated with the GSP process validated concerns and resulted in further progress in resolving them with regional partners such as Guatemala, Swaziland, and Uganda on labor issues. For example, in 2005, the American Federation of Labor and Congress of Industrial Organizations filed a petition regarding Uganda's performance against workers' rights criteria under GSP and AGOA. The petition led to an interagency investigation that was closed after Uganda enacted new legislation facilitating organization of unions, among other things. A Labor official told us that these issues had not been remedied under the AGOA review. Many developing countries have expressed concern about their inability to take advantage of global trading opportunities because they lack the capacity to participate in international trade. The United States considers the ability of these countries to participate in and benefit from the global trading system a key factor in promoting economic development, and it has provided trade capacity building (TCB) assistance to help developing countries more effectively take advantage of trade preferences, among other purposes. However, we found agencies pursue different approaches with regard to using TCB in conjunction with trade preference programs, with AGOA having the strongest link. AGOA requires the administration to produce an annual report on the U.S. trade and investment policy for Sub-Saharan Africa and the implementation of AGOA. The report includes information about trade capacity building efforts undertaken in the region by U.S. agencies such as the Department of Agriculture and USAID. Sub-Saharan Africa has also been the primary focus of U.S. TCB efforts linked to the preference programs, with the United States allocating $394 million in fiscal year 2006 to that continent. A USTR official noted that linkage to TCB in AGOA's authorizing legislation was useful for USTR as leverage with U.S. agencies that have development assistance funding to target greater resources that help developing countries take advantage of opportunities provided by trade preferences. In our field work and research, we observed USAID efforts to improve the business and regulatory environments in Sub-Saharan Africa, including preparing private sector enterprises to navigate U.S. import regulations, coaching small businesses on accessing financial services for trade and investment, and facilitating investments in trade-related infrastructure. Several U.S.
officials said that the annual AGOA Forum (Forum) also contributed to the stronger linkage between TCB and trade opportunities offered under the program. A USTR official told us that the Forum brings USAID, Millennium Challenge Corporation, and other U.S. officials together to focus on the program and that having agency leaders attend the Forum makes a big difference in generating business interest in the region. A USAID official told us that the Forum also provided the opportunity for African entrepreneurs to interact directly with senior members of the U.S. government. Although AGOA authorizing legislation refers to trade capacity building assistance, USTR officials noted that Congress has not appropriated funds specifically for that purpose. In other regions of the world, U.S. trade capacity building assistance has less linkage to trade preferences. For example, none of the other trade preference programs direct the relevant agencies to convene regularly to discuss how the program's implementation affects trade opportunities. Some agencies refer to trade programs in developing their assistance efforts to non-African regions and countries. For example, USAID notes in its strategic plan the need for more resources to improve the business environment and enable local businesses to take advantage of HOPE. Further, other U.S. trade initiatives, such as CAFTA-DR, link market access opportunities with trade capacity building assistance. Separate reporting for the various preference programs, while consistent with statutory requirements, makes it difficult to measure progress toward achieving the fundamental and shared goal of trade preferences, namely economic development of beneficiaries. The effect of trade preferences on beneficiary countries' economic development is not assessed in a cross-programmatic manner that would examine progress made under preference programs. U.S. agencies do prepare reports that attempt to measure the effects on economic development of certain trade preference programs, but not all. The law requires direct reporting on beneficiary impact for only one program. In addition, even when agencies report on the economic effect of some of these programs, different approaches are used, resulting in disparate analyses that are not readily comparable. As noted earlier in this report, trade preferences are fundamentally intended to promote development in beneficiary countries by providing enhanced opportunities for their products to access the U.S. market. In its 2006 to 2011 strategic plan, USTR notes that one of its objectives is to apply "U.S. trade preference programs in a manner that contributes to economic development in beneficiary countries." However, there is no formal cross-programmatic examination of the preference programs collectively. As shown in table 4, USTR pursues different approaches to administering these programs and does not consider the programs jointly with respect to their performance. Moreover, there is no evaluation of how trade preferences, as a whole, affect economic development in beneficiary countries. In response to statutory requirements, several government agencies report on certain economic aspects of the regional trade preference programs and their effects on specific countries or groups of countries, but these agencies do not report on the economic development impact of GSP.
Agency officials noted that they strive to comply with statutory reporting requirements and, through the TPSC, they coordinate with each other on various aspects of administering these programs, including reporting. This reporting, nevertheless, is done on a program-by-program basis. For example, USTR has produced three reports to Congress on the operation of ATPA. The ITC also issues biennial reports on ATPA’s impact on U.S. industries and consumers and on drug crop eradication and crop substitution. Additionally, USTR prepares a biennial report for Congress on CBI that highlights increases in overall U.S. imports from the countries in the program. Similarly, ITC reports on the CBI program’s impact on beneficiaries on a biennial basis, the only report required by statute to address the impact on the beneficiaries. Finally, USTR produces an annual report on the implementation of AGOA that highlights trade and investment trends in Sub-Saharan Africa. However, there is no comparable periodic reporting on the effect of GSP on the economic development of countries covered by that program. USTR officials told us that the vehicles they use for reporting on the GSP program are the annual Report of the President of the United States on the Trade Agreements Program and the annual Trade Policy Agenda. Discussion of the GSP program in these documents focuses on product coverage and country conformity with eligibility criteria, not on the impact of benefits to beneficiary developing countries in terms of trade growth or economic development. Different approaches used to measure the effects of trade preference programs on beneficiary countries, while consistent with statutory reporting requirements, produce disparate data and analysis that are not readily comparable to evaluate how these programs advance economic development—their fundamental goal. For instance, USTR’s report on the ATPA provides some examples that illustrate the role of the program in promoting exports and development in each of the four beneficiary countries and refers to analyses by the ITC and Labor on some aspects of the economic impact of ATPA. On the other hand, ITC reporting on the ATPA provides some material on exports and economic diversification for countries under the program. USTR’s reporting on CBI highlights overall and country-specific increases in U.S. imports from countries in the CBI program. The report includes discussions on individual countries, which generally do not evaluate the impact of CBI on the exports or development of the beneficiaries. The ITC reports on the impact of CBI examine how that program affects those countries that have relatively large trade flows with the United States. The trade profile for the region presented in this report has shifted over time, with certain countries receiving more emphasis in earlier reports while later iterations focus on others. USTR’s comprehensive report on trade and investment in Sub-Saharan Africa and the implementation of AGOA provides an overview of trade and investment trends in participating Sub-Saharan countries, reviews economic integration efforts at the regional and subregional level, and discusses participation by AGOA countries in the WTO. Finally, while there is no regular reporting on the economic impact of GSP on beneficiary countries, in 1980, the administration prepared a statutorily required report to Congress on the first 5 years of operation of the GSP program. 
That report included an analysis of the impact of the GSP on developing country economies. This appears to have been a one-time report, and USTR officials confirmed that no further such reports were prepared. Thus, while there is an abundance of reporting on various aspects of the economic effects of trade preference programs on beneficiary countries, the analyses and data presented in these reports are typically quite dissimilar and do not lend themselves to use in evaluating the overall effects of trade preferences. Congress created these programs over the years to address compelling trade and foreign policy objectives. The programs are important to individual businesses and industries, both domestically and internationally. Additionally, the criteria for participation associated with the programs have served as an important tool to advance U.S. foreign and trade policy objectives. The preference programs have evolved over time to accommodate not only the general goal of trade-led development, but also regional interests, such as counternarcotics efforts in ATPA. Changes to the preference programs in the past have had an impact on beneficiaries' trade profiles with the United States, by stimulating export growth to this country. Much of the increase in exports coincided with congressional expansions of the programs in 2000 and 2002 to cover key products. However, U.S. trade preferences are neither administered nor evaluated on a cross-programmatic basis. A lack of systematic evaluation limits any judgment about the extent to which the collection of U.S. trade preference programs has increased trade and fostered development in beneficiary countries. While evaluations may occur to determine whether countries should retain eligibility for preferences, such inquiries have not been made regularly or in a consistent manner across the programs or beneficiary countries. Two different approaches have evolved to monitor compliance with criteria set for various trade preference programs, and we observed advantages associated with each approach, but individual program reviews result in gaps and appear disconnected from other ongoing U.S. government efforts, such as the Special 301 process. Further, the petition-driven process can result in a long time passing between reviews of country compliance with the criteria for participation. There are also certain practices, such as stronger links between preference benefits and trade capacity building efforts in the AGOA program, that may be advantageous if applied to some of the other programs. A distinct reporting approach for each program limits the United States' ability to determine the extent to which U.S. trade preference programs as a whole foster development in beneficiary countries. However, the programs' positive impact on developing economies may be attenuated because the United States does not extend preferential access to products that are important exports of beneficiary countries and because the United States imposes complex entry requirements for some products. As Congress deliberates on whether to renew the ATPA, CBTPA, and GSP programs this calendar year, it should consider whether a more integrated approach would better ensure programs meet shared goals.
Specifically, Congress should consider which elements of the approaches used by agencies to administer these programs, such as petition-initiated compliance reviews or periodic assessment of all countries under certain programs, have benefits that may be applied more broadly to trade preference programs in general. Congress should also consider streamlining various program reporting requirements to facilitate evaluating the programs' progress in meeting their shared economic development goal. To ensure that these programs, as a whole, meet their shared goals, we recommend USTR undertake the following two actions: work through the TPSC and its associated agencies to consider ways to administer, evaluate, and report on preference programs in a more integrated manner, and periodically convene the TPSC to discuss the programs jointly to determine what lessons can be learned from the various provisions concerning matters such as linkages to trade capacity building. Additionally, to ensure that beneficiary countries are in compliance with program criteria, we recommend that USTR also periodically review preference beneficiaries that have not otherwise been reviewed by virtue of their membership in the regional programs. We provided a draft of this report to USTR; the Departments of Agriculture, Commerce, Homeland Security, Labor, State, and Treasury; USAID; and ITC. USTR and the Departments of Agriculture, Labor, Commerce, Treasury, and State provided extensive technical comments on an interagency basis. The Departments of Homeland Security, Labor, and State, and ITC also provided separate technical comments. We have incorporated these comments where appropriate. USTR indicated that it would report on the actions taken in response to the recommendations in a letter, within 60 days of public issuance of this report, as required under U.S. law. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the U.S. Trade Representative; the Secretaries of Agriculture, Commerce, Homeland Security, Labor, State, and the Treasury; the Administrator of USAID; and the Chairman of ITC. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. In this report, we (1) describe how U.S. preference programs affect the United States, (2) review the effects of the programs on exports and development of foreign beneficiaries, (3) identify trade-offs facing the programs, and (4) evaluate the overall approach to preference programs. We followed the same overall methodology to complete objectives 1, 3, and 4. We reviewed and analyzed U.S. laws and regulations, authoritative international trade reports and documents describing the impact of trade preference programs on the United States, such as the biennial impact studies from the U.S. International Trade Commission (ITC) on the Caribbean Basin Initiative (CBI) and the Andean Trade Preference Act (ATPA), and periodicals.
We interviewed officials from agencies participating in the Trade Policy Staff Committee—including the Office of the U.S. Trade Representative (USTR); the Departments of Agriculture, Commerce, Labor, State, and the Treasury; U.S. Customs and Border Protection; and ITC—regarding the impact of preferences on the U.S. economy. We also interviewed representatives of businesses that used the preference programs and nongovernmental organizations (NGOs) that have filed petitions under the programs. We reviewed academic, World Trade Organization, and other research studies on the effects of preference erosion on developing countries. In addition, we analyzed the 2006 U.S. government reports on the Special 301 process, the August 2007 report on the Worst Forms of Child Labor, and the 2007 State Department report on Trafficking in Persons. For information on key features and use of U.S. preference programs, we drew on findings from a previous GAO report on U.S. preference programs, International Trade: An Overview of Use of U.S. Trade Preference Programs by Beneficiaries and U.S. Administrative Reviews (GAO-07-1209). To review the effects of U.S. preference programs on exports and development of foreign beneficiaries, we reviewed relevant academic, government, and other literature. Particularly useful were recent broad reviews of the trade preferences literature found in (1) Bernard Hoekman and Caglar Ozden, Trade Preferences and Differential Treatment of Developing Countries (Cheltenham, UK, and Northampton, MA: Edward Elgar Publishing, 2006) and (2) Caglar Ozden and Eric Reinhardt, "Unilateral Preference Programs: The Evidence," chapter 6, in Simon J. Evenett and Bernard Hoekman, eds., Economic Development and Multilateral Trade Cooperation (Washington, D.C.: The World Bank and Palgrave Macmillan, 2006). We also conducted extensive analysis of the U.S. tariff schedule and U.S. trade data published by the ITC. Our analysis focuses on 2006 data except where we engaged in analysis of historical trends. We relied on the 2006 edition of the official U.S. tariff schedule from the ITC to identify products (tariff lines) eligible for duty-free treatment under one or more U.S. trade preference programs, as well as the countries designated as eligible for each program. We also used ITC data to analyze Census data trends in overall U.S. imports, imports from preference beneficiaries, and imports actually entered under U.S. trade preference programs and to compute measures such as program coverage and utilization and the diversification of U.S. preference imports. More detailed information about our data analysis is contained in appendix II. Furthermore, we interviewed officials from the Office of the U.S. Trade Representative; the Departments of Agriculture, Commerce, Labor, State, and the Treasury; U.S. Customs and Border Protection; and ITC regarding the effects of preferences on foreign beneficiaries. In addition, we attended the sixth AGOA Forum in Accra, Ghana, in July 2007. We also traveled to Brazil, Colombia, Haiti, Kazakhstan, and Turkey to meet with U.S. embassy officials, foreign officials, and industry groups using U.S. preference programs to discuss the issues mentioned above. We selected these countries based on their representation across preference program eligibility and income levels according to the World Bank and United Nations (see table 5).
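As an illustration of the coverage and utilization measures mentioned above, the sketch below computes them from simplified import records. The record fields and ratio definitions are our own simplifications for illustration; appendix II describes the data and definitions we actually used.

```python
# A simplified sketch of program coverage and utilization measures.
# ImportLine and the ratio definitions are illustrative assumptions:
# coverage is the share of import value on program-eligible tariff
# lines, and utilization is the share of eligible value actually
# entered under the program.

from dataclasses import dataclass

@dataclass
class ImportLine:
    value: float    # customs value of imports on this tariff line
    eligible: bool  # line is covered by the preference program
    claimed: bool   # imports actually entered under the program

def coverage_and_utilization(lines):
    total = sum(l.value for l in lines)
    eligible = sum(l.value for l in lines if l.eligible)
    claimed = sum(l.value for l in lines if l.claimed)
    coverage = eligible / total if total else 0.0
    utilization = claimed / eligible if eligible else 0.0
    return coverage, utilization

# Hypothetical records: 60 percent of trade is covered, and two-thirds
# of the covered value is actually claimed under the program.
records = [ImportLine(600.0, True, True),
           ImportLine(300.0, True, False),
           ImportLine(600.0, False, False)]
print(coverage_and_utilization(records))  # (0.6, 0.666...)
```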
Additionally, we chose to visit Haiti and Ghana because they are among the poorest beneficiaries and ones where mechanisms to take advantage of recently expanded benefits under newer preference programs—Haitian Hemispheric Opportunity through Partnership Encouragement (HOPE) and the African Growth and Opportunity Act (AGOA)—are being put in place. Also, Ghana was the site of the annual AGOA Forum. We selected Brazil and Turkey to visit because these countries have successfully used U.S. trade preferences to export a diverse range of relatively sophisticated manufactured goods. We chose Colombia and Kazakhstan because they are large users of preference programs and are both undertaking broader liberalization efforts. Colombia has completed a free trade agreement with the United States, and Kazakhstan is trying to join the WTO. In addition, we selected these countries to gain perspective on the spectrum of issues related to the usage and capacity of each of the programs in each country. Brazil has been a top user of the Generalized System of Preferences (GSP) program since the 1970s; Colombia is an extensive user of ATPA; Ghana represents the African countries under AGOA that are dealing with internal infrastructure issues that can limit their use of the preference programs; Haiti is a long-standing user of CBI and is in the beginning stages of implementing HOPE; Kazakhstan is an extensive user of GSP and is undergoing substantial liberalization; and Turkey is another heavy user of GSP and exports sophisticated manufactured goods to the United States. We conducted this performance audit from March 2007 to February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional information relevant to the data analysis contained in this report. It includes information about the data used; definitions of program, product, and country groupings; and definitions relevant to various program measures used. We relied on the 2006 edition of the official U.S. tariff schedule (Harmonized Tariff Schedule, or HTS) from the ITC to identify products (tariff lines) eligible for duty-free treatment under one or more U.S. trade preference programs, as well as the countries designated as eligible for each program (beneficiaries or beneficiary countries). We considered any country designated for benefits for all or part of 2006 to be a beneficiary. We relied on official U.S. trade statistics for imports to analyze trends in overall U.S. imports, imports from preference beneficiaries, and imports actually entered under U.S. trade preference programs. Data for time series are in constant 2006 U.S. dollars. We made an adjustment for program and product groupings pertaining primarily to apparel: apparel items normally classified under HTS chapters 61-63 that are eligible to enter duty-free under regional preference programs if they meet rules of origin specified in HTS chapter 98 were identified and marked with a # sign. This accounts for the R#, J#, and D# in the program groupings below. GSP: In terms of products, we defined products covered by GSP as the sum of all tariff lines designated as A or A* in the 2006 U.S. tariff schedule.
In terms of countries, all countries that were designated as eligible for GSP at any point in 2006 were considered beneficiaries. GSP-least developed countries (LDC): In terms of products, we defined products covered by GSP-LDC as all tariff lines designated as A+. In terms of countries, all countries that were designated as eligible for GSP-LDC at any point in 2006 were considered beneficiaries. CBI: We defined this category to include products covered by CBI as E or E*, and products covered by CBTPA as R or R#. In terms of countries, all countries that were designated as eligible at any point in 2006 were considered beneficiaries. It should be noted that some of the countries in the 2006 sample have since lost eligibility for benefits under CBI due to the entry into force of the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR), as follows: Dominican Republic (March 2007), El Salvador (March 2006), Guatemala (July 2006), Honduras (April 2006), and Nicaragua (April 2006). ATPA: We defined products covered by ATPA as J, J*, and products covered by the Andean Trade Promotion and Drug Eradication Act (ATPDEA) as J#, J+. We defined countries as Bolivia, Colombia, Ecuador, and Peru. AGOA: We defined products covered by AGOA as D, D#. We defined countries covered by AGOA as all countries eligible for the program at any point in 2006. In order to examine broad groups of products, we organized the HTS product chapters into 12 sectors as follows: 1. Animal and plant products (HTS, chapters 1-15) 2. Prepared food, beverages, spirits, and tobacco (HTS, chapters 16-24) 3. Chemicals and plastics (HTS, chapters 25, 26, 28-40) 4. Wood and paper products (HTS, chapters 44-49) 5. Textiles, leather, and footwear (HTS, chapters 41-43, 50-60, 64-66) 6. Glassware, precious metals and stones, jewelry (HTS, chapters 68-71) 7. Base metals and articles of base metals (HTS, chapters 72-81 and 83) 8. Machinery, electronics, and high-tech apparatus (HTS, chapters 82, 84, 85, 90) 9. Aircraft, autos and other transportation (HTS, chapters 86-89) 10. Miscellaneous manufacturing (HTS, chapters 91-97) 11. Fuels (HTS, chapter 27) 12. Apparel (HTS, chapters 61-63) For figure 5, with the exception of textiles and apparel, we used the more aggregated groupings presented in our last report. We used the same sample of countries for analysis of import trends over time. Specifically, we assigned each country to a country group based on its eligibility and income category in 2006. Any time series analysis is thus for "2006 program beneficiaries" and "2006 country income groups" rather than the actual program beneficiaries or income groups at earlier points in time. Numerous countries have been removed from programs over the 1992-2006 period, mostly due to attaining high-income status (e.g., Cyprus and Aruba), attaining overall competitiveness (Malaysia), joining the European Union (e.g., Hungary and Poland), or entering into a free trade agreement with the United States (e.g., Mexico and Morocco). For additional information on eligibility for programs by country, see appendix III of GAO-07-1209. AGOA countries: Those countries designated as eligible for the AGOA program at any point in 2006. All of these countries are eligible for GSP, and some of these countries are eligible for GSP-LDC. ATPA countries: Those countries eligible for ATPA at any point in 2006. All of these countries are also eligible for ATPDEA and GSP.
CBI countries: Those countries designated as eligible for the Caribbean Basin Economic Recovery Act (CBERA) at any point in 2006. Some of these countries are also eligible for the Caribbean Basin Trade Partnership Act (CBTPA) and GSP. GSP-only countries: Those countries designated as eligible only for the GSP program. Country income groupings: We relied on World Bank data on country income levels. We relied on United Nations designations of least-developed countries, and on United Nations data on country income when World Bank data were unavailable. Covered products: We defined covered products as all items identified in the 2006 U.S. tariff schedule as eligible for a preference program. We defined products covered by GSP as the sum of all tariff lines designated as A or A* in the U.S. tariff schedule. We defined products covered by GSP-LDC as all tariff lines designated as A+. We defined products covered by CBI as E, E*, and products covered by CBTPA as R, R#. We defined products covered by ATPA as J, J*, and products covered by ATPDEA as J#, J+. We defined products covered by AGOA as D, D#. Eligible beneficiary(ies): We used the term eligible beneficiary for any country designated as eligible for a particular preference program. Dutiable products/imports: We defined dutiable products as all products that were subject to most favored nation (MFN) tariffs that were greater than zero in 2006. We defined the value of dutiable imports as total U.S. imports minus total imports of MFN duty-free products. Preference eligible imports: We defined preference eligible imports as the value of imports of covered products from eligible beneficiaries. Preference imports: We defined preference imports as the value of imports actually entered under a given preference program or programs. Preference margins: We defined the preference margin as the difference between the otherwise applicable or MFN tariff rate and the rate at which the product is eligible to enter under U.S. preference programs. Most products covered by preferences enter duty-free, but some products enter at reduced (nonzero) duties. We relied on others' estimates of U.S. preference margins, specifically those by a team of ITC and World Bank economists given responsibility for preparing estimates for U.S. programs as part of a multicountry study organized by the World Bank. Coverage: We considered coverage relative to two metrics: (1) the number of lines in the U.S. tariff schedule and (2) the total value of imports of covered products divided by the total value of imports of dutiable products (i.e., dutiable imports) from each preference partner. (See above for definitions of "covered products" and "dutiable products/imports.") Utilization: We calculated this as a ratio of the value of preference imports (imports actually entering under U.S. preferences) relative to (divided by) the value of imports of covered products. Program averages for these measures were calculated by: For coverage, summing the value of preference-eligible imports from all partners and then dividing it by the sum of the value of dutiable imports from all partners. For utilization, summing the total value of preference imports from all partners and then dividing it by the total value of imports of covered products from all partners, that is, the sum of each partner's covered products.
Country averages related to total preference program coverage and utilization measures were calculated by: For coverage, summing the value of preference-eligible imports under all programs for each partner, including adjusting to avoid double counting where a product is covered by more than one program, and then dividing by the value of dutiable imports from that partner. For utilization, summing the value of preference imports from that country actually entering under preferences, adjusting to avoid double counting, and then dividing by the value of imports of covered products from that country. Diversification: To measure how evenly a country's trade is distributed across products, we computed a normalized index: H* = 1 − (√(Σᵢ (xᵢ/X)²) − √(1/N)) / (1 − √(1/N)), where xᵢ represents the import/export value of the ith commodity, X is the country's total imports/exports to the United States in 2006, and N is the number of products. The index value (H*) ranges from 0 to 1. If the products are evenly distributed, the value of the index is 1; the more concentrated the product distribution, the closer the value is to 0. The index is a function of the mean and variance of import/export shares across commodity groups. To assess the opportunities extended to developing countries under U.S. preference programs, we examined the scope of programs' coverage by beneficiary and product, the size of tariff cuts (or margins of preference), and some eligibility conditions that can affect the ability of beneficiaries to access program opportunities. We also examined the extent to which countries are using the available opportunities. Our analysis of U.S. tariff and trade data shows that duty-free coverage under U.S. trade preference programs has increased over time. Considered in combination, U.S. preference programs now extend duty-free status to most of the product lines in the U.S. tariff schedule. However, coverage varies notably by program, beneficiary, and product. Because eligibility for duty-free status is cumulative in that countries eligible for one preference program may also be granted additional preferences depending on their income and regional memberships, the potential duty-free access for particular countries can vary substantially. Figure 9 shows that, as of 2006, the countries eligible for GSP only were accorded duty-free access to 69 percent of the total number of tariff lines in the U.S. tariff schedule, or 7,285 lines (3,879 MFN duty-free lines plus 3,406 additional lines that are duty-free under GSP). All three of the subsequently enacted regional programs, and their enhancements, improve upon GSP to varying degrees. The expansion of GSP for LDCs in 1996 also increased the number of duty-free lines for LDC partners. The proportion of tariff lines accorded duty-free status also varies by product. Figure 10 shows the distribution of dutiable and duty-free lines by product group. GSP alone offers relatively extensive duty-free coverage to certain manufactured goods, such as chemicals and plastics; glassware, precious metals, and jewelry; and machinery and electronics, where coverage exceeds 40 percent of tariff lines. However, duty-free coverage is much more limited for other product groups. Textiles, footwear, leather, and apparel are product groups where duties still apply to the largest number and highest percentage of lines, but where regional programs offer notable improvements in coverage over GSP.
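To make the coverage, utilization, and diversification measures defined in this appendix concrete, the short sketch below restates their arithmetic in Python. All tariff lines, values, and field names in the sketch are hypothetical and of our own choosing; it illustrates the definitions above and is not the analysis code used for this report.

```python
from math import sqrt

# Hypothetical 2006 import records for one preference partner, in millions
# of dollars. "dutiable" marks lines with a nonzero MFN rate; "covered"
# marks lines eligible under a preference program; "pref_value" is the
# value actually entered under the program.
imports = [
    {"line": "6109.10.00", "value": 120.0, "dutiable": True,  "covered": True,  "pref_value": 90.0},
    {"line": "0603.11.00", "value": 40.0,  "dutiable": True,  "covered": True,  "pref_value": 38.0},
    {"line": "2709.00.10", "value": 300.0, "dutiable": True,  "covered": False, "pref_value": 0.0},
    {"line": "8471.30.01", "value": 80.0,  "dutiable": False, "covered": False, "pref_value": 0.0},
]

dutiable = sum(r["value"] for r in imports if r["dutiable"])
covered = sum(r["value"] for r in imports if r["covered"])
entered = sum(r["pref_value"] for r in imports)

# Coverage: preference-eligible imports relative to dutiable imports.
coverage = covered / dutiable

# Utilization: imports actually entered under the program relative to
# imports of covered products.
utilization = entered / covered

# Diversification: normalized index over commodity shares, scaled so that
# an even distribution yields 1 and full concentration approaches 0.
total = sum(r["value"] for r in imports)
shares = [r["value"] / total for r in imports]
n = len(shares)
h = sqrt(sum(s ** 2 for s in shares))
h_star = 1 - (h - sqrt(1 / n)) / (1 - sqrt(1 / n))

print(f"coverage = {coverage:.0%}, utilization = {utilization:.0%}, H* = {h_star:.2f}")
```

Program-level averages follow the same arithmetic, with the numerators and denominators summed across all partners before dividing.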
For example, with AGOA’s enactment and the enhancements of CBI and ATPA offered since 2002, 33 percent of apparel lines are eligible to enter duty-free under regional programs, and 43 percent of apparel lines altogether (including MFN and GSP) have duty-free access. Coverage can also be examined relative to imports from beneficiary countries using the ratio of preference eligible imports to total dutiable imports from beneficiaries eligible for particular programs. Our analysis (see table 6) shows that: (1) countries eligible for only GSP have the least coverage of partners’ dutiable imports—approximately 25 percent, (2) regional programs and GSP for LDC’s have much higher coverage of partners’ dutiable imports, and (3) country variations in coverage are wide. For example, 35 GSP beneficiaries including Lebanon, Paraguay, Somalia, and Zimbabwe have high coverage rates, exceeding 75 percent of the value of their dutiable imports. Yet, 48 GSP beneficiaries such as Bangladesh, Egypt, Pakistan, and Uzbekistan have low coverage rates (less than 25 percent of dutiable imports). The value and effectiveness of tariff preferences depends on the magnitude of the tariff that would otherwise be imposed on imported products, often referred to as the preference margin. Preferences can have an impact only if there is a nonzero tariff that otherwise would apply in the U.S. market. Moreover, if the MFN (normally applicable) tariff on a product is negligible, the advantage provided by preferences can be so small as to become an insignificant factor in trade decisions. A recent effort to quantify margins of preferences across all U.S. preference programs by staff economists at the ITC and the World Bank shows that preference margins are relatively high for apparel products, as well as certain agricultural goods (melons, cut flowers, frozen orange juice, raw cane sugar, and asparagus); they tend to be relatively low for other products and fairly uniform among programs. Specifically, the authors found the following: Across member countries and all eligible U.S. nonagricultural imports, AGOA preference margins were the highest on average (14 percent) in 2003. CBTPA preference margins ranked second with an average of 9 percent, and ATPA preference margins third with an average of 8 percent. Nonapparel preference margins average 3 percent to 5 percent for ATPA, CBTPA, and CBERA countries and show little variation across countries within each program. AGOA nonapparel preference margins are much higher—5 percent to 10 percent for more than half the countries, and 10 percent to 20 percent for a few. Average apparel margins under AGOA, CBTPA, and ATPA are two or three times as high as those for nonapparel for nearly all preference beneficiary countries. Despite its importance in AGOA trade, average petroleum preference margins by country did not exceed 2 percent, and most were well below 1 percent. All in all, the authors conclude, while “the potential duty savings from all U.S. preference programs represent a very small share of beneficiaries’ dutiable exports to the United States, countries in the CBTPA and those in the AGOA-LDC program show duty savings exceeding 10 percent of their dutiable exports to the United States.” In fact, the potential duty savings for 35 countries---all but 3 of whom qualify for regional programs—exceed 5 percent of the value of their dutiable exports to the United States. 
As a result, they find that preferences are sufficiently important to 29 countries' exports to warrant concern over the impact of preference erosion due to multilateral and bilateral liberalization. At the same time, they note that some of this liberalization has since occurred, with the phase-out of global textile quotas in 2005. Conditions on product entry are also a significant factor affecting opportunities and trade under U.S. preference programs. Two specific conditions, "competitive need limits" and "rules of origin," illustrate how administration of program provisions, although addressing important policy considerations, may affect the ability of beneficiary countries to fully access the opportunities otherwise offered by U.S. preference programs. GSP places export ceilings or "competitive need limits" (CNL) on eligible products for certain beneficiaries that exceed specified value and import market share thresholds. (LDCs and AGOA beneficiaries are exempt.) Our analysis of 2006 data shows that some 37 percent of the value of imports of GSP products from non-LDC, non-AGOA GSP beneficiaries—or $13 billion of the $35 billion—was excluded from entering duty-free under GSP, largely due to CNLs. Researchers also warn that rules of origin and related paperwork are often complex and can raise costs. As a result, it may not be worth incurring the expense of compliance to use preferences. Rules of origin for U.S. trade preference programs typically specify a minimum percentage of value-added to the entering product that must come from the beneficiary country in order to qualify for duty-free treatment. However, some programs allow countries to "cumulate" inputs from other countries or regions. More complex rules apply to some products, notably apparel. The fact that U.S. Customs and Border Protection (CBP)—the U.S. agency charged with enforcing such rules when goods enter the United States—used a 70-page PowerPoint presentation to train its officers on the conditions associated with apparel access under U.S. preference programs is illustrative of the complexity of such rules. For example, our meetings with CBP and statements by Haitian textile industry groups indicate that some of the rules of origin for HOPE are highly complex to administer and use. Indeed, as recently as late November 2007, industry sources indicated to us that HOPE had yet to become fully operational, and Haiti had yet to benefit, because of delays in issuing export visas and the complicated nature of HOPE rules of origin. Another possible indication of the impact of rules of origin is the "fill rate" for each region's quotas (known as "tariff preference levels"). Within Africa, the LDCs that qualify for liberalized rules of origin, which allow "third country" (non-U.S., non-AGOA) fabric and yarn to be used in apparel that still qualifies for duty-free entry under AGOA, achieved a relatively high 43.3 percent fill rate for their quotas in 2006. Other African suppliers, which must use domestic African or U.S. inputs, had a fill rate of just 1.8 percent.
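The interaction of MFN tariffs, preference margins, and potential duty savings described above can be illustrated with a short worked sketch. The products, rates, and trade values below are hypothetical, chosen only to echo the pattern the ITC and World Bank economists describe; the sketch is not drawn from this report's data.

```python
# Worked illustration of preference margins and potential duty savings.
# The margin is the MFN rate minus the preferential rate; all figures
# below are hypothetical.
lines = [
    # (product, MFN ad valorem rate, preferential rate, dutiable imports $M)
    ("apparel",     0.165,  0.0, 50.0),   # high MFN rate, so a large margin
    ("cut flowers", 0.064,  0.0, 10.0),
    ("petroleum",   0.0053, 0.0, 400.0),  # negligible MFN rate, tiny margin
]

total_dutiable = sum(value for _, _, _, value in lines)
savings = 0.0
for product, mfn, pref, value in lines:
    margin = mfn - pref
    savings += margin * value
    print(f"{product}: margin = {margin:.1%}")

# Potential duty savings as a share of dutiable exports to the United
# States, the yardstick used in the study cited above.
print(f"duty savings = ${savings:.2f} million ({savings / total_dutiable:.1%} of dutiable exports)")
```

In this example, the petroleum line supplies most of the trade but, because its margin is negligible, contributes little to the duty savings, mirroring the finding above that petroleum margins rarely exceeded 2 percent.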
Recent economic literature also suggests that AGOA had some success in increasing export activity for some countries, but the increased exports are mainly associated with the liberalized apparel provisions. Yet others are concerned that without requiring more Sub-Saharan African value-added (e.g., through local sourcing and production), the trade, investment, and supply linkages to the local economy that foster development and diversification may not accrue to AGOA beneficiaries. As a result, the recent long-term extension of the third country fabric provision was accompanied by a new requirement to use fabrics deemed widely available for commercial use (i.e., in "abundant supply") in Africa. However, at a recent ITC hearing, a major U.S. jeans manufacturer expressed concern that the limitations the law places on its flexibility to source fabric are making it reluctant to continue purchasing from African producers. Our fieldwork revealed examples where complex rules-of-origin requirements appear to be complicating preference trade. In Ghana, for example, we met with a firm that decorates T-shirts with original designs, using traditional African decorative techniques. This firm had been importing plain white T-shirts from Honduras to decorate in Ghana and then exporting them to the United States. We were surprised to learn that the firm had to pay duty on the finished product exported to the United States, since the inputs were eligible for duty-free treatment under U.S. preference programs. For example, the plain white T-shirts manufactured in Honduras would have entered the United States duty-free under CBI. The value-added through the decorative process in Ghana would also be exempt from duties under AGOA. However, because the T-shirt manufactured in Honduras did not meet the rules of origin requirements for the AGOA program, this company was obliged to pay duty on the finished decorated shirts. The company is now seeking to shift its T-shirt purchases to South Africa, or another AGOA beneficiary, since this sourcing would enable it to qualify for duty-free treatment under AGOA. On the other hand, liberalizing quotas and rules of origin have been a principal means by which the regional programs have been improved in recent years. For example, CBTPA was enacted in 2000 to enhance the CBI program, and it temporarily eliminated tariffs and most quantitative restrictions on certain products. The CBTPA liberalized rules of origin for certain textiles and apparel in an effort to mitigate adverse effects on CBI suppliers caused by diversion of production and U.S. trade to Mexico when the North American Free Trade Agreement (NAFTA) entered into force. The change in rules appears to have benefited CBI suppliers somewhat. Notably, items entering under the CBTPA, such as cotton T-shirts and trousers, had become leading imports from Central America and the Dominican Republic at the time the ITC assessed the impact of CAFTA-DR in 2005. Yet apparel and footwear were also the Central American sectors expected to benefit most from further liberalization of U.S. access under CAFTA-DR. In addition, CAFTA-DR attempted to sustain and encourage subregional integration within the Americas by further loosening rules of origin to allow "cumulation" (adding together the value) of inputs from U.S., CAFTA-DR, NAFTA, and CBI suppliers. Bringing such attempted improvements in opportunities to fruition remains complex.
In our visit to Haiti, for example, there was uncertainty as to how CAFTA-DR will interact with Haiti's new HOPE program. In particular, concern was expressed over whether existing production-sharing operations between the Dominican Republic and Haiti would be eligible for duty-free entry. Our analysis of the share of preference eligible imports actually entering under each preference program shows that the benefit of U.S. preference programs may vary considerably by program and partner. Figure 11 shows the 2006 utilization of U.S. preference programs, where the "utilization rate" is defined as the ratio of actual preference imports under each program to eligible imports. As figure 11 indicates, the utilization rate for the regional preference programs offered by the United States is high, particularly relative to the utilization of GSP. To some extent, low utilization of GSP may reflect the fact that coverage across programs is relatively uniform for many products, whereas program conditions and rules of origin vary. As a result, countries that have access to both GSP and regional programs may opt to use the regional programs. The utilization rate for GSP or GSP-LDC imports from all eligible partners was 61 percent. The utilization rate for imports from countries eligible for only GSP or GSP-LDC was higher, at about 75 percent. Countries eligible for GSP-LDC, with enhanced duty-free access, had a utilization rate of 58 percent. Countries that were eligible for AGOA and CBI/CBTPA had utilization rates of 77 percent and 47 percent, respectively. The four Andean countries eligible for ATPA/ATPDEA had the highest utilization rate, 90 percent. Our analysis of data for each program (see table 6) shows variation in utilization of the programs across eligible countries in 2006. In brief, our analysis finds the following: GSP or GSP-LDC—Analysis of GSP shows that low-income countries are well represented among the top countries in terms of utilization rates, as 9 of the 35 countries with high utilization rates are designated low income. The utilization rates of the leading GSP exporters to the United States in terms of value vary widely, ranging from 99 percent (Zimbabwe) to 9 percent (Chad). AGOA—Nigeria uses AGOA for 96 percent of its exports to the United States and dominates the share of U.S. imports under the program. Other major suppliers are Angola, Chad, and Gabon. The program also appears to be highly utilized by countries with smaller import values, including Botswana, Cameroon, Cape Verde, Ethiopia, Kenya, Lesotho, Madagascar, Mauritius, Mozambique, South Africa, Senegal, Swaziland, Tanzania, and Uganda. Perhaps as a reflection of weaknesses in the trade capacities of some AGOA-eligible countries, 12 of the 38 AGOA-eligible countries (Benin, Burundi, Djibouti, Gambia, Guinea-Bissau, Guinea, Liberia, Rwanda, Sao Tome and Principe, Seychelles, Sierra Leone, and the Democratic Republic of the Congo, formerly Zaire) did not export under the program, though several did export under GSP. ATPA/ATPDEA—Most of the approximately $13.5 billion in U.S. preference imports under ATPA/ATPDEA came from three beneficiaries. However, utilization of the program by all beneficiary countries, including Bolivia, is relatively high. CBI/CBTPA—The CBI/CBTPA preference program is the most varied regional program in terms of the development status of eligible countries: seven of the countries eligible for CBI are high income, one (Haiti) is low income, and the rest are middle income.
However, Trinidad and Tobago—a high-income country—is the leading supplier and has the highest utilization rate under this preference program. In addition to the individual named above, the following persons made major contributions to this report: Kim Frankena, Assistant Director; Juan Gobel, Assistant Director; Ann Baker; Gezahegne Bekele; Ken Bombara; Karen Deans; Etana Finkler; Richard Gifford Howland; Ernie Jackson; Marisela Perez; and Celia Thomas. The team also benefited from the expert advice and assistance of Martin de Alteriis, Susan Offutt, and Mark Speight.
U.S. trade preference programs promote economic development in poorer nations by providing export opportunities. The Generalized System of Preferences, Caribbean Basin Initiative, Andean Trade Preference Act, and African Growth and Opportunity Act unilaterally reduce U.S. tariffs for many products from over 130 countries. However, three of these programs expire partially or in full this year, and Congress is exploring options as it considers renewal. GAO was asked to review the programs' effects on the United States and on foreign beneficiaries' exports and development, identify policy trade-offs concerning these programs, and evaluate the overall U.S. approach to preference programs. To address these objectives, we analyzed trade data, reviewed trade literature and program documents, interviewed U.S. officials, and did fieldwork in six countries. Overall, trade preference programs have a small effect on the U.S. economy. Some U.S. industries have shared-production arrangements with foreign producers that depend on preference benefits, while others compete with preference imports. Preference programs are used to advance U.S. goals, such as intellectual property rights protection. Developing countries extensively use preferential access to boost exports to the United States. Preference imports have grown faster than overall U.S. imports, and recent changes in product coverage have expanded beneficiaries' export opportunities. Gaps in duty-free access continue for sectors such as agriculture and apparel. Preference exports remain concentrated in a few countries and products, but trends indicate greater diversification and increased use by the poorest countries. Those GAO interviewed in beneficiary countries also stressed the benefits derived from preferences. Preference programs balance two key policy trade-offs. First, programs offer duty-free access to the U.S. market to increase beneficiaries' trade, while attempting not to harm U.S. industries. Second, Congress faces a trade-off between longer program renewals, which may encourage investment, and shorter renewals, which may provide more opportunities to change the programs to meet evolving priorities. Finally, some beneficiary countries' concerns over the eroding value of preferences must be weighed against the likely greater economic benefits of broader trade liberalization. Trade preference programs have proliferated over time, becoming more complex, but neither Congress nor the interagency Trade Policy Staff Committee that manages the programs has formally considered them as a whole. Responsive to their legal mandates, the Office of the U.S. Trade Representative (USTR) and other agencies use different approaches to monitor compliance with program criteria, resulting in disconnected review processes and gaps in reporting on some countries and issues. Separate reporting and examination also hinder measuring programs' contribution to economic development.
In 1986, Congress amended Title IV-E of the Social Security Act to authorize federal funds targeted to assist youth aged 16 and over in making the transition from foster care to living independently of the child welfare system, creating the Independent Living Program. This program was designed to prepare adolescents in foster care to live self-sufficiently once they exited the child welfare system. Several amendments were made to the Independent Living Program over the years, but the passage of the Foster Care Independence Act of 1999 (FCIA) and the creation of the John H. Chafee Foster Care Independence Program (Chafee Program) represented the most significant changes to the federal program since its inception. FCIA doubled the federal funds available for independent living programs to $140 million each year. These funds are allocated to states based on their share of the nation's foster care population. In addition to providing increased funding, FCIA eliminated the minimum age limit of 16 years and provided states with the flexibility to define the age at which children in foster care are eligible for services to help them prepare for independent living, as long as services are provided to youth who are likely to remain in foster care until 18 years of age. The law also provided several new services to help youth make the transition to adulthood. It allowed states to use up to 30 percent of their state allotment for room and board for former foster care youth up to age 21. It also gave states the option to expand Medicaid coverage to former foster care adolescents between ages 18 and 21. Title IV-E was amended again in 2002 to provide foster care youth with vouchers for postsecondary education and training under the Education and Training Vouchers (ETV) program and to authorize an additional $60 million for states to provide postsecondary education and training vouchers of up to $5,000 per year per youth. Eligible participants include youth otherwise eligible for services under the states' Chafee Programs, youth adopted from foster care after attaining the age of 16, and youth participating in the voucher program on their 21st birthday (until they turn 23 years old) as long as they are enrolled in a postsecondary education or training program and are making satisfactory progress toward completion of that program. In addition, the law required that states make every effort to coordinate their Chafee Programs with other federal and state programs for youth, such as the Runaway and Homeless Youth Program, abstinence education programs, local housing programs, programs for disabled youth, and school-to-work programs offered by high schools or local workforce agencies. Further, states were required to coordinate their programs with each Indian tribe in the state and offer the state's independent living services to Indian children. To receive funds under the Chafee Program, states were required to develop multiyear plans describing how they would design and deliver programs and to submit program certifications. The multiyear Chafee plans must include a description of the state's program design, including its goals, strategies, and implementation plan for achieving the purposes of the law. States were also required to certify that they would operate a statewide independent living program that complied with the specific aspects of the law, such as providing training to foster parents, adoptive parents, workers in group homes, and case managers on issues confronting adolescents preparing for independent living.
Further, to receive annual funds, ACF required states to submit annual reports that described the services provided and activities conducted under their Chafee Programs, including information on any program modifications and their current status of implementation; provide a record of how funds were expended; and include a description of the extent to which the funds assisted youth ages 18 to 21 in making the transition to self-sufficiency. FCIA also required that HHS develop and implement a plan to collect information needed to effectively monitor and measure a state's performance, including the characteristics of youth served by independent living programs, the services delivered, and the outcomes achieved. Further, FCIA required HHS to conduct evaluations of independent living programs deemed to be innovative or of potential national significance using rigorous scientific standards to the maximum extent practicable, such as random assignment to treatment and control groups. While overall federal funding for state independent living programs doubled with the passage of FCIA, there were significant variations in the changes to state allocations, and the maximum amount of funds available at the time of our 2004 report for each eligible foster care youth ranged from $476 to $2,300. Under the previous independent living program, states received funds ranging from $13,000 in Alaska to more than $12 million in California. In the first year of funding under FCIA, Alaska and 8 other states received the guaranteed minimum of $500,000, while California received more than $27 million (see table 1). Some states were unable to spend all of their federal allocations in the first 2 years of increased funding under the program. For example, in 2001, 20 states returned nearly $10 million in federal funding to HHS, and in 2002, 13 states returned more than $4 million. ACF regional officials reported that one reason for these unspent funds was that some states did not initially have the infrastructure in place to quickly absorb the influx of funds. Data provided in a July 2007 Congressional Research Service memo to Congress showed that 9 states returned less than 1 percent of total Chafee funding in 2004 (see app. I). At the time of our 2004 report, we could not determine the exact amount of funding states had available to spend on each youth eligible for independent living services because of the lack of data on eligible youth emancipated from foster care. However, available data at that time on youth in foster care suggested that states had different amounts of funds available for services to youth in foster care. We compared each state's 2004 FCIA allocation with its 2002 population of eligible youth in foster care. This comparison showed that maximum funding for independent living services ranged from $476 per foster care youth in West Virginia to almost $2,300 per foster care youth in Montana. These differences were due in part to the new provision that allowed states to define the age ranges within which youth were eligible for independent living services. For example, 4 states reported in our survey that they offered independent living services to youth beginning at age 12, while 27 states reported offering services beginning at age 14. In addition, the funding formula is based on the total number of all children in foster care. However, some states have a larger share of youth eligible for independent living services than other states, even when their eligibility age range is the same, as the sketch following this paragraph illustrates.
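The per-youth figures above are simple division: each state's formula allocation over its count of eligible youth. A minimal sketch of that arithmetic follows, assuming the share-based formula and $500,000 guaranteed minimum described earlier; the state names, shares, and youth counts are hypothetical, and the sketch ignores the small statutory set-aside for HHS research, evaluation, and technical assistance.

```python
NATIONAL_FUNDS = 140_000_000  # annual Chafee appropriation
MINIMUM_GRANT = 500_000       # guaranteed minimum for small states

# Hypothetical states: share of the national foster care population and
# count of youth deemed eligible under each state's own age rules.
states = {
    "State A": {"foster_share": 0.002, "eligible_youth": 600},
    "State B": {"foster_share": 0.190, "eligible_youth": 23_000},
}

for name, s in states.items():
    allocation = max(NATIONAL_FUNDS * s["foster_share"], MINIMUM_GRANT)
    per_youth = allocation / s["eligible_youth"]
    print(f"{name}: allocation = ${allocation:,.0f}, ${per_youth:,.0f} per eligible youth")
```

Because the allocation formula counts all children in foster care while each state defines the eligible age range, states with similar allocations can report very different amounts per eligible youth.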
For example, of the 15 states reporting in our survey that youth are eligible for services between the ages of 14 and 21, 3 states had 25 percent or less of their foster care population within this age range, while in 3 other states, this age range accounted for over 40 percent of the total foster care population. In our 2004 survey, 40 states reported expanding services to youth younger than they had previously served, and 36 states reported serving older youth, but states reported service gaps in critical areas, such as mental health and housing. The number of states that reported providing core independent living services, such as independent living skills assessments, daily living skills training, and counseling, to youth younger than 16 more than doubled after FCIA. Similarly, more states reported offering these supports and services to youth who were emancipated from foster care. Many states also began to offer the new services to support youth who emancipated from foster care. These services include the Education and Training Vouchers, Medicaid health insurance, and assistance with room and board. ETV: All states, the District of Columbia, and Puerto Rico began receiving funds under the ETV program to assist youth seeking postsecondary education, but 26 states did not spend all of the funding received (see app. II). A report from the National Resource Center for Youth Development showed that states provide a range of benefits to youth eligible for ETVs. Over 90 percent of the 38 state independent living coordinators responding to a survey reported offering financial support to youth for room and board, school supplies, equipment and uniforms, school-related fees, and transportation costs. Eighty-four percent of states made payments for child care for the dependents of youth, and 60 percent of states reported making payments for college or university health plans on behalf of youth. States were challenged to spend their full funding allotments. Mississippi returned almost all of its 2004 ETV funds, and 14 other states returned over 20 percent of their funding allotments. Medicaid: Recent information from the American Public Human Services Association shows that all states are now using or planning to use the Chafee option or other means to extend Medicaid coverage to youth. In our 2004 survey, 31 of 50 state independent living coordinators had reported offering Medicaid benefits to at least some emancipated youth to help them maintain access to health care benefits while they transitioned to independence. In 2007, the American Public Human Services Association reported that 22 states planned to use or had already started using the Chafee option to offer Medicaid coverage to youth who age out of foster care. The study also found that the remaining 28 states and the District of Columbia were using other methods, such as the State Children's Health Insurance Program or the Medicaid waiver demonstration program, to extend coverage to youth. Housing assistance: In our 2004 survey, 46 states reported that they offered assistance with room and board to youth who had been emancipated from foster care, and the 4 states we visited reported offering a range of housing supports to assist youth. At the time of our visit, Connecticut provided several housing options to meet the needs of youth at varying levels of independence, including group homes, supervised apartment sites, and unsupervised apartment sites with periodic visits from case managers.
While the 3 other states we visited offered a more limited supply of housing options, all provided some type of housing subsidy or placement. Existing services: Chafee Program funds were also used to improve the quality of existing independent living services and refocus the attention of state programs, according to state officials we visited. For example, local officials in Florida said that prior to FCIA, training in daily living skills was provided haphazardly, and in many cases unqualified staff taught classes even though such training was considered a core component of their independent living program. At the time of our visit, Florida officials said that the state had redesigned staff training, improved instructor quality, and was better prepared to provide youth with the skills necessary to live independently outside of the foster care system. States differed in the proportion of eligible youth served under their respective independent living programs. In our 2004 survey, 40 states reported serving about 56,000 youth—or approximately 44 percent of youth in foster care who were eligible for independent living services in these states. About one-third of reporting states were serving less than half of their eligible foster care youth population, while an equal proportion of states were serving three-fourths or more. While states expanded eligibility to younger youth, services continued to be directed mostly at youth age 16 and older in the states we visited. Certain gaps in the availability of critical services were reported, which may have contributed to the challenge of serving higher numbers of eligible youth. States also reported that these challenges were more prominent in rural areas. Service gaps included the following: Mental health services: Youth in foster care often require mental health services that continue beyond emancipation. However, states continue to be challenged in providing youth with a smooth transition between the youth and adult mental health systems. Of the 4 states we visited in 2004, 3 cited difficulties due to more stringent eligibility requirements in the adult system, different levels of services, and long waiting lists for services. Challenges with mental health services remained in 2006, when 32 state child welfare directors responding to our survey reported dissatisfaction with the level of mental health services. Mentoring services: Research studies indicate that the presence of positive adult role models is critical for youth in foster care because family separations and placement disruptions have been found to hinder the development of enduring bonds. Although the majority of states reported in our 2004 survey that they offered mentoring programs to youth, officials in the states we visited cited challenges in providing all youth with access to mentoring programs to establish and maintain such relationships. For example, in Connecticut, one program director reported challenges recruiting adults to serve as mentors, especially men willing to make a 1-year commitment to an adolescent boy. In addition, some state and local officials and service providers seemed unclear about what should be included in a high-quality mentoring program and how to identify qualified service providers. Securing safe and suitable housing: Providing appropriate housing also remained a critical service gap.
Youth we spoke with across the 4 states we visited in 2004 said that locating safe and stable housing after leaving foster care was one of their primary concerns in their transition to independence, and state officials reported challenges meeting youths' housing needs. Youth reported difficulties renting housing because of a lack of an employment history, a credit history, or a cosigner. State and local officials in the states we visited said the availability of housing resources for foster youth during their initial transition from foster care depended on where they lived, and in some cases the benefits provided did not completely meet the needs of youth or were available only to certain youth. For example, at the time of our visit, local officials in Washington reported that housing subsidies may not completely offset expenses for youth in expensive urban areas, like Seattle, and that rental housing in some rural areas was scarce. This service gap was identified by states again in our 2006 survey, when 31 state child welfare directors reported dissatisfaction with the level of housing for foster youth transitioning to independence. Youth and foster family engagement: State and local officials and service providers in the 4 states we visited said that it was difficult to get some youth to participate in the independent living programs and that foster parents were sometimes reluctant partners. While youth were generally offered incentives, such as cash stipends, to participate in daily living skills training or other activities, officials emphasized that participation is voluntary and that it is critical for foster parents to support and encourage youth participation in the program. After FCIA, 49 states reported increased coordination with a number of federal, state, and local programs that can provide or supplement independent living services, but officials from the 4 states we visited reported several barriers to developing the linkages necessary to access services under these programs across local areas. States we surveyed reported working with a range of service providers, such as Job Corps, workforce boards, and local housing agencies. States we visited used different strategies to develop linkages among state youth programs. Three of the states we visited reported establishing state-level work groups that included representatives from the independent living program and other state agencies to bring agency officials together to discuss the needs of youth in foster care and possible strategies for improving service delivery. For example, Florida's legislature mandated a state-level work group to facilitate information sharing at the state level among various agencies, such as the State Departments of Children and Families and Education, the Agency for Workforce Innovation, and the Agency for Health Care Administration. Additional strategies states used to establish linkages with other federal, state, or local programs included designating liaisons between agencies or programs and forming less formal collaborative arrangements. Officials also reported developing linkages with other private resources in their communities, such as business owners, to provide services to youth in the independent living program. Despite states' efforts, we continued to find in our 2006 survey that states were least likely to address challenges in providing services, such as mental health care, that are typically provided by other agencies outside of the child welfare system.
Officials in the 4 states we visited in 2004 reported several barriers that hindered their ability to establish linkages with other agencies and programs, including the lack of information on the array of programs available in each state or local area and differences in program priorities. Officials from 3 states said that they relied on local officials to identify potential partners and initiate and maintain coordination efforts, and while individuals in some local areas may have developed successful collaborations with service providers in their area, these relationships have not always been expanded statewide. This was due in part to the fact that state and local child welfare officials differed in their awareness of resources available from other agencies. Some gaps in awareness may have been partly due to the caseworker turnover rates reported by the states we visited. Caseworkers' lack of knowledge about available programs may have contributed to foster parents and youth reporting that they were unaware of the array of services available from other federal, state, or local programs. In addition, officials cited barriers to establishing linkages with other federal and state programs because of different program priorities. Differences in performance goals among programs can affect the ability of independent living staff to obtain services for foster youth from other agencies. In North Carolina, state officials we visited in 2006 said that about 70 percent of children and families in the child welfare system received services from multiple public agencies, and the Catalog of Federal Domestic Assistance (CFDA)—a repository of information on all federal assistance programs—lists over 300 federal programs that provide youth and family services. In October 2003, the White House Task Force for Disadvantaged Youth recommended that the CFDA be modified to provide a search feature that could be used to identify locations where federally funded programs operate. All states developed multiyear plans as required under FCIA and submitted annual progress reports to ACF for their independent living programs, but the absence of standard, comprehensive information within and across state plans and reports precludes using them at the state and federal levels to monitor how well the programs are working to serve foster care youth. HHS has not yet implemented its plan to collect information to measure states' program performance, and while some states reported collecting some data, states have experienced difficulties in contacting youth to determine their outcomes. HHS has begun to evaluate selected independent living programs. State plans and annual reports: All states developed state plans as required by FCIA that described the independent living services they planned to provide to foster care youth and submitted annual reports to ACF, but for several reasons, these plans and reports cannot be used to assess states' independent living programs. While ACF officials stated that the plans and annual reports served as the primary method the agency used to monitor states' use of Chafee Program funds, ACF did not require states to use a uniform reporting format, set specific baselines for measuring progress, or report on youths' outcomes. As a result, each state developed plans and reports that varied in their scope and level of detail, making it difficult to determine whether states had made progress in preparing foster youth to live self-sufficiently.
On the basis of our review of plans from all 50 states and the District of Columbia covering federal fiscal years 2001 through 2004, and annual reports for 45 states from federal fiscal years 2001 and 2002, we found the following: Few states both organized the information in their plans to address the purposes of FCIA and presented specific strategies they would use to meet these purposes. The plans varied in their usefulness in establishing the outcomes the states intended to achieve for youth. Annual reports for all 45 states contained information that did not directly relate to information in their state plans, making it unclear whether the differences were due to service changes or missing information. Of the 90 annual progress reports we reviewed, 52 did not include clear data that could be used to determine progress toward meeting the goals of the states' independent living programs. ACF officials said that they recognize the limitations of these documents as tools to monitor states' use of independent living program funds, but explained that they rely on states to self-certify that their independent living programs adhere to FCIA requirements. Staff in ACF's 10 regional offices conduct direct oversight of the program by reviewing the plans and reports, interpreting guidance, and communicating with the states. However, officials in three offices reported during our 2004 review that their review of the documents was cursory and that the plans and annual reports do not serve as effective monitoring tools. In addition, ACF officials reported that the Child and Family Services Review (CFSR) used to evaluate the states' overall child welfare systems could serve as a tool to monitor independent living programs, but the CFSR is limited in the type and amount of data collected on youth receiving independent living services. National Youth in Transition Database: ACF has not completed efforts to develop a plan to collect data on youths' characteristics, services, and outcomes in response to the FCIA requirement, and some states that are attempting to collect information on youths' outcomes are experiencing difficulties. In 2000, ACF started to develop the National Youth in Transition Database (NYTD) to collect information needed to effectively monitor and measure states' performance in operating independent living programs. The agency issued proposed rules on July 14, 2006, but as of July 2007, final rules governing the system had not been issued. The proposed rules include an approach to collect information on all youth who received independent living services, youth who are in foster care at age 17, and follow-up information on youth at ages 19 and 21. For any youth who receives independent living services from either the child welfare agency or another source supported by federal Chafee funds, the state must report a series of data elements, including the type of independent living services received, such as housing education or health education and risk prevention. These data are to be collected on an ongoing basis for as long as the youth receives services. In order to develop a system to identify youth outcomes, HHS proposes collecting information on a baseline population of youth at age 17. All youth who turn 17 years old while in foster care would be surveyed on a series of outcomes, such as their current employment status. States would be required to conduct follow-up surveys with the youth at ages 19 and 21.
HHS would allow the states to pull a sample from this baseline population with which to conduct these follow-up surveys. For example, California had over 7,500 youth in care in 2004 who were 17 years old. On the basis of the proposed sampling methodology, the state would be allowed to survey a minimum of 341 19-year-olds in the follow-up effort. According to results from our survey, in federal fiscal year 2003, 30 states attempted to contact youth who had been emancipated from foster care to obtain initial information on their status, including education and employment outcomes. Of those states, most reported that they were unsuccessful in contacting more than half of the youth. Further, 21 states reported attempting to follow up with emancipated youth after a longer period of time had elapsed but had trouble reaching all the youth. Similarly, officials in the states we visited reported that collecting outcome data is especially challenging since there is little they can do to find youth unless the youth themselves initiate the contact. Further, some officials were concerned about the value of the outcome data since they believe that youth who are doing well are more likely to participate in the follow-up interviews, thus skewing the results. When HHS issued the proposed rule, it provided strategies states could use to conduct the follow-up component of the NYTD requirements. For example, the document recommends letting the youth know up front that the agency will be contacting them in the future; suggests keeping a "case file" that tracks any activity, such as reasons why a letter was returned; and suggests that the agency establish a toll-free phone line. Multistate evaluations: At the time of our 2004 review, ACF expected to complete the evaluations of four approaches to delivering independent living services by December 2007. However, it is unclear at this point whether that deadline will be met. As required by FCIA, these evaluations are expected to use rigorous scientific standards, such as an experimental research design that randomly assigns youth in independent living programs to different groups: one that is administered the experimental treatment and one that is not. HHS initiated this effort in 2001 with a nationwide review of potentially promising approaches to delivering independent living services. HHS contracted with a research institute to conduct a nationwide search to identify independent living programs that meet the criteria of the evaluation and to conduct 5-year evaluations of the selected programs. On the basis of the search and the established criteria, HHS selected four programs for the evaluation (see table 2). In the report issued in 2004, we made recommendations to HHS (1) to make information available to states and local areas about other federal programs that may assist youth in their transition to self-sufficiency and provide guidance on how to access services under these programs and (2) to develop a standard reporting format for state plans and progress reports and implement a uniform process regional offices can use to assess states' progress in meeting the needs of youth in foster care and those recently emancipated from care. These recommendations have not been implemented. Preparing youth to successfully transition to independence is a daunting task that requires coordinated and continuous services across many social service systems, including child welfare, health, education, and housing.
The Chafee Program has provided a single funding stream that can be used to meet service needs across these social systems. However, this funding alone is not sufficient to overcome state challenges in meeting the varied service needs of emancipating youth. The child welfare system must work with housing agencies to remove barriers faced by youth with no employment history or cosigner, and with health agencies to ensure a smooth transition between the youth and adult mental health systems. In addition, states continue to have difficulty building adequate service capacity for housing and mental health in all locales, and child welfare staff still struggle to identify the myriad public and private sector programs that exist to assist youth.

Our November 2004 report and our May 2007 testimony present recommendations we made to HHS to make information available to states and local areas about other federal programs that may assist youth in their transition to self-sufficiency. HHS did not comment on our 2004 recommendation, but disagreed with our recent recommendation to improve awareness of and access to various social services funded by the federal government. HHS stated that the recommendation was insufficient to address the need for additional services and that it incorrectly implied that local child welfare agencies were not already aware of and using such resources. We acknowledged that increasing awareness of existing federal resources is not the only action needed, but in the course of our work across the years, we continue to find that caseworkers are sometimes unaware of the full array of federal resources, such as health and housing, available in their locale, or have not coordinated with other agencies to use them. We continue to support the view that federal action, such as modifying the CFDA, would allow caseworkers and others to more easily identify services and service providers funded by federal agencies in closest proximity to the youth and families they serve.

How well the Chafee Program has worked to improve outcomes for emancipated youth across states is still unknown 8 years after the passage of FCIA, and HHS has not yet implemented its information system that is intended to meet FCIA requirements for collecting and monitoring a state's performance. Given the significant variation in the number of youth served and services provided across states, an interim system for measuring state progress would seem to be warranted. However, while HHS has an oversight process to measure outcomes of state child welfare systems as a whole, this process no longer includes measures required by FCIA. Similarly, while ACF's regional offices conduct much of the federal oversight for the Chafee Program, the oversight tools currently in place do not provide the standard information needed to measure and compare performance across states. Our 2004 report included recommendations to develop a standard reporting format for state plans and progress reports and to implement a uniform process regional offices can use to assess states' progress in meeting the needs of youth in foster care and those recently emancipated from care. These recommendations have not been implemented. HHS continues to disagree with our recommendation to develop a standard reporting format for state plans and progress reports, stating that such action would be overly prescriptive and impose an unnecessary burden on states.
However, as reflected in our 2004 report, we continue to believe that strengthening the state reporting process is needed to provide some assurance of program accountability at the state and federal levels. HHS had agreed with our recommendation to establish a uniform process regional offices can use to assess states' progress and said that in 2005, ACF would develop and provide a review protocol to be used in regional office desk reviews of states' annual progress reports. However, ACF officials reported that they have not yet implemented such a review protocol.

Mr. Chairman, this concludes my statement. I will be pleased to respond to any questions you or other members of the subcommittee may have. For further information, please contact Cornelia Ashby or Kay Brown at (202) 512-7215. Individuals making key contributions to this testimony include Lacinda Ayers and Sara L. Schibanoff.

[Appendix table: percentage of allotment and dollar amount returned to the U.S. Treasury, by state.a]

aThe total mandatory funds for this program are $140 million. However, the statute provides that a certain percentage of those funds be set aside for HHS to conduct (or fund) research, evaluation, and technical assistance.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress passed the Foster Care Independence Act of 1999 (FCIA), which doubled annual federal funds for independent living programs to $140 million. This testimony discusses (1) states' FCIA funding allocations, (2) services provided and remaining challenges, (3) state coordination of programs to deliver services, and (4) the states' and the Department of Health and Human Services' (HHS) Administration for Children and Families' (ACF) progress toward meeting program accountability requirements. This testimony is primarily based on our 2004 report on FCIA (GAO-05-25), with updated information from our 2007 testimony on state child welfare challenges (GAO-07-850T). To conduct the 2004 work, we surveyed state independent living coordinators, conducted site visits in 4 states, and reviewed states' plans and annual reports. Updated information from our 2007 testimony was taken primarily from a 2006 survey of state child welfare directors.

According to data available at the time of our 2004 report, states' funding allocations for independent living programs effectively provided a maximum of approximately $500 to $2,300 for each foster care youth who was eligible for independent living services. Funding varied because of differences in states' eligibility requirements and the funding formula used to allocate funds. Although our 2004 survey of state independent living coordinators showed that 40 states reported expanding existing independent living services to younger youth and 36 states reported serving youth older than those they had previously served, states varied in their ability to engage youth and to provide key services. About one-third of reporting states were serving less than half of their eligible foster care youth population, while an equal percentage of states were serving three-fourths or more. Our 2006 survey of state child welfare directors showed that critical gaps remain in providing services such as mental health and housing for youth transitioning to independence. Mental health barriers included differences in eligibility requirements and level of services between the youth and adult systems, and long waiting lists. Housing barriers included limited affordable housing in costly urban areas, scarce rental housing in rural areas, and problems obtaining a rental lease due to the lack of youth employment and credit history or a cosigner to guarantee payment.

Almost all states that we surveyed in 2004 reported an increase in coordination with some federal, state, and local programs, but linkages with other federal and state youth-serving programs were not always in place to increase services available across local areas. Many programs exist at the federal, state, or local level that can be used to provide or supplement independent living services, and each state reported in our survey using some of these programs to provide services. Despite these coordination efforts, some states may not make full use of the available resources. Inconsistent availability of information on the array of programs operating in each state and local area was cited as a challenge in promoting coordination in both our prior and more recent work. States and HHS have taken action to fulfill the accountability provisions of FCIA, but 8 years later, little information is available to assess program outcomes.
All states developed multiyear plans for their programs and submitted annual reports, but using these documents to assess state performance was hindered by inconsistencies between the plans and reports, an absence of goals and baseline information to measure progress, and incomplete information on outcomes for the youth served. ACF started developing an information system in 2000 to monitor state performance, but final regulations directing states to begin collecting data and tracking outcomes are still pending. ACF is also conducting evaluations of selected independent living programs, but results are not yet available.
In an effort to expedite receipt processing, the IRS conducted its first pilot project to obtain lockbox services from a commercial bank in 1984. The receipts processed were limited to tax receipts for estimated tax payments, which are typically paid by taxpayers on a quarterly basis. The bank was compensated from the interest it earned on a compensating balance (funds placed in the bank's account by FMS). Since that time, the IRS lockbox program has expanded to cover taxpayers in all states and receipts for individual income tax returns, employment tax returns, and other miscellaneous types of taxes. Most of the returns are received during the April peak processing period and the smaller peak periods during January, June, and September. Certain taxpayers who owe money and are making payments are instructed to mail returns and payments to post office boxes maintained by the lockbox banks. The lockbox sites deposit the receipts to an account with Treasury and send processed documents (tax return forms and payment vouchers), computer tapes containing taxpayer data, and unprocessable receipts to IRS Submission Processing Centers for further processing and recording in the taxpayer accounts (see fig. 1).

FMS formalized the lockbox processing arrangements in 1993 by establishing contractual agreements with commercial banks to process tax receipts on behalf of IRS. In 2002, new, but similar, agreements were established. Like the 1993 agreements, the 2002 agreements are 5-year agreements with two 1-year extension options. The current lockbox network consists of four banks, three of which operate multiple sites that support the 10 IRS Submission Processing Centers across the country. The agreements with FMS require that these banks operate their sites according to IRS's Lockbox Processing Guidelines (LPG). The LPGs provide the detailed procedures the banks are required to follow in providing lockbox services for IRS and are updated as needed. Both FMS and IRS monitor lockbox bank compliance with the agreements and the LPGs, and each lockbox site has an IRS employee who serves as a lockbox coordinator. For fiscal year 2002, IRS lockbox banks processed more than 66 million receipts, totaling about $268 billion, which accounted for approximately 13 percent of total tax receipts in dollars and 32 percent in volume.

The intent of the lockbox program is to enhance federal cash management by accelerating the deposit of tax receipts, which would increase interest float savings (or interest cost avoidance) to the government and reduce the amount Treasury would have to borrow to pay government obligations. The estimate of interest float savings resulting from IRS's use of lockbox banks has varied throughout the years. In calculating interest float savings, IRS and FMS assumed, based on a 1988 joint IRS/FMS study, that lockbox banks could process receipts and deposit the funds to a Treasury account 3 days faster than IRS. However, in a 1998 report, the Treasury Office of Inspector General (OIG) questioned the validity of this assumption and recommended that IRS acquire relevant and reliable comparative cost data on all aspects of the lockbox program to identify the most cost-effective option to use for processing and depositing tax receipts. In response to the Treasury OIG report, IRS and FMS hired a contractor to compare lockbox bank processing to IRS processing for individual tax (Form 1040) receipts.
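The float arithmetic underlying these estimates is simple: foregone interest is roughly the deposit amount times Treasury's borrowing rate times the days of acceleration. A minimal sketch with illustrative values follows; the 4 percent rate is our assumption, and only the $268 billion receipts figure comes from this report.

```python
def float_savings(amount: float, annual_rate: float, days_faster: float) -> float:
    """Interest cost avoided by depositing `amount` `days_faster` days sooner.

    Uses simple (non-compounded) daily interest, the usual convention for
    short float periods. The rate passed in is an illustrative assumption.
    """
    return amount * annual_rate * days_faster / 365.0

# Example: $268 billion in annual receipts deposited 2 days faster
# at an assumed 4 percent Treasury borrowing rate.
print(f"${float_savings(268e9, 0.04, 2):,.0f}")  # about $58.7 million per year
```

The same calculation underlies the late-deposit penalties discussed later, which the agreements tie to the amount of interest Treasury lost because of a delay.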
The contractor reported in July 1998 that lockbox banks could make funds available to the federal government an average of 2 days faster than IRS. In 1999, IRS and FMS formed a task force to study the costs and benefits of continuing to use lockbox banks for processing Form 1040 receipts. Based on its study, the IRS/FMS task force recommended that such processing remain with the lockbox banks rather than be returned to IRS for fiscal years 2001-2007.

To accomplish our work, we reviewed reports relevant to oversight and management of the lockbox program, including reports prepared by FMS, IRS, GAO, the Treasury OIG, TIGTA, and internal and external auditors from several lockbox banks. We reviewed laws, regulations, and guidance related to the cash management activities of the federal government. We also interviewed FMS and IRS officials from several headquarters divisions. As agreed with your office, we did not review the incident involving the loss and destruction of taxpayer receipts and data at the Pittsburgh lockbox because an investigation was ongoing. The following procedures were also performed for each of the objectives:

• To determine if the new contractual agreements address previously identified problems and correct provisions that could contribute to improper handling of taxpayer returns, we reviewed and analyzed the provisions in the 1993 and 2002 lockbox bank contractual agreements. We also compared the agreements to determine whether changes had been made.

• To determine the adequacy of FMS's and IRS's oversight of lockbox banks, we reviewed FMS and IRS policies, guidelines, checklists, schedules of site visits, and reports from oversight reviews. We performed site visits at two IRS Submission Processing Centers to observe IRS reviews of documentation received from the lockbox banks. At each Submission Processing Center, we interviewed relevant management and staff concerning lockbox bank oversight policies, procedures, and practices.

• To determine if lockbox banks' security and internal controls to safeguard taxpayer receipts and returns are sound and properly implemented, we observed physical security and internal controls and interviewed lockbox personnel at all nine lockbox locations during the April 2002 peak processing period and at two lockbox sites during the June 2002 peak processing period. At each site, we also reviewed lockbox bank employee personnel records for a nonrepresentative selection of permanent and temporary lockbox employees. In addition, we compared the 2001 and 2002 LPGs for changes related to safeguarding tax receipts and data, receipt processing, employee screening, and courier requirements. We also compared IRS's Internal Revenue Manual (IRM) and other directives, which are IRS's detailed policies, with the 2002 LPGs.

• To determine whether the costs and benefits of processing tax receipts through lockbox banks instead of processing them at IRS were considered, we reviewed federal guidance and economic literature on cost-benefit and cost-effectiveness analyses of federal programs and policies. In addition, we analyzed prior FMS and IRS studies and the support for the data, assumptions, and methodology used in the 1999 report to estimate the costs and benefits of processing tax receipts through lockbox banks versus processing them at IRS.

We performed our work from April 2002 through November 2002 in accordance with U.S. generally accepted government auditing standards.
We requested written comments on a draft of this report from the Secretary of the Department of the Treasury or his designee. These comments are discussed in the Agency Comments and Our Evaluation section of this report, incorporated in the report as applicable, and reprinted in appendix III. We found nothing inherent in the new 2002 lockbox bank contractual agreements or the prior agreements that would necessarily contribute to mishandling of taxpayer receipts. The agreements contain penalty provisions that can result in negative consequences if banks do not perform work that meets quality standards or do not perform work within required time frames. The consequences range from financial penalties to termination of the lockbox agreement. Although a desire to avoid negative consequences could motivate lockbox bank employees to make poor decisions in their handling of taxpayer receipts, the penalty and termination provisions are necessary to help the federal government address inadequate contractor performance on the two performance dimensions—quality and timeliness—deemed critical by IRS and FMS. FMS and IRS made some enhancements to the 2002 agreements, such as the addition of a new performance penalty and clarification of other provisions. Because TIGTA’s investigation of the incident involving the loss and destruction of tax receipts by employees at the Pittsburgh lockbox site during the 2001 April peak processing period is still ongoing, it is unclear whether any provisions in the lockbox agreements may have contributed to the mishandling incident. When the results of the investigation are known, FMS and IRS should determine whether contract provisions need to be modified or whether additional controls need to be implemented. Factors used to assess contractor performance are cost, timeliness, and quality of service provided. Contracts often contain specific standards for acceptable performance as well as provisions for rewarding or penalizing contractors according to their performance in these areas. Such provisions may inadvertently encourage contractors to focus their efforts on one area to the detriment of their performance in other areas. For example, if a contract’s provisions reward timely performance more than they reward high-quality performance, contractors may be encouraged to take shortcuts that improve timeliness but detract from quality of work. As a result, it is important that contract incentives be appropriately designed and balanced to obtain acceptable levels of performance in all relevant areas. To ensure that contract provisions are operating as intended, effective oversight of contractors is essential. The lockbox banks are paid a fixed price for each item they process, and their total compensation depends on the number of each type of receipt they process. Lockbox banks have no direct influence over the volume of receipts. The compensation paid is therefore not a factor in measuring IRS lockbox performance, and the lockbox contractual agreements contain no direct incentives, positive or negative, related to cost. Lockbox bank performance is measured on timeliness and quality factors, specifically focusing on expediting the flow of funds to Treasury and ensuring that receipts are accurately processed. Because the basis for using lockbox banks was IRS’s and FMS’s belief that the banks could process tax receipts faster than IRS could, timeliness is a key measure of lockbox bank performance. 
Except during peak processing periods, the agreements require that lockbox banks deposit receipts within 24 hours of their receipt at the lockbox bank. During peak processing periods, all receipts must be processed and ready for deposit by the assigned program completion date (PCD). According to the terms of the lockbox contractual agreements, FMS may assess banks penalties or terminate their contractual agreements if they fail to meet these deadlines. The agreements also provide that the amount of any penalty assessed for late deposit of tax receipts shall be based on the amount of interest Treasury lost because of the delay. FMS imposed financial penalties on two lockbox banks for not meeting the PCD during the April 1997 peak processing period. Meeting quality standards is another critical aspect of IRS lockbox banks’ performance. Processing errors can place unnecessary burdens on taxpayers and delay processing of tax receipts. For example, a processing error might cause a lockbox bank to withdraw more funds from a taxpayer’s account than the amount actually written on the check. If the taxpayer’s account contained inadequate funds to cover the incorrect withdrawal amount, the taxpayer’s bank could assess penalties for insufficient funds. If the error caused the bank to withdraw less than the amount owed, IRS might erroneously assess the taxpayer interest and penalties for an incomplete payment. The lockbox agreement provisions allow FMS to assess lockbox banks financial penalties or to terminate an agreement for poor performance. FMS added a new provision to the 2002 lockbox agreements that is designed to facilitate reimbursement to IRS and FMS for costs they incur due to specific failures in performance, such as costs resulting from lockbox banks’ errors in processing tax receipts. In addition, the 2002 agreements clarified certain existing penalty provisions. We found nothing inherent in the 1993 or 2002 lockbox contractual agreements that would necessarily contribute to mishandling of taxpayer receipts. Although a desire to avoid negative consequences, such as financial penalties or contract termination, could cause lockbox bank employees to make poor decisions, penalty and termination provisions are necessary to help the federal government address inadequate contractor performance. To help ensure that contractors are adhering to contract terms and to reduce the risk that lockbox banks might compromise taxpayer data, effective oversight of lockbox sites is essential. The exact cause of the 2001 incident involving the loss and destruction of taxpayer data and receipts at the Pittsburgh lockbox site has yet to be officially reported. The site had a history of performance problems for which the bank had been assessed financial and other penalties. In 1997, FMS assessed the bank that operated the Pittsburgh site more than $1.4 million in penalties for failing to meet the assigned PCD and therefore delaying availability of funds to the Treasury. In September 2000, FMS placed the site on probation because of numerous uncorrected security violations, including commingling of corporate and other government agency processing with IRS processing. FMS, with IRS’s concurrence, removed the site from probationary status 2 ½ months later, after a site review conducted during the probationary period indicated that bank management had corrected all but one of the security violations. 
A subsequent, unannounced review by FMS and IRS 3 ½ months after the site was taken off probation also found that past violations of security requirements had not recurred and that the site was, for the most part, in compliance with security requirements. Nevertheless, approximately 2 months after this review, the Pittsburgh site was found to have lost or destroyed tens of thousands of tax returns, and, as a result, FMS terminated its contractual agreement for the site. As of September 30, 2002, IRS had spent over $4 million to obtain duplicate receipts and returns from the affected taxpayers, and the federal government had lost an estimated $13.5 million in interest as a result of the incident. In October 2002, Mellon Bank agreed to pay the government $18.1 million to cover administrative costs and expenses associated with the incident. However, TIGTA is still investigating the incident to determine whether criminal charges should be filed against any of the bank employees. As of November 2002, TIGTA and the applicable U.S. Attorney’s Office were unable to discuss this investigation with us. Until the investigation is completed, we cannot determine whether the site’s interpretation of contract provisions was a contributing factor to the 2001 incident or whether provisions need to be added or revised to help prevent a similar incident from occurring. FMS and IRS oversight of lockbox bank operations is a key control for ensuring that funds collected through the lockbox banks are protected against fraud and mismanagement. The oversight functions performed by FMS and IRS include various on-site and off-site reviews to ensure compliance with LPGs and contract terms, evaluation of requests for waivers from LPG requirements and proposed compensating controls, and enforcement of penalties against banks that fail to meet LPG and contract terms. In calendar year 2002, the agencies made significant improvements to their oversight of lockbox operations, mostly in response to the Pittsburgh lockbox incident. However, we found that the oversight of lockbox banks was not fully effective in protecting the government’s interests due to (1) a lack of clear directives and documented policies and procedures for various oversight functions, (2) key oversight functions not being performed, and (3) conflicting roles and responsibilities for IRS lockbox coordinators. These issues reduce the overall effectiveness of IRS and FMS oversight of lockbox banks. Additionally, the lack of clearly defined oversight requirements increases the risk that the oversight improvements made during 2002 may not continue in the future. According to IRS officials, they are in varying stages of completing several memoranda of understanding to identify and document oversight roles and responsibilities. Until these roles and responsibilities are agreed to and documented, however, oversight weaknesses are likely to continue to exist. FMS and IRS made significant improvements to the oversight of lockbox banks in 2002 compared with prior years. These improvements include (1) enhanced monitoring of peak processing operations, (2) involvement of key IRS officials in security reviews, (3) establishment of a new office with a full-time FMS official responsible for oversight of lockbox security, and (4) establishment of a new performance penalty provision to reimburse the government for poor quality performance. However, many of the improvements have not yet been institutionalized in the form of agency policies and procedures. 
As such, there is less assurance that this increased oversight will continue in the future. Prior to 2002, FMS’s on-site presence during peak processing periods was limited to only a few days each year and occurred near the end of the peak periods to ensure that production goals were achieved. IRS’s on-site presence at lockbox banks was generally limited to a lockbox coordinator, who was present for the duration of the peak period. However, these coordinators face competing demands of ensuring that lockbox sites promptly deposit tax payments, performing quality and compliance reviews, and assisting with other processing issues, such as training lockbox staff. In reaction to the Pittsburgh incident, FMS and IRS concluded that they needed more on-site presence at lockbox banks during peak operations. In April 2002, each lockbox site had at least one FMS official, one IRS headquarters official, and the site’s designated lockbox coordinator present for the entire April peak processing period. This increased on-site presence provided IRS and FMS with more comprehensive coverage of April peak operations. During 2002, FMS and IRS also placed a heavier emphasis on monitoring lockbox sites’ daily production status. According to IRS and FMS officials, their focus historically has been on monitoring a lockbox site’s ability to meet the overall April peak processing period’s PCD. IRS and FMS found that this approach presented problems because lockbox banks tended to address production problems during the peak processing period only when meeting the PCD was questionable. The solution employed by the banks was to bring in additional temporary staff near the end of the peak period to be able to meet the PCD. FMS and IRS officials indicated that their limited on-site presence affected the agencies’ ability to detect production problems on a real-time basis. Only after the peak period did the lockbox banks bring production issues to the attention of IRS and FMS. In 2002, FMS and IRS officials focused on monitoring lockbox banks’ daily production by reviewing production reports, observing production activity on the processing floor, and reporting production issues to IRS and FMS headquarters as soon as issues arose. During April 2002 and subsequent smaller peak periods in June and September 2002, each lockbox site also submitted, on a daily basis, an “FMS Daily Status Report.” These reports noted the daily status of critical production issues, such as staffing shortages and equipment problems, that could cause delays in the timely deposit of tax receipts or could affect performance. FMS and IRS headquarters officials reviewed these reports for potential problems and contacted lockbox management and on-site agency officials for follow-up and to facilitate timely resolutions of the problems. While these changes enhanced monitoring of peak operating periods, these improvements have not yet been incorporated into agency policies. Currently, only the lockbox coordinator is required by IRS policies to be on-site during the peak processing period. Personnel from FMS and IRS headquarters are not mandated by agency policies to be at lockbox sites during this time. Additionally, IRS officials indicated that they might not have adequate headquarters staff to assist with future on-site reviews. As a result, there is less assurance that increased on-site presence of staff at lockbox sites during peak processing periods will continue. 
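Monitoring of the kind described above reduces to collecting a small structured record from each site and flagging exceptions the same day. The sketch below uses hypothetical field names, not the actual FMS Daily Status Report layout.

```python
from dataclasses import dataclass, field

@dataclass
class DailyStatusReport:
    """Hypothetical stand-in for an FMS Daily Status Report record."""
    site: str
    date: str                       # ISO date, e.g. "2002-04-12"
    receipts_processed: int
    receipts_on_hand: int
    staffing_shortage: bool = False
    equipment_problems: list[str] = field(default_factory=list)

def needs_followup(r: DailyStatusReport) -> bool:
    # Flag a site for same-day headquarters follow-up if any critical issue
    # could delay the timely deposit of tax receipts.
    return (r.staffing_shortage
            or bool(r.equipment_problems)
            or r.receipts_on_hand > r.receipts_processed)

report = DailyStatusReport("Site A", "2002-04-12", 90_000, 120_000,
                           equipment_problems=["sorter down"])
print(needs_followup(report))  # True -> contact site management that day
```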
After the 2001 Pittsburgh incident, IRS concluded that its participation in joint IRS/FMS annual unannounced security reviews should be performed by IRS staff with security expertise. Prior to 2002, personnel from IRS’s Wage and Investment Division, who have primary responsibility to manage the lockbox program, performed physical security reviews but did not have physical security expertise. During 2002, IRS’s Agency-Wide Shared Services (AWSS), which has staff with physical security expertise and performs physical security reviews at IRS Submission Processing Centers, began participating in the joint IRS/FMS unannounced security reviews. However, IRS policies only require the Wage and Investment Division to perform unannounced security reviews. AWSS’s participation in lockbox reviews is based only on an oral agreement to perform such reviews. As such, there is less assurance that AWSS will continue to perform lockbox security reviews in the future. Prior to 2002, FMS’s Financial Services Division (FSD), which had the administrative responsibility for negotiating and entering into lockbox contracts with financial institutions, also had the responsibility to oversee lockbox banks’ compliance with contract terms, such as security requirements. To effectively perform its oversight responsibilities, FMS recognized a need to establish a full-time position responsible for oversight of lockbox security. In August 2002, FMS formally established the Bank Review Office. The director and staff of this office are now responsible for FMS’s oversight of the security of federal government lockbox banks. Their responsibilities now include performing on-site reviews, following up on corrective actions to address review findings, and reviewing the adequacy of security requirements for lockbox banks. The director of this office now serves as the main FMS contact point on most oversight issues for IRS and lockbox banks. A new provision was added to the 2002 contractual agreements to assist the government in obtaining reimbursement from lockbox banks for direct costs incurred by IRS to correct performance failures on the part of lockbox banks. This provision enhances the government’s ability to penalize lockbox banks for poor quality work and be reimbursed for the additional costs IRS incurs to rework transactions erroneously processed by lockbox banks. Effective use of this provision requires additional guidelines and procedures to help management decide whether certain situations merit pursuing the penalty provision. Such guidelines and procedures should include (1) IRS’s expectations for unacceptable error rates, (2) procedures to identify and document lockbox errors, and (3) procedures to track IRS costs incurred as a result of rework. During 2002, IRS had not established guidelines and procedures for effective use of the new performance penalty provision. IRS cited three reasons why guidelines and procedures had not yet been established. First, IRS has not yet determined thresholds for unacceptable error rates. Second, IRS officials indicated that the agency had spent significant resources to document and build a legally defensible case to obtain reimbursement of costs related to the 2001 incident. Based on lessons learned from this case, IRS concluded that it needed to establish a more cost-effective means to accurately identify and document lockbox errors, as opposed to errors caused by taxpayers or IRS. 
During 2001, IRS tested a process to identify and document lockbox errors but concluded that the process was too labor-intensive and might not provide accurate and legally defensible data. Therefore, IRS is still exploring other methods to obtain these data. Finally, IRS officials explained that by law, reimbursements from lockbox banks would have to be remitted to the U.S. Treasury general fund and that they had explored legal options to keep the reimbursed funds for IRS’s own use. The reimbursement provision is a critical tool available to the government as a means to recover from lockbox banks costs incurred as a result of poor quality work and as an enforcement tool to encourage banks to implement effective controls and procedures to accurately process receipts. However, until the necessary guidelines and procedures are established, IRS and FMS cannot effectively use this oversight tool. As a result, the government may be paying for poor quality work and incurring additional costs to correct errors. Despite significant improvements, we found several instances where key oversight functions were not performed, which resulted in an increased risk of loss to the government and taxpayers. Specifically, (1) IRS and FMS did not take timely action on lockbox sites’ requests for waivers from LPG requirements, (2) IRS did not always participate in unannounced security reviews, (3) FMS did not always obtain formal responses from lockbox bank management to unannounced security reviews, and (4) IRS lockbox coordinators did not always complete reviews for peak processing periods. Additionally, the guidance used for these reviews needs to be strengthened. The LPGs set forth security and processing requirements for lockbox sites. They also allow deviations from these requirements if bank management submits a written waiver request to IRS and FMS. The bank must demonstrate its site’s legitimate inability to meet a requirement and must implement an alternate procedure or compensating control. In practice, after a bank submits a written waiver request stating the reason for its site’s inability to meet the LPG requirement and explaining its compensating control to mitigate risks of loss to the government, IRS and FMS allow the site to operate with its compensating control while they review the waiver request. However, because some waiver requests will eventually be denied after FMS and IRS conclude that the compensating controls are not adequate, some sites with inadequate controls are, in effect, allowed to operate in noncompliance with the LPGs until their waiver is officially denied. Therefore, to protect the government from losses resulting from a site’s noncompliance, FMS and IRS have a fiduciary duty to approve or disapprove waiver requests and effectively and promptly communicate final decisions. Prior to April 2002, IRS and FMS approved and disapproved waiver requests orally. To better coordinate their efforts, the agencies decided to develop a written joint process to assess waiver requests in April 2002. However, this process was never formalized in agency policies. IRS provided us a draft document of the joint waiver assessment process. According to the draft document, lockbox banks would submit a waiver request form to FMS. FMS would assess the waiver request and forward the request with FMS’s recommendations to an IRS waiver coordinator. The IRS waiver coordinator would disseminate the waiver requests to various units within IRS responsible for assessing the waiver requests. 
Once appropriate IRS officials had made their decisions and signed off on the waiver forms, the IRS waiver coordinator would forward the waiver form with IRS’s decision to FMS. FMS would notify the bank of the joint decision by returning the waiver request form with both agencies’ responses. The draft document indicates that the whole process should take about 6 days from FMS’s receipt of the waiver request form. According to our review of waiver requests made from April 16 through April 22, 2002, FMS forwarded to IRS its recommendations on most of the requests within 5 days of receiving them from lockbox banks. However, IRS took 5 months to officially sign half of the waivers. According to IRS officials, this delay resulted from a misunderstanding on the part of some IRS officials about whether IRS had to complete the waiver forms if IRS had already orally informed FMS of its decision. FMS postponed notifying the banks of a decision on their requests because it was waiting to receive a formal written decision from IRS for each request. However, 4 months after FMS received the requests for waivers and with no official decision from IRS, FMS decided to inform the lockbox sites of its unilateral disapproval of the banks’ waiver requests to mitigate the risk of loss to the government from lockbox sites not operating in compliance with the LPGs. Although most of the waiver requests were not related to critical requirements, one lockbox waiver pertained to a critical LPG security requirement. Under this requirement, temporary employees must provide photo identifications to the guards in exchange for badges allowing them access to the processing area. This control procedure was designed to validate the identity of individuals claiming to be employees before they enter the processing area. Bank management at one lockbox bank believed that its automated entry system provided adequate compensating controls against unauthorized access to tax data and receipts and that the manual verification of each employee’s identity was unnecessary. The bank therefore did not perform the procedure. FMS and IRS eventually denied the request, stating that the lockbox bank’s compensating controls did not provide sufficient protection against unauthorized entry. However, because of the breakdown in the joint waiver assessment process and the resulting delay in notifying the bank of the agencies’ decision, tax receipts and data were unnecessarily exposed to an increased risk of theft. Bank management had informally requested a waiver for the same issue in August 2001, before FMS and IRS established their formal waiver process in April 2002. IRS officials indicated that they had already orally informed the bank that its initial request for a waiver was denied and had also informed FMS during a meeting that the subsequent request submitted on the official waiver request form in April 2002 was denied. Therefore, IRS officials concluded that it was not necessary to review the April 2002 formal waiver request. FMS postponed notifying the banks of the decision because it was waiting for IRS’s decision in writing. After FMS decided to inform banks of its unilateral decisions on their formal waiver requests, it notified this site of its decision to deny the waiver request in mid-August 2002. As a result, the site operated in noncompliance with this security requirement for over 7 months, from January through August 2002. 
This period included the April 2002 peak period, when the bank operated in three shifts with as many as 300 employees per shift. IRS and FMS officials indicated that they are developing several memoranda of understanding between the two agencies to better coordinate oversight efforts, including the joint waiver assessment process. In response to a Treasury OIG recommendation, IRS and FMS began performing unannounced security reviews of lockbox bank sites. IRS’s IRM requires its Wage and Investment Division to conduct joint unannounced security reviews with FMS. However, IRS did not participate in the first three unannounced security reviews in 2002. The IRS official who would have participated in these security reviews had an extensive technical background in physical security, which would have been helpful in detecting physical security deficiencies. According to IRS officials, IRS did not participate in the reviews because the units responsible for performing various security reviews for both agencies were reorganizing during early 2002 and the agencies had failed to effectively communicate who would be the responsible parties to perform the unannounced security reviews. IRS officials indicated that they plan to include coordination of security reviews between the two agencies in a memorandum of understanding currently being developed. Effective oversight of lockbox banks requires appropriate follow-up on corrective actions taken to address deficiencies found during on-site reviews. As part of its follow-up procedures, FMS requires lockbox bank management to provide an official management response to reports on its unannounced reviews. In their response, bank managers are to indicate whether corrective actions have been taken or propose corrective actions and the dates by which the bank intends to implement them. Personnel from FMS’s FSD are responsible for obtaining management responses. However, FSD failed to obtain formal management responses for three unannounced security review reports issued in 2001 because FSD staff were preoccupied with the Pittsburgh lockbox incident and with implementing the 2002 lockbox network. Additionally, the tracking procedure in place to remind staff to obtain management responses was ineffective. As a result, FMS did not have sufficient information to assess the adequacy or timeliness of proposed corrective actions or corrective actions already taken. FMS and IRS found the same security problems identified by these reviews several months later. Although two of the three locations had closed after 2001, bank managers from one of the closed locations were transferred to manage a new lockbox location in the 2002 lockbox network. In a subsequent visit to this new location, FMS found the same security violation that it had found in 2001 at the closed site regarding the lack of a seeding program. Additionally, a subsequent visit by FMS and IRS to the location that continued to operate in 2002 revealed problems with malfunctioning perimeter door alarms that were similar to problems identified during the previous unannounced security review. The lack of effective controls to ensure that bank managers take corrective actions increases the risk that identified security weaknesses will not be corrected. According to FMS officials, the recently created Bank Review Office, whose staff have full-time responsibility for security oversight, is now responsible for following up on management responses to security reviews. 
Additionally, FMS officials indicated that they plan to create an automated tracking system to better track the status of management responses. The IRM specifically directs lockbox coordinators to perform on-site reviews of lockbox banks during peak processing periods. The scope of these on-site reviews includes assessments of the lockbox site’s compliance with critical processing and security requirements. Lockbox coordinators record the results of their reviews on forms specifically designed to show results of observations and sampling of transactions to determine the lockbox site’s compliance with specific security and processing requirements. However, for the January and April 2002 peak processing periods, lockbox coordinators did not adequately perform these reviews. Specifically, coordinators did not perform reviews for four lockbox sites for the January 2002 peak processing period. In addition, some of the information to be recorded on the forms used by the coordinators to document their reviews was not provided. For the April 2002 peak period, all the coordinators submitted the results of their reviews, but some of the reviews and the forms used to document them were only partially completed. IRS officials explained that the reviews could not be completed because several lockbox sites were new to the lockbox network during 2002 and therefore required full-time support from their lockbox coordinators. Because the reviews were not always completed, IRS may not have detected instances of noncompliance and therefore would not have been able to take immediate corrective actions. The guidelines for the reviews conducted by lockbox coordinators also need to be strengthened. To provide adequate oversight of lockboxes, lockbox coordinator reviews should include steps to assess critical controls and procedures for lockbox security and processing. In general, the lockbox coordinators’ reviews provide coverage for most of these areas. However, we found other critical controls and procedures not sufficiently covered by lockbox coordinator reviews because the guidelines and review procedures were either unclear or did not require these controls and procedures to be subject to review. These deficiencies could lead to IRS’s failure to detect significant instances of noncompliance. For example, while TIGTA found that couriers with criminal records at one lockbox location were given access to taxpayer data before their FBI fingerprint checks were completed, the lockbox coordinator’s review found the same location to be in compliance with courier requirements. We found that there is no specific step in the lockbox coordinator review procedures to determine whether all couriers given access to taxpayer data have completed favorable FBI fingerprint checks, as required by the LPGs. The review procedures only broadly ask whether the courier service is in compliance with the LPGs. Additionally, while the review procedures require coordinators to sample employee files to determine lockbox compliance with FBI fingerprinting requirements, there is no similar review step to determine lockbox compliance with FBI fingerprint checks for couriers and contractors. Lockbox coordinators we interviewed stated that during their on-site reviews, they did not verify whether couriers had completed favorable FBI fingerprint checks before they were given access to taxpayer data. 
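Log reconciliation is a similarly checkable review step. Below is a minimal sketch of reconciling a site's internal log against the log submitted to IRS, assuming a hypothetical layout that maps item IDs to dollar amounts; the actual LPG log formats are not reproduced here.

```python
def reconcile(internal_log: dict[str, float], submitted_log: dict[str, float]) -> list[str]:
    """Compare a site's internal log against the log submitted to IRS.

    Returns human-readable discrepancies. The dict-of-amounts layout is a
    hypothetical stand-in for the actual cash and candling log formats.
    """
    issues = []
    for item, amount in internal_log.items():
        if item not in submitted_log:
            issues.append(f"{item}: in internal log (${amount:,.2f}) but not submitted to IRS")
        elif submitted_log[item] != amount:
            issues.append(f"{item}: amounts differ (${amount:,.2f} vs ${submitted_log[item]:,.2f})")
    for item in submitted_log.keys() - internal_log.keys():
        issues.append(f"{item}: submitted to IRS but missing from internal log")
    return issues

internal = {"C-1001": 250.00, "C-1002": 75.00}
submitted = {"C-1001": 250.00}
print(reconcile(internal, submitted))  # flags C-1002 as a possible missing payment
```

Any nonempty result would prompt the reviewer to trace the flagged items before accepting the logs.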
As discussed later in this report, we also found problems with some lockbox sites' controls to account for cash payments received from taxpayers and items found during candling. For example, the internal logs used by some lockbox sites to record cash received and items found during candling did not reconcile with the logs they submitted to IRS, raising questions about possible missing payments. Lockbox coordinators did not detect this problem because they are not required to and do not review internal lockbox logs for cash payments and candled items.

IRS lockbox coordinators are responsible for the day-to-day administration of the lockbox program, including providing guidance and training to lockbox staff and ensuring that the lockbox banks promptly address production problems that may delay the deposit of funds to the Treasury. Lockbox coordinators are also responsible for performing on-site reviews to assess work quality and the bank's compliance with security requirements. This creates a situation in which lockbox coordinators have multiple and conflicting responsibilities. When faced with competing demands, the likelihood increases that the day-to-day pressures of administering the lockbox program will take precedence over oversight responsibilities. For example, according to IRS officials, the lockbox coordinators' failure to complete their peak period reviews resulted from competing demands. Lockbox coordinators are responsible for performing quality reviews and for assisting lockbox banks with processing issues. IRS Wage and Investment officials indicated that some of the lockbox coordinators could not complete their reviews because more pressing matters concerning lockbox operations, particularly at new lockbox sites, required most of their attention.

The close working relationship with bank management that lockbox coordinators develop while helping bank management meet processing goals could also impair coordinators' objectivity and independence when they are evaluating the bank's compliance with LPG requirements. The Treasury OIG raised this issue in 1998 when it recommended that coordinators periodically rotate out of their coordinator positions to help maintain independence. However, IRS has not yet addressed this issue, citing its need to retain experienced lockbox coordinators to assist with the implementation of the 2002 lockbox network. IRS officials indicated that there are plans to reorganize the lockbox coordinator positions to address the independence issue. Until this issue is addressed, IRS has no independent overseers with sole responsibility for monitoring lockbox bank compliance during peak processing periods.

The Comptroller General's Standards for Internal Control in the Federal Government require that access to resources and records, such as IRS receipts and taxpayer data, be limited to authorized individuals to reduce the risk of unauthorized use or loss to the government. Lockbox banks, working as financial agents of the federal government, have a responsibility to protect taxpayer receipts and data entrusted to them. Tax receipts, such as cash and checks, are highly susceptible to theft, and unauthorized use of taxpayer data could result in identity theft and financial fraud. For example, from October 1, 2000, to April 30, 2002, TIGTA initiated investigations of theft of receipts valued at almost $2 million from the IRS lockbox network.
It is therefore critical that lockbox banks implement a strong system of internal controls for the lockbox sites they operate. Prior audits by GAO and TIGTA have noted internal control weaknesses at lockbox sites. For example, in fiscal year 2000, we found that background screening for a temporary lockbox employee's criminal history was limited to a local police records check and that some employees were given access to taxpayer data and receipts before completion of their background screening. We also found that lockbox couriers were subject to less stringent standards than IRS couriers. For example, lockbox couriers were not required to travel in pairs when transporting taxpayer data. We reported that these control weaknesses could lead to thefts of taxpayer receipts and data at lockbox banks.

IRS and lockbox banks have implemented several additional safeguarding controls in response to audit findings and the 2001 incident at the Pittsburgh site. For example, IRS now requires lockbox employees to have obtained favorable results on their FBI fingerprint checks, which are nationwide checks of criminal records, before they receive access to tax data. IRS has also enhanced lockbox courier security guidelines. Nevertheless, during our recent visits to all nine lockbox locations, we found internal control deficiencies in the areas of (1) physical security, (2) processing controls, (3) courier security, and (4) employment screening. These control weaknesses, if uncorrected, could lead to significant losses to the government and taxpayers because they increase the risk of loss, theft, or mishandling of taxpayer receipts and data. Table 1 demonstrates the pervasiveness of the internal control issues found during calendar year 2002. These issues are discussed briefly below and in more detail in appendix I.

Given the large volume and assembly-line nature of tax receipt processing, lockbox operations generally occur in areas with open floor plans, where taxpayer data and receipts are easily accessible to individuals on the processing floor. Thus, the vulnerability of sensitive tax data and receipts to theft or misuse is heightened. This vulnerability underscores the need for effective controls to deter and detect unauthorized access to taxpayer data and receipts. However, during our visits to lockbox locations, we observed numerous physical security breaches. Of the nine lockbox locations we visited, we found the following:

• At four locations, perimeter doors leading into processing areas were unlocked or door alarms did not function effectively.
• At two locations, guards were not responsive to alarms.
• At one location, employee identification was not adequately verified.
• At two locations, the employment status of temporary employees was not adequately verified.
• At two locations, visitor access to and activity in processing areas were not adequately controlled.
• At six locations, guards did not closely monitor items brought into or out of the processing areas or prevent unauthorized items, such as personal belongings, in the processing area.
• At seven locations, camera surveillance needed to be improved.

These security weaknesses increase the risk of theft and mishandling of taxpayer data and receipts and reduce the possibility of the timely detection of such incidents. In addition to physical security controls, lockbox banks are required to implement processing controls to maintain accountability for and security over transactions as they are processed in the normal course of operations.
During our visits, we found deficiencies in processing controls designed to account for or protect tax data and receipts at most of the lockbox locations. Specifically, of the nine locations we visited, we found the following:

• At four locations, candling procedures were not adequate to minimize the risk of accidental destruction of tax data and receipts.
• At four locations, lockbox bank management did not reconcile candling logs to properly account for all items found during candling.
• At six locations, lockbox bank managers did not perform required reviews of the candling process or did not adequately document the results of their reviews.
• At three locations, controls were not in place to adequately safeguard and account for cash receipts.
• At seven locations, returned refund checks were not adequately protected against theft.
• At one location, timely payments were incorrectly processed as delinquent.

Inadequate accounting and safeguarding of tax payments and related vouchers or returns could lead to taxpayer burden, such as penalties and interest for failure to pay tax liabilities, if these items are accidentally destroyed, stolen, or incorrectly processed. Additionally, inadequate processing controls could result in errors not being detected promptly or additional work for IRS employees who must correct taxpayer accounts as a result of errors.

Lockbox banks employ courier services to deliver (1) mail from post offices to lockbox processing sites, (2) processed checks to depository banks, and (3) tax data and unprocessable receipts to IRS Submission Processing Centers. During peak processing periods, couriers are entrusted with transporting thousands of pieces of mail each day that contain tax data, cash, checks, and deposits worth millions of dollars. Given the susceptibility of these items to theft and loss, it is important that they be carefully safeguarded while in transit to and from lockbox locations. However, our review and reviews conducted by TIGTA and IRS found several problems with courier security during calendar year 2002. These problems relate to failures to comply with courier security requirements as well as deficiencies in the current requirements. Specifically, at the nine locations, we and other reviewers found the following:

• At one location, couriers with criminal records were given access to tax data before bank management received the results of their FBI fingerprint checks.
• At two locations, couriers were not properly identified.
• At two locations, courier vehicles containing tax data and receipts were not locked.
• At all locations, background screening requirements for lockbox couriers were less stringent than screening requirements for IRS couriers.
• At all locations, courier requirements did not prohibit unauthorized individuals in courier vehicles or require lockbox staff to inspect courier vehicles for unauthorized passengers.

Until these courier security weaknesses are addressed, tax data and receipts in transit to and from lockbox locations are exposed to higher risks of theft and loss. Because lockbox employees are entrusted with handling sensitive taxpayer information and billions of dollars in receipts annually, ensuring worker integrity through a carefully managed recruiting and hiring process is an area that demands special attention from IRS, FMS, and lockbox management.
However, during our review and those performed by TIGTA and IRS, we found that lockbox banks did not always comply with the FBI screening requirements and that further refinements to background investigation requirements for lockbox employees are needed. Specifically, for the nine locations, we and other reviewers found the following:

- At five locations, employees were given access to taxpayer data and receipts before bank management received results of their FBI fingerprint checks.
- At all locations, requirements for background investigations of permanent lockbox employees were unclear and resulted in variations in the scope and documentation of the background investigations conducted on them.
- At all locations, FBI fingerprint checks for lockbox employees who are not U.S. citizens but have lawful permanent residence status may not be adequate to disclose criminal histories for individuals who have only recently established residence in the United States.

These weaknesses increase the risk of theft of tax data and receipts by individuals who may be unsuitable to work at IRS lockbox sites.

IRS and FMS have not performed a comprehensive study evaluating the full range of costs and benefits of having IRS, rather than the lockbox banks, process all of the types of tax receipts the banks currently handle. Over the years, several studies have been performed evaluating the degree of interest float savings resulting from the use of lockbox banks. IRS and FMS jointly performed the most recent study in 1999 in response to a Treasury OIG recommendation. In the 1999 joint study, IRS and FMS considered the costs and benefits to the federal government of using lockbox banks rather than IRS to process Form 1040 tax receipts more quickly. Having adopted this perspective in accordance with Treasury regulations, IRS and FMS did not consider some costs, such as opportunity costs: the forgone returns from alternative uses of the money spent to achieve speedier deposits. Forgoing speedier deposit of tax receipts and using the funds elsewhere could, according to recent IRS data, yield greater financial benefits to the government, depending on the study assumptions used. In effect, IRS and FMS did not consider the implications of the alternatives available to IRS. In addition, the study did not clearly define the type of analysis done. Differing types of analysis would require consideration of differing costs and benefits and could have led to decisions different from those made on the basis of the 1999 study. Due to changes in IRS, the lockbox bank network, and other factors, the 1999 study will not be useful to support IRS and FMS officials' future decision about whether to continue the lockbox arrangement when the current agreements expire in 2007. IRS and FMS officials initially stated that they had not planned to conduct a new study before the lockbox agreements expire in 2007, but that a new study might be appropriate given the many changes and the passage of time since 1999. In keeping with instructions in the Treasury Financial Manual (TFM), the purpose of the IRS/FMS study was to determine whether using lockbox banks or IRS to process most individual income tax receipts would minimize the total cost to the federal government. The TFM defines total cost to include agency costs, the cost of purchased services, and any loss of interest earnings to the government due to delays in depositing receipts. The study presented three scenarios (described in app. II) comparing estimated lockbox bank and IRS processing costs for fiscal years 2001 through 2007 based on different assumptions.
The study used scenario I, the most conservative scenario, to support the decision to continue using lockbox banks. This scenario showed that IRS could spend about $56 million to process tax receipts internally or could join with FMS to spend a total of $144.9 million to have lockbox banks process tax receipts more quickly. The study concluded that lockbox processing would be more costly but would save the government $100.5 million in interest that otherwise would be lost to slower deposits if IRS processed the receipts. These interest savings more than offset the additional processing costs, producing net savings to the federal government of $11.6 million over 7 years from using lockbox banks rather than IRS to process tax receipts. Scenario I is used throughout this report in the cost comparisons shown, unless otherwise noted. Table 2 shows these cost comparisons.

However, IRS and FMS did not consider various costs in these estimates. Most notably, they did not consider opportunity costs related to alternative uses of the IRS funds spent to accelerate tax deposits. Each year, the Congress approves a budget for an agency and provides discretion, within certain constraints, on agency spending. Given resource constraints, IRS must choose how to spend its fixed budget effectively. IRS and FMS decided that it would be worth spending $4.4 million more out of IRS's budget to use lockbox banks to process tax receipts, compared to IRS's slower process, because net savings to the government would reach $11.6 million over the 7 years. IRS and FMS did not consider whether greater financial benefits could accrue to the government if IRS processed receipts and used the estimated extra $4.4 million from its budget to generate higher revenues through other programs or activities. Recent estimates by IRS's Research Division have pointed to other activities, such as tax enforcement, in which spending the extra $4.4 million would have generated an estimated $44 million to well over $100 million in additional revenue. For example, in some enforcement programs for individual taxpayers alone, the ratio of estimated marginal tax revenue gained to additional spending was 13:1 for pursuing known tax debts through phone calls, 32:1 for identifying unreported income by computer matching tax returns and information returns, such as Forms W-2 and 1099, and 11:1 for auditing tax returns through correspondence. Our reference to these alternative uses and ratios is for illustrative purposes. We did not analyze the basis for IRS's estimates that produced these ratios. However, we have found over the years that IRS has had difficulty readily accumulating the costs of various activities because it lacks a cost accounting system. Despite these caveats, if these estimated ratios and the scenario I estimates are approximately accurate, IRS and FMS might have made a different decision by considering opportunity costs.

Although opportunity costs may be the most significant costs not considered, the study also excluded certain direct IRS and FMS costs, as discussed below.

IRS and FMS costs to oversee lockbox bank processing: As discussed earlier in this report, these agencies use staff to oversee and review operations of lockbox banks.

Costs to reduce the risk in processing tax receipts: This risk became evident during the 2001 filing season, with the incident at the Pittsburgh lockbox affecting about 78,000 taxpayer receipts.
As of September 30, 2002, the government had lost an estimated $13.5 million in interest from missing tax receipts, and IRS had spent about $4.3 million to resolve problems with taxpayer accounts. In October 2002, Mellon Bank agreed to pay the government $18.1 million to cover administrative costs and expenses associated with the incident; however, these costs continue to grow as IRS is still resolving some issues.

Costs to process different types of receipts: The study considered only Form 1040 tax receipts, in response to a Treasury OIG report that questioned having lockbox banks rather than IRS process such receipts. Lockbox banks are also paid to process other types of tax receipts related to more than 10 other tax forms, such as Forms 1041 and 941. It is not known whether these costs would be significant enough to change the study's conclusions. It is also not known to what extent such costs would be offset by additional direct costs to IRS associated with returning the receipt processing function to IRS.

IRS and FMS characterized the study at various times as a cost-benefit analysis and as a cost-effectiveness analysis. These types of studies would include costs and benefits different from those included in the 1999 study. If IRS and FMS had done a cost-benefit or cost-effectiveness study, the resulting conclusions may have differed. Being clear about the type of analysis being done, and about the limitations and uses of the related results, helps decision makers interpret and use those results. The 1999 study was neither a cost-benefit nor a cost-effectiveness analysis in economic terms. Cost-benefit and cost-effectiveness analyses are two tools commonly used to determine whether government investments or programs can be justified on economic principles. These tools also help to identify the best alternative from a range of competing investment alternatives. Economists commonly agree that, when either of these two analyses is used to evaluate federal programs that affect private citizens or other entities, the analysis should include costs and benefits to individuals and entities outside of the federal government. The IRS/FMS study was not a cost-benefit analysis because it did not consider costs or benefits to individuals or entities outside the federal government, such as taxpayers. A cost-benefit analysis would consider costs and benefits beyond the government and would address whether the benefits gained by the federal government (on behalf of society as a whole) outweigh the costs imposed on certain members of society, such as taxpayers. The lockbox program affects taxpayers and their banks by reducing their interest float benefits through quicker depositing of their tax payments. The additional interest gained by the federal government through accelerated tax deposits comes with a similar loss of interest to taxpayers who mail in payments; the acceleration of deposits largely shifts who benefits. This shifting of interest benefits may have some value to society in terms of a more equitable sharing of the costs of government; however, it is difficult to put a dollar value on improved equity, and that value is not necessarily equal to the dollar amount of interest transferred from taxpayers and their banks to the government. Because these interest gains by the federal government are what made the use of lockbox banks appear beneficial, a different valuation of these gains could have resulted in a different decision about whether to contract with the banks.
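To make the quantitative reasoning above easier to trace, the scenario I figures and the illustrative enforcement ratios can be restated compactly. The interest float expression at the end, including the rate $i$, is our own simplification for illustration and is not a formula from the study:

\[
\$144.9\text{M} - \$56.0\text{M} = \$88.9\text{M extra processing cost}, \qquad \$100.5\text{M interest retained} - \$88.9\text{M} = \$11.6\text{M net savings}
\]

\[
\text{forgone enforcement revenue} \approx \$4.4\text{M} \times 11 \approx \$48\text{M} \quad\text{up to}\quad \$4.4\text{M} \times 32 \approx \$141\text{M}
\]

\[
\text{interest float} \approx P \times \frac{d}{365} \times i
\]

where $P$ is the dollar volume of receipts deposited, $d$ is the days of deposit acceleration (1.384 under scenario I, as discussed below), and $i$ is an assumed annual interest rate.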
The study also is not a cost-effectiveness analysis because it did not compare program alternatives that had the same objectives. Instead, the alternatives of using lockbox banks or IRS staff to deposit tax receipts assumed that lockbox banks would deposit receipts faster than IRS. To determine the cost-effectiveness of the lockbox program, the analysts would have had to compare the costs of that program to the costs of one or more IRS alternatives that would achieve the same speed of depositing. It is not clear that a cost-effectiveness study comparing like processing speeds would have yielded the same result as the 1999 study. The specific type of study that should be done to support a decision about whether to accelerate the deposit of tax receipts through lockbox banks is a matter of judgment. However, because the type of analysis chosen can affect the study results, it is important that study leaders clearly define the type of analysis to be undertaken and why.

Since 1999, many changes have occurred at IRS and in the lockbox bank network that could affect processing and future cost comparisons. For example, IRS began moving to a new organizational structure in October 2000, which has changed where and how IRS processes certain types of tax returns and receipts. In addition, the network of lockbox banks (in terms of how many and which banks are involved) has changed. Starting in 2002, processing also included new security requirements, such as FBI fingerprint checks for each employee or contractor, to reduce the risk of theft. Finally, changes in the 1999 study assumptions would be likely by 2007. Such assumptions involve the number of days of interest float; the number of tax receipts and the number of Forms 1040 filed electronically; IRS labor, computer, and space costs; lockbox bank charges; and interest rates. For example, IRS and FMS assumed under scenario I, based on a 1998 study by a contractor, that lockbox banks would process tax receipts 1.384 days faster than IRS. However, IRS and FMS cannot be certain that any advantage the lockbox banks might have had in 1999 still exists, given the changes to IRS's structure and the lockbox bank network. IRS and FMS officials stated that they had not planned to conduct a new study before the lockbox agreements expire in 2007. However, they have indicated that such a study may be warranted given the many changes and the passage of time since 1999.

Approximately $268 billion in tax receipts collected by IRS in fiscal year 2002 was processed through lockbox banks. Given the significance of the tax receipts processed by lockbox banks, effective oversight and sound internal controls at lockbox sites are critical to protect taxpayer data and receipts. The loss and destruction of tax receipts at the Pittsburgh lockbox site highlighted the need for increased scrutiny and oversight of these banks. Our review of the 1993 and 2002 lockbox contractual agreements revealed nothing inherent in their provisions that would necessarily encourage lockbox employees to mishandle taxpayer receipts. It is possible that, in an effort to avoid sanctions allowed under the agreements, such as financial penalties or contract termination, lockbox bank employees might make poor decisions about handling taxpayer receipts; however, these are important provisions designed to help the government address inadequate contractor performance.
While FMS and IRS significantly improved their oversight of lockbox banks during 2002, oversight and internal control deficiencies impeded the effectiveness of this oversight in minimizing the risk to the federal government and taxpayers. These deficiencies need to be addressed to provide increased assurance that taxpayer data and receipts are properly safeguarded. IRS's and FMS's oversight of lockbox banks has not been fully effective primarily because their oversight roles and responsibilities have not been sufficiently defined and documented. Additionally, numerous internal control weaknesses need to be corrected and certain provisions of the lockbox processing guidelines need to be revised. Until these oversight and internal control deficiencies are addressed, the security of taxpayer data and receipts may be compromised.

To decrease the likelihood that further incidents involving the loss and destruction of taxpayer receipts and data will occur, we recommend that the Commissioner of FMS and the Acting Commissioner of IRS thoroughly review the results of TIGTA's investigation of the 2001 incident at the Pittsburgh lockbox site when it is completed and, if the results warrant, implement additional controls and modify the lockbox contractual agreements as appropriate.

To improve the effectiveness of government oversight of lockbox banks, we recommend that the Commissioner of FMS and the Acting Commissioner of IRS

- document IRS's and FMS's oversight roles and responsibilities in agency policy and procedure manuals and determine the appropriate level of IRS and FMS oversight of lockbox sites throughout the year, particularly during peak processing periods;
- establish and document guidelines and procedures in IRS and FMS policy and procedure manuals for implementing the new penalty provision for lockbox banks to reimburse the government for direct costs incurred in correcting errors made by lockbox banks;
- finalize and document the recently developed waiver process in IRS and FMS policy and procedure manuals and ensure that decisions on requests for waivers are formally and promptly communicated to lockbox management; and
- establish and document a process in IRS and FMS policy and procedure manuals to ensure that lockbox bank management formally responds to IRS and FMS oversight findings and recommendations promptly and that corrective actions taken by lockbox bank management are appropriate.
To improve the effectiveness of government oversight of lockbox banks, we also recommend that the Acting Commissioner of IRS

- establish and document a process in IRS policy and procedure manuals to ensure that IRS officials with the appropriate levels of expertise continue to participate in announced and unannounced security reviews of lockbox banks;
- ensure that on-site compliance reviews are completed and their results promptly submitted to IRS's National Office;
- revise the guidance used for compliance reviews so it requires reviewers to (1) determine whether lockbox contractors, such as couriers, have completed and obtained favorable results on FBI fingerprint checks and (2) obtain and review all relevant logs for cash payments and candled items to ensure that all payments are accounted for; and
- assign individuals, other than the lockbox coordinators, responsibility for completing on-site performance reviews.

To improve internal controls at lockbox banks, we recommend that the Commissioner of FMS and the Acting Commissioner of IRS

require that internal control deficiencies are corrected by lockbox bank management and that IRS and FMS take steps through ongoing monitoring to ensure that the following LPG requirements are routinely adhered to:

- perimeter doors are locked and alarms on perimeter doors are functioning,
- guards are responsive to alarms,
- employees' identity and employment status are verified prior to granting access to the processing floor,
- visitor access to and activity in the processing area are adequately controlled,
- employee access and items brought into and out of processing areas are closely monitored by guards,
- surveillance cameras and monitors are installed in ways that allow for effective, real-time monitoring of lockbox operations,
- envelopes are properly candled,
- lockbox bank management perform and adequately document reviews of the candling process,
- returned refund checks are restrictively endorsed immediately upon extraction,
- lockbox couriers are properly identified prior to granting them access to taxpayer data and receipts, and
- employees have received favorable results on fingerprint checks before they are granted access to taxpayer data and receipts; and

revise the lockbox processing guidelines to require that

- before lockbox bank couriers receive access to taxpayer data and receipts, they undergo and receive favorable results on background investigations that are deemed appropriate by IRS and are consistent across lockbox banks,
- before permanent lockbox bank employees receive access to taxpayer data and receipts, they undergo and receive favorable results on background investigations that are deemed appropriate by IRS and are consistent across lockbox banks,
- lockbox bank guards inspect courier vehicles for unauthorized passengers and unlocked doors,
- candling procedures for the various types of extraction methods be specified,
- during candling, lockbox bank employees record which machines and which extraction clerks missed items,
- lockbox bank management reconcile items found during candling to the candling logs,
- lockbox bank management reconcile cash payments to internal cash logs and the cash logs they provide to IRS, and
- lockbox employees immediately seek processing guidance from the lockbox coordinator if envelopes with timely postmark dates are received after the postmark review period has ended.
Because IRS and FMS must decide before 2007 whether to continue using lockbox banks to process tax receipts or to return that function to IRS, we recommend to the Secretary of the Treasury that a study be done in time (1) for its findings to be considered in the decision-making process and (2) to make any improvements to lockbox processing that the study indicates are necessary or to return the processing to IRS. Regardless of the type of analysis chosen, we recommend that the Secretary of the Treasury

- clearly define the type of analysis being done and why, and follow through to identify and analyze costs and benefits relevant to the type of analysis;
- consider the opportunity costs associated with the proposed investment in using lockbox banks to accelerate the deposit of tax receipts; and
- include the direct costs associated with oversight, risk reduction, and non-Form 1040 tax receipts.

Treasury's response to a draft of this report was jointly prepared by IRS and FMS. In responding to the report, IRS and FMS agreed with our findings and recommendations and stated that they have initiated or plan to initiate actions to implement our recommendations. IRS and FMS agreed to continually monitor lockbox banks' adherence to internal controls and to modify the LPGs to improve consistency standards and clarify instructions. IRS and FMS also agreed to complete an analysis of lockbox processing prior to the expiration of the current lockbox agreements to determine how best to satisfy IRS's remittance processing needs. In their response, IRS and FMS indicated many actions they have taken since October 2001 to improve lockbox operations. We identified many of these improvements during our review and documented them in our report, such as the establishment of the Bank Review Office within FMS and the development of memoranda of understanding concerning oversight roles and responsibilities. These actions and agreements will need to be promptly completed and thoroughly documented to satisfactorily address many of the issues raised in this report. The complete text of Treasury's response is reprinted in appendix III.

As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations; Senate Committee on Governmental Affairs; Senate Committee on the Budget; Subcommittee on Treasury, General Government, and Civil Service, Senate Committee on Appropriations; Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; House Committee on Appropriations; House Committee on Ways and Means; House Committee on Government Reform; House Committee on the Budget; Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform; and Subcommittee on Oversight, House Committee on Ways and Means. We will also provide copies to the Chairman and Vice-Chairman of the Joint Committee on Taxation, the Secretary of the Treasury, the Commissioner of FMS, the Acting Commissioner of IRS, the Director of the Office of Management and Budget, the Chairman of the IRS Oversight Board, and other interested parties. Copies will be made available to others upon request. In addition, the report will be made available to others at no charge on GAO's Web site at http://www.gao.gov.
If you have any questions about this report, please contact Steve Sebastian at (202) 512-3406 ([email protected]) or Mike Brostek at (202) 512-9110 ([email protected]). Additional key contributors to this assignment are listed in appendix IV.

During calendar year 2002, we visited all nine lockbox locations to review their internal controls designed to protect taxpayer data and receipts. TIGTA auditors and IRS reviewers also performed reviews of lockbox controls in 2002. Below are the specific internal control issues with (1) physical security, (2) processing controls, (3) courier security, and (4) employment screening that we and other reviewers found at lockbox sites.

The Lockbox Processing Guidelines (LPG) establish physical security and other processing requirements. Lockbox banks are required by their agreements with FMS to abide by these guidelines. However, during our visits to lockbox locations, we observed numerous physical security breaches in violation of the LPGs. We also identified areas where the LPGs could be strengthened. These matters are discussed in the following sections.

To detect attempted breaches into secured space, lockbox guidelines require all perimeter doors leading into IRS lockbox space to be equipped with alarms. The guidelines also require guards to ensure that such doors are locked. However, at four lockbox locations, we noted problems with perimeter door security. At one location, we found a perimeter door unlocked. At another location, a perimeter door was not equipped with an audible alarm during operating hours. Bank management did not think an audible alarm was necessary because an additional interior door that was locked during the day controlled access to the processing area. However, we observed that this interior door was propped open during our visit. At this location, we also found that the alarm for another door was barely audible over the noise from the production floor and ceased immediately when the door was closed, limiting the opportunity for security personnel to determine which door had opened and to investigate any possible unauthorized entrance or exit. At a third location, the alarms for one set of doors were not turned on during operating hours, and the alarm for another perimeter door failed to activate because, according to a bank official, the alarm had not been properly set. At a fourth location, the door alarms were not audible at the guard post because the guards had turned down the volume. These security weaknesses could result in unauthorized entry to and exit from the lockbox processing areas, increasing the risk of theft of tax data and receipts.

Door alarms serve to alert lockbox staff to a possible breach of security. To be an effective physical security control, alarms require a quick response from the security force. However, at two of the lockbox locations where we found problems with perimeter door security, we also noted that security guards were not responsive to alarms. For example, at one location, we tested the alarm twice. On the first test, no guard responded to the alarm. On the second test, it took 1 minute after the door was opened before we saw a guard approaching the door. Bank managers believed that their guards should have responded faster. At another location, the perimeter door alarm sounded twice, and both times the guards did not respond. According to the bank manager, the guards' post orders did not instruct guards to respond to alarms.
He added that it was difficult for the guards to hear the alarms from the guard station. The presence of security guards serves to detect unusual activities and to deter crime. An ineffective security force may not only limit the overall effectiveness of a security system but may also invite security breaches by individuals who decide to take advantage of this weakness.

Lockbox locations are set up with a main entrance where guards can observe and control the traffic into and out of the processing area. The LPGs require that temporary employees provide photo identification to the guards in exchange for a badge allowing access to the processing floor. This ensures that the identity of temporary employees is validated before they enter the processing area. We found that at one location, guards did not routinely verify temporary employee identification when the employees entered the processing area. Bank management believed that manual, daily verification of temporary employee identification was not necessary because the location's automated entry system (AES) provided sufficient controls to limit access to authorized individuals. The AES allowed entry into the building and processing floor to individuals with AES cards, which control the door and turnstile locks. AES cards are issued to temporary employees after the guards have verified the employees' identities with valid photo identification cards on their first day of work. Once these employees are issued AES cards, guards no longer verify the employees' identification before they enter the building and processing floor. Temporary employees wear handwritten name tags without photo identification, which cannot be used to easily verify their identity. As a result, if an unauthorized individual obtains an AES card, the lack of routine verification of employee identification increases the likelihood that the individual could gain unauthorized access to the building and the processing floor, thereby increasing the risk of theft of tax data and receipts.

Effectively limiting access at lockbox locations to authorized individuals requires controls that not only verify the identity of employees but also determine whether individuals who present themselves as employees are currently authorized to have access to the site. We found controls over verification of employment status to be ineffective at two locations. At one location, the two-step manual process used to identify temporary employees and check their current employment status before issuing access badges could have allowed unauthorized individuals to enter the processing area. The first step required temporary employees to obtain name tags from one station. Staff at this station checked the employees' identification cards against a roster of current employees and issued them handwritten name tags. The second step required temporary employees to obtain access badges from the guard station. The guards issued access badges to anyone with a valid photo identification and a handwritten name tag, assuming that those with name tags were current employees. As a result, an unauthorized individual could have circumvented these controls and gained access to the processing floor by making a name tag and presenting it to the guards.
At another location, guards did not compare temporary employees' identification cards to the temporary agency's list of current employees until after the employees had been given access to the processing area. This increases the risk that unauthorized individuals could gain access to taxpayer data and receipts without being promptly detected.

Anyone entering the lockbox processing area must wear an identification badge. In addition, individuals who have not had an FBI fingerprint check but require access to the processing floor must be escorted by a guard. However, at one lockbox location, we observed a copy machine repairman with no identification badge who was unescorted in the processing area. The guards indicated that they were unaware whether the contractor had received an FBI fingerprint check, and therefore one of the guards had escorted the repairman into the processing area. However, during our observation, the guard was across the room, too far away to effectively observe what the repairman was doing. We also observed that none of the employees near the repairman challenged his presence. Bank managers later showed us proof that the repairman had successfully completed an FBI fingerprint check and explained that the guards should have given him a visitor badge but did not need to escort him. Although in this case the contractor turned out to have an adequate security clearance, our observation indicated a need for guards to be better trained on procedures for granting access to contractors and for guards and employees to be more alert to activities on the processing floor.

We also found malfunctioning AESs designed to control entrance into and exit from processing areas. Two lockbox locations use an AES to control access to the processing area. According to bank management at these two locations, the AES must register an exit for a specific badge before that badge can be used again to enter the processing area. In addition, the AES should not allow an individual to exit without a registered entry. In other words, for anyone to use an access badge to exit the processing area through the controlled access points, the system must first have a record of that badge having been used to enter; otherwise, exit from the area should be denied. However, due to a programming error, we found that the AES did not function as intended. Specifically, we found that a single visitor badge could be used repeatedly by different individuals, one after another, to gain access to the processing area, because the AES did not require a registered exit between registered entries. Moreover, we found that the badges allowed individuals to exit the processing area even though the AES had not recorded their entrance. This AES deficiency compromised the lockbox banks' ability to control and monitor visitors' entrance into and exit from the processing floors. At one location, managers corrected the error before we left; at the other location, managers agreed to correct the error. As a result of these weaknesses, tax receipts and data are exposed to an increased risk of theft from visitors who are not adequately monitored. The sketch below illustrates the entry-and-exit rule the AES was supposed to enforce.
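For readers who find the rule easier to follow in code, the following is a minimal sketch of the anti-passback check described above. The class and method names are our own illustration and do not correspond to any particular vendor's system:

```python
class AccessSystem:
    """Minimal anti-passback model: a badge must register an exit
    before it can be used to enter again, and an exit is honored
    only if the system previously recorded a matching entry."""

    def __init__(self):
        self.inside = set()  # badges currently registered as inside

    def enter(self, badge_id: str) -> bool:
        if badge_id in self.inside:
            return False  # badge already inside: deny re-entry (anti-passback)
        self.inside.add(badge_id)
        return True

    def exit(self, badge_id: str) -> bool:
        if badge_id not in self.inside:
            return False  # no registered entry: deny exit and flag for review
        self.inside.remove(badge_id)
        return True

# The programming error described above let one visitor badge admit
# several people in a row. Under the rule as intended, the second
# entry fails until an exit is registered:
aes = AccessSystem()
assert aes.enter("VISITOR-1") is True
assert aes.enter("VISITOR-1") is False  # denied until an exit is registered
assert aes.exit("VISITOR-1") is True
assert aes.exit("VISITOR-1") is False   # exit without a recorded entry is denied
```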
The LPGs prohibit individuals from bringing personal belongings, such as purses and shopping bags, into processing areas. The guidelines also prohibit individuals from wearing baggy clothing or bulky outerwear inside processing areas. Guards stationed at the main entrance of processing areas enforce this rule. However, at five lockbox locations, we found that the guards did not effectively monitor individuals entering the processing floors to enforce this requirement. At three of these locations, we observed employees bring personal belongings or wear bulky outerwear inside the processing area. In one instance, a guard brought a purse inside the processing area. At four of these locations, we were able to bring personal belongings, such as purses, onto the processing floor without being challenged by the guards. The guards we interviewed told us that they either had failed to observe the personal belongings brought into the processing area or did not know the requirements of the lockbox guidelines concerning personal belongings.

Guards are also required to question individuals who attempt to remove property from the lockbox locations. However, at one location, the guards failed to inspect papers we carried out of the processing area. At another location, we found that on the day of our visit there were no guards present to observe employees leaving the processing area because bank management had not informed the guards that several employees would be working beyond their normal work hours. The guards finished their shift and left before the employees were dismissed. As a result of these security breaches, individuals could have used personal belongings and bulky clothing to conceal and remove tax data and receipts from lockbox locations undetected.

The LPGs require surveillance cameras to be installed at lockbox locations and security guards to monitor the cameras to observe critical areas, such as the loading docks, secure storage areas, mailroom, and extraction area. However, we found that the camera surveillance at seven of the lockbox locations we visited could be improved. At two locations, the cameras were stationary and lacked the zoom capability needed to effectively monitor critical areas. At another three locations, the camera monitors used to survey activities on the processing floor were located in the managers' offices; however, because of other duties, the managers were frequently on the processing floor and were not able to observe the monitors. FMS visited one of these three locations in early 2002 to perform an unannounced security review and also reported this as a finding. Additionally, at another of these three locations, there were no surveillance cameras in the former administrative offices located within the processing area. Lockbox managers had recently vacated this area and did not consider installing surveillance cameras because no processing activity occurred there. However, this area housed unused file cabinets and desks with drawers where tax data and checks could be hidden. One of the offices was also used by a computer contractor while on site. At the two other locations, we found that the monitors used to display the images from the surveillance cameras were ineffective. Specifically, the monitors displayed up to 16 images on one screen, making each image too small to effectively monitor activities or detect wrongdoing. Additionally, because these monitors were visible to employees and visitors, anyone could see that the surveillance was not capable of effectively monitoring activities on the processing floor, undermining its value as a deterrent. The displayed images would better deter criminal activity if they were large enough to be clearly visible.
As a result of these weaknesses, lockbox camera surveillance was not capable of consistently and effectively detecting unusual activity or unsafe practices and providing early warning of possible security compromises.

In addition to physical security controls, the lockbox guidelines also provide requirements for processing controls to maintain accountability for, and security over, transactions as they are processed through the normal course of operations. During our visits to lockbox locations, we found deficiencies in processing controls designed to account for or protect tax data and receipts at several of the lockbox sites. Specifically, taxpayer information and receipts particularly vulnerable to theft, such as cash, were not carefully processed and safeguarded. Moreover, the records used to account for cash payments and for items found during candling, a process to detect overlooked items remaining in envelopes, were inadequate to allow lockbox managers to easily and promptly detect lost or stolen items.

To prevent the accidental destruction of taxpayer data and receipts, lockbox guidelines require envelopes to undergo a candling process. The LPGs also require lockbox site managers to periodically review the effectiveness of their site's candling procedures. During our review of lockbox operations, we found that at some lockbox locations (1) deficiencies in the candling processes could lead to the accidental destruction of tax data and receipts, (2) accounting for found items was insufficient to detect missing checks within a reasonable time, and (3) management review of candling processes was either lacking or not clearly documented.

Inadequate Candling Procedures Could Lead to Accidental Destruction of Tax Data and Receipts

Lockbox staff open envelopes manually by hand or with the assistance of a mail extraction machine. Some mail extraction machines slit envelopes on one side, allow employees to extract their contents, and have the capability to electronically detect overlooked items remaining in the envelopes. More advanced machines slit envelopes on three sides and extract the contents. The method used to open the envelopes determines how the envelopes should be candled. The LPGs provide guidance on candling procedures and require all envelopes to be candled twice before destruction. Envelopes opened with the assistance of a mail extraction machine are considered to have gone through the first candling. After processing, employees are required to review each envelope against a source of light, such as a candling table, to determine whether any contents remain in the envelope. This is considered the second candling. For manually opened envelopes, those that are slit on three sides and sufficiently flattened meet the candling requirements without further light source viewing. Envelopes that are manually slit on only one side are reviewed twice on the candling table. These rules are restated compactly in the sketch below.
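The candling rules amount to a small decision table. The following sketch is our own restatement; the enum and function names are illustrative and are not taken from the LPGs:

```python
from enum import Enum

class OpenMethod(Enum):
    MACHINE_ONE_SIDE = "machine slits one side and detects leftover items"
    MACHINE_THREE_SIDE = "machine slits three sides and extracts contents"
    MANUAL_ONE_SIDE = "opened by hand, slit on one side"
    MANUAL_THREE_SIDE = "opened by hand, slit on three sides and flattened"

def table_candlings_required(method: OpenMethod) -> int:
    """Number of passes over a light source (candling table) still
    required before an envelope may be destroyed, per the LPG rules
    as summarized in this report."""
    if method == OpenMethod.MACHINE_ONE_SIDE:
        return 1  # the machine's electronic detection counts as the first candling
    if method == OpenMethod.MANUAL_ONE_SIDE:
        return 2  # both candlings happen on the candling table
    if method == OpenMethod.MANUAL_THREE_SIDE:
        return 0  # slitting on three sides and flattening satisfies the rules
    # Three-side extraction machines are not addressed by the guidelines;
    # requiring a second candling is the conservative reading discussed below.
    return 1

assert table_candlings_required(OpenMethod.MANUAL_ONE_SIDE) == 2
```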
During our visits to lockbox locations, we found that at four locations envelopes were not sufficiently candled to prevent the accidental destruction of tax data and receipts because of malfunctioning machines, careless candling practices, and ineffective candling guidelines. At one location, we found two mail extraction machines that malfunctioned and failed to detect checks remaining in the envelopes. Additionally, some envelopes were candled only once because employees often used the mail extraction machines as desks. Employees manually opened the envelopes, completely bypassing the first candling step that the machines were to perform. These envelopes were then candled only once, on the candling table. We also observed that employees were inattentive when candling envelopes on the candling table, allowing envelopes to overlap and making it difficult to fully illuminate each envelope or all parts of an envelope. At a second location, envelopes that were manually opened were not slit on three sides or candled twice. Two other locations did not properly candle all envelopes because the candling requirements in the LPGs do not specify procedures to be used with the more advanced extraction machines that slit envelopes on three sides and extract the contents. Management at these locations believed that because the envelopes were opened on three sides, they met the candling guidelines for manually opened envelopes. Therefore, they concluded that no further candling was required. However, an IRS official subsequently explained that envelopes opened by these types of machines should be subject to a second candling until IRS performs a study to determine their effectiveness. The fact that we found a $3,300 check that had not been detected by one of these advanced machines at another site indicates that they, too, can malfunction and result in the destruction of taxpayer data and receipts. As a result, candling procedures did not effectively reduce the risk of accidental destruction of tax receipts and data.

Accounting for Items Found During Candling Was Insufficient

Lockbox guidelines require lockbox employees to account for each item found during candling. Some lockbox locations use two forms to record items found: an internal candling log and a Form 9535 or equivalent, which must be submitted to the lockbox coordinator each month for IRS review. Employees prepare the internal log while candling and later transfer the data onto the Form 9535. Since all items found during candling must be reported to IRS, the Form 9535 should record the same number and amounts of checks found as the internal log maintained while processing is occurring. However, the lockbox guidelines do not require a reconciliation of one set of records to the other, nor do they require a reconciliation of items found during candling to the candling records. During our review, we found that lockbox banks did not always have procedures to ensure that all items found were accurately recorded on both sets of logs, and bank managers could not properly account for all items found. At one location, the two sets of logs that bank management provided to us could not be reconciled. For example, a check was recorded on the internal log but not on the Form 9535. Management could not provide an adequate explanation or documentation for the discrepancy. At this location, we also noted delays of up to 6 days in transferring records of items found from the internal log to the Form 9535. Lockbox management explained that, because of the volume of work during the April peak, the Form 9535 could not always be completed, or the checks could not always be processed, the same day the items were found during candling. Another location's internal log recorded only tallies of the quantity of items found, which did not match the number of items listed on the log provided to IRS.
Because there was not enough information on the internal log, we could not determine whether items, such as checks, recorded on the internal log but not on the Form 9535 were ever processed and credited to the taxpayers' accounts. We identified this same issue during a visit to this location in October 2001 as part of our audit of IRS's fiscal year 2001 financial statements. After this visit, lockbox management and IRS agreed to address the problem. However, as of April 2002, it had not been corrected. IRS indicated that it plans to address the issue in the January 2003 revisions to the LPGs. At two other locations, we also found that no reconciliation occurred between items found during candling and the candling log. As a result of these weaknesses, lockbox management and IRS may not promptly detect missing checks. We also noted that the lockbox guidelines do not specifically require the banks to determine which machine or which extraction clerk missed items that were subsequently found during candling. Such information would help lockbox bank managers promptly track and repair malfunctioning machines and provide performance feedback to extraction clerks.

Management Review of Candling Process Was Insufficient or Not Adequately Documented

Because IRS officials are not at lockbox locations daily, IRS relies on lockbox managers to ensure that their staff complies with lockbox guidelines. In the case of candling requirements, IRS guidelines require lockbox managers to sample candled items to determine the effectiveness of the candling process and to report the results of the reviews to IRS. However, at four locations, we found that lockbox managers who indicated they had performed these reviews failed to document the results or to document them clearly. At two additional locations, managers stated that they did not sample candled envelopes. The manager at one of these locations believed that her frequent observation of the candling process was sufficient to ensure that the envelopes were properly candled. Given the problems we observed with the candling process at several of the locations we visited, management reviews and proper documentation of such reviews are critical to help ensure that problems are promptly identified and corrected.

Lockbox sites receive tax receipts in the form of cash as well as checks. Cash receipts are highly susceptible to theft, and the lockbox guidelines have specific requirements for safeguarding and accounting for cash receipts. The guidelines require cash to be stored in locked containers to prevent theft. The keys to these containers must not be left in unsecured places, such as desk drawers. The guidelines also require cash receipts to be recorded on a log. At three lockbox locations, we noted weaknesses in the safeguarding and accounting for cash. At one location, the locked cash box was stored in a locked drawer; however, the keys to both the cash box and the locked drawer were kept in an open drawer because bank management wanted supervisors to have quick access to the cash box to expedite the deposit of cash. At another location, cash for business and individual tax payments was stored in two separate safes. Three staff members had keys to the safes, each of which could be opened by an individual acting alone. Additionally, the safe used for business payments was small and movable. The individual access to the safes and the mobility of one of the safes compromised the security of the stored cash.
At two locations, we noted discrepancies between the bank's safe log and the log that the lockbox managers were required to provide to IRS. A safe log is an internal lockbox document that employees complete before they place cash in the safe or cash box. The safe log should identify the taxpayer, the amount paid, and the date the cash was discovered. When we reviewed the two sets of logs, we found that the dates, amounts, item counts, and taxpayer identification information in the logs did not agree, and bank managers could not reconcile some of the discrepancies. For example, at one location, five items on the safe log were not on the IRS log. One of the items was a $140 cash receipt. Because the safe log did not record taxpayer information for any of the receipts, we could not verify whether the receipt was ever posted to the taxpayer's account. There is currently no requirement in the LPGs for bank managers to reconcile their internal cash logs to the cash log sent to IRS. Such reconciliation, illustrated in the sketch below, could have quickly detected the discrepancies we noted. Bank management attributed some of these discrepancies to human error, inexperienced staff, and staff failure to take the accounting for cash seriously because the amounts found are typically small. However, individual taxpayers have made cash payments totaling hundreds of dollars. Failure to properly secure and account for cash receipts could result in theft, the delayed detection of theft, inaccurate posting of tax receipts, and unnecessary burden on taxpayers whose cash receipts are lost or incorrectly posted. It is therefore critical for lockbox banks to diligently safeguard and account for cash receipts.
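A log-to-log reconciliation of the kind described above is mechanically simple. The following is a minimal sketch; the record format (taxpayer ID, amount, date) is our assumption rather than anything the LPGs prescribe, and the same comparison works for reconciling internal candling logs against the Form 9535:

```python
from collections import Counter

def reconcile(internal_log, irs_log):
    """Compare two lists of (taxpayer_id, amount, date_found) records and
    return the entries that appear on one log but not the other. Counter
    subtraction handles duplicate receipts of the same amount correctly."""
    internal, reported = Counter(internal_log), Counter(irs_log)
    return {
        "on_internal_only": list((internal - reported).elements()),
        "on_irs_log_only": list((reported - internal).elements()),
    }

# Example: a $140 cash receipt recorded on the safe log but missing from
# the log provided to IRS surfaces immediately.
internal = [("TP-001", 140.00, "2002-04-16"), ("TP-002", 60.00, "2002-04-16")]
reported = [("TP-002", 60.00, "2002-04-16")]
diff = reconcile(internal, reported)
assert diff["on_internal_only"] == [("TP-001", 140.00, "2002-04-16")]
```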
Lockbox banks sometimes receive federal tax refund checks that taxpayers have returned as payment against other tax liabilities. Some of these checks have been endorsed by taxpayers and are therefore negotiable. As a safeguard against theft and misuse of returned refund checks, the lockbox guidelines require lockbox extraction clerks to promptly stamp a restrictive endorsement on returned refund checks. These checks are subsequently forwarded to IRS for further processing. At seven locations, however, we observed that returned refund checks were not always stamped upon extraction and at some locations were set aside, unsecured, to be stamped later by a different individual. At two locations, we found returned refund checks without the required stamps ready to be shipped to IRS. Lockbox management attributed many of these exceptions to staff oversight or inadequate training. Several years ago, IRS investigators discovered the theft of seven returned refund checks totaling $300,000 at one IRS Submission Processing Center. Such thefts can also occur at lockbox sites. Thus, it is critical that restrictive endorsements be placed on returned refund checks as soon as they are extracted.

The LPGs, as currently written, have resulted in timely tax payments being processed as delinquent. To determine timeliness, lockbox employees are required to examine the postmarks on the envelopes in which tax receipts were received. If the postmark indicates that the receipt is delinquent (i.e., the postmark is after the return or payment due date), the receipt should be processed as delinquent and the envelope should be attached to the corresponding document and forwarded to IRS. If lockbox employees determine that the receipt is timely (i.e., the postmark is on or before the return or payment due date), the envelope need not be retained. However, beginning with the first day of the month following the month the payment is due, the lockbox guidelines require lockbox employees to use the date the mail is received as the transaction date recorded in the taxpayer's account as the payment date. Furthermore, after that point the lockbox guidelines do not require lockbox employees to review or retain the envelopes in which the tax receipts were enclosed. At one lockbox location we visited, we observed that on May 1, 2002, lockbox employees received and extracted tax receipts from two envelopes postmarked on or before the April 15 payment due date. Because it was already past the period during which lockbox employees were required to examine postmarks and retain envelopes, the employees processed the payments as delinquent and would have discarded the envelopes even though they were aware that the envelopes were postmarked on or before the payment due date. When we brought these two transactions to the lockbox coordinator's attention, the lockbox coordinator concluded that, in these instances, the taxpayers had made timely payments. The lockbox coordinator subsequently adjusted the taxpayers' accounts to reflect the payments as timely. We recognize that careful examination of postmarks on the envelopes of all receipts received after the payment due dates slows payment processing, and we recognize the need to establish reasonable cutoff dates for this review process. However, lockbox employees should immediately seek processing guidance from the lockbox coordinator when incidents such as the ones noted above come to their attention, to avoid burdening taxpayers with erroneous penalties and interest for late payments. Additionally, as written, the LPGs could lead to taxpayer burden and unnecessary costs to IRS associated with correcting the status of taxpayers' accounts.
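The timeliness determination, together with the escalation the lockbox coordinator applied in the May 1 incident, can be sketched as follows. The function and field names are our own illustration and do not reproduce LPG language:

```python
from datetime import date
from typing import Optional

def payment_status(postmark: Optional[date], received: date, due: date) -> str:
    """Classify a mailed payment. Under the rule criticized above, after the
    postmark review period the received date alone drives processing; this
    sketch instead honors a timely postmark whenever one is visible, which is
    the outcome the lockbox coordinator reached in the incident described."""
    if postmark is None:
        # No postmark to consult: the receipt date governs.
        return "timely" if received <= due else "delinquent"
    if postmark <= due:
        # A visible postmark on or before the due date should be escalated
        # to the lockbox coordinator rather than processed as delinquent.
        return "timely"
    return "delinquent"

# The May 1, 2002 example: envelopes postmarked on or before April 15
# but extracted on May 1 are timely, not delinquent.
assert payment_status(date(2002, 4, 15), date(2002, 5, 1), date(2002, 4, 15)) == "timely"
```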
Lockbox banks entrust courier employees with transporting thousands of pieces of mail containing tax data, cash, checks, and deposits worth millions of dollars per day. Given the susceptibility of these items to theft and loss, it is extremely important that they be carefully safeguarded while in transit to and from lockbox sites. We previously reported on weaknesses in lockbox courier security and noted that lockbox courier guidelines were not as stringent as IRS courier guidelines. For example, unlike the IRS courier requirements, the LPGs did not require courier employees to pass a limited background investigation, increasing the risk of theft of tax data and receipts by couriers with criminal histories. The LPGs also did not require lockbox couriers to be bonded for $1 million or to travel in pairs while transporting IRS data. In fact, in past audits, we found that lockbox banks used only one courier employee to transport IRS data. These weaknesses in courier security increased the risk of theft and loss of taxpayer data and receipts. In response to our audit findings, IRS enhanced the lockbox courier requirements. The 2002 LPGs now require that couriers used by lockbox banks pass FBI fingerprint checks with favorable results, be bonded for $1 million, travel in pairs, transport IRS data from the lockbox facility to its destination with no stops in between, provide dedicated service to IRS, and lock and attend courier vehicles while IRS data are contained within the vehicles.

Despite these enhancements, we and other auditors continue to find weaknesses in lockbox courier security because of the lockbox sites' failure to consistently comply with the revised guidelines. In addition, the lockbox courier guidelines could be further refined to improve the security of tax data and receipts. IRS recently revised the background screening requirements for lockbox couriers. The revised LPGs, effective January 2002, prohibit couriers from having access to IRS data until lockbox managers have received the results of their FBI fingerprint checks and resolved any questionable fingerprint results. However, during a recent TIGTA audit of one lockbox site, auditors found that three couriers were allowed access to taxpayer information before the lockbox received the results of their fingerprint checks. Lockbox managers subsequently received the results of the FBI fingerprint checks, which indicated that two of these couriers had criminal histories. Nevertheless, TIGTA found that lockbox management continued to allow these two couriers, as well as an additional courier whose FBI fingerprint check also indicated a criminal history, access to taxpayer data while follow-up investigations (which subsequently cleared them) were under way. TIGTA auditors attributed this weakness to lockbox management's failure to develop and implement procedures to ensure that couriers are granted proper clearance before they receive access to IRS data.

Lockbox courier standards require courier employees to wear identification badges and lockbox banks to implement procedures to properly identify couriers. However, at two lockbox locations, we found that couriers did not wear their identification badges. At one of these locations, lockbox employees did not verify a courier's identification before entrusting him with taxpayer data because, they indicated, they were familiar with the courier. Additionally, at this location, the courier access list, which lists the couriers authorized to access tax data and includes their photo identification, was maintained at the guard station and was not easily accessible to the employees who must verify couriers' identities daily. Although the LPGs do not require it, some locations have posted the courier access lists by the loading dock doors to facilitate the identification of couriers. Unless lockbox employees diligently perform their duties to properly identify couriers, tax receipts and data are exposed to a higher risk of theft by recently terminated couriers or by unauthorized individuals posing as couriers.

The LPGs require courier vehicles to be locked whenever IRS data are in the vehicle, until it reaches its destination. Additionally, the vehicle must always be under the supervision of one of the couriers and never left unattended. At one lockbox site, we observed that couriers did not lock the courier vehicle containing tax data. The couriers stated that they generally did not lock the doors because they never left the IRS packages unattended. The lockbox guard did not check whether the vehicle was locked because there is no requirement to do so. IRS reviewers also observed couriers failing to lock courier vehicles during their April 2002 review of another lockbox location. Failure to ensure that courier vehicles are locked while containing taxpayer data and receipts increases the risk of loss of such items.

We also found that background screening requirements continue to be less stringent for lockbox couriers than for IRS couriers.
IRS couriers are subject to both an FBI fingerprint check and a basic background investigation for contractors before they are given access to tax receipts. This background investigation includes a check of other federal agency and defense clearance investigation databases for results of previous background investigations and a check for any outstanding tax liabilities. In contrast, lockbox couriers are only required to favorably clear an FBI fingerprint check and are subject to no further background investigation. As a result, IRS may not discover information, such as outstanding tax liabilities, that might cause IRS to deny lockbox couriers access to taxpayer data. IRS is aware of the risks associated with lockbox couriers and is considering enhancing the lockbox guidelines to require lockbox couriers to undergo basic background investigations similar to those required of IRS couriers.

IRS courier standards specifically prohibit the presence of unauthorized individuals in courier vehicles and require IRS personnel to inspect courier vehicles daily to ensure that no unauthorized passengers are in the vehicle. The LPGs, on the other hand, contain no prohibition of unauthorized individuals in courier vehicles and do not require lockbox staff or guards to inspect courier vehicles for unauthorized passengers. The guidelines state only that IRS reserves the right to inspect courier vehicles and drivers. Because IRS representatives are not on-site every day and there is no requirement for lockbox employees to inspect vehicles, unauthorized individuals could ride in courier vehicles and have access to taxpayer data and receipts without lockbox management's knowledge.

Because lockbox employees are entrusted with handling sensitive taxpayer information and billions of dollars in receipts annually, ensuring worker integrity through a carefully managed recruiting and hiring process is an area that demands special attention from IRS, FMS, and lockbox management. We previously reported that the screening of permanent and temporary lockbox employees was inadequate and untimely. Specifically, instead of referring to a national database to check for criminal records, lockbox banks limited criminal background screening for temporary employees to police records checks in counties that individuals voluntarily disclosed as prior residences. Such police records checks could therefore miss the criminal records of individuals who chose not to disclose the counties in which they had committed crimes. In addition, permanent lockbox employees were allowed to handle cash, checks, and taxpayer data before their fingerprint checks were completed.

IRS management has been responsive to our recommendations and has enhanced its policy on screening of permanent and temporary lockbox employees. The LPGs now require permanent and temporary employees to undergo FBI fingerprint checks. In contrast to the previous police records checks performed by county, FBI fingerprint checks are national in scope: an individual's fingerprints are matched against fingerprints maintained in the FBI's national database of criminal records. As a result, criminal records checks performed for lockbox applicants are no longer dependent on the applicant to accurately and completely disclose prior residences. The guidelines have also been updated to prohibit access to taxpayer data and receipts until lockbox management receives the results of an individual's fingerprint checks.
Results that show a possible criminal history must be resolved before the individual in question is allowed access to the lockbox site. The guidelines also require permanent lockbox employees to undergo an appropriate background investigation, as determined by an IRS security officer, in addition to an FBI fingerprint check. Despite these policy improvements, we found that lockbox banks did not always comply with the FBI fingerprint requirements and that further refinements are needed to the background investigation requirements for lockbox employees.

Based on our review of lockbox personnel files, we found that lockbox banks are generally complying with the new guidelines. However, we and IRS reviewers found instances of noncompliance at lockbox banks. This shows a need for IRS and FMS to ensure that lockbox management clearly understands the screening requirements and has implemented effective controls to prevent permanent and temporary employees from accessing tax data until they have favorably completed an FBI fingerprint check. Specifically, at three lockbox locations, we noted noncompliance related to the screening of permanent staff. At one location, we found that 4 out of a nonrepresentative selection of 25 permanent employees whose personnel files we reviewed began working at the lockbox location before bank management received their fingerprint check results. Bank management and IRS personnel explained that this situation occurred because IRS had verbally waived compliance with the screening requirements when the bank experienced delays in obtaining timely responses on fingerprint checks at the beginning of 2002. At the second location, weak controls to ensure that all employees successfully complete FBI fingerprint checks allowed a permanent bank employee to process receipts and taxpayer data for 3 months before lockbox managers discovered that the employee had not undergone a fingerprint check. The employee was removed from the processing floor until the fingerprint check was completed and approved. At the third location, a permanent employee was allowed to work for several days before the FBI fingerprint check was completed because bank management misunderstood the fingerprint check requirement for lockbox employees.

During its April 2002 peak review, IRS officials found similar problems at two other lockbox locations. One location allowed a temporary employee access to tax data before the completion of the employee's fingerprint check. At the other location, the temporary agencies listed 12 employees as eligible to work—6 of whom were already working—even though the employees either had not yet cleared their FBI fingerprint checks or had been denied clearance to access tax data. As discussed earlier, taxpayer information and receipts are easily accessible to anyone on the processing floor. Therefore, it is critical for lockbox banks to ensure that these items are properly safeguarded by diligently complying with all aspects of the LPGs, including the screening of lockbox employees. Late screening of lockbox employees could result in theft or loss if bank management unknowingly allows individuals with criminal backgrounds access to IRS data and receipts. The current LPGs require permanent lockbox employees to favorably complete an FBI fingerprint check and an appropriate background investigation, as determined by an IRS personnel security officer.
However, the LPGs do not define what constitutes an appropriate background investigation for permanent lockbox employees or what information about the results of the background investigation should be provided to IRS. As a result, the scope of and documentation for background investigations performed on permanent lockbox employees vary greatly among lockbox banks and are not consistent with the background investigations required of other IRS contractors. According to IRS officials, while some lockbox banks subject their permanent employees to the required FBI fingerprint check and additional background investigation, such as county criminal records and credit history checks, other lockbox banks subject their permanent employees to FBI fingerprint checks with no further background screening. Additionally, IRS officials found that investigation results they receive from banks do not provide adequate information to determine whether the individual should be allowed access to taxpayer data. For example, background investigation results may indicate that a criminal records check was completed but not whether any arrests or convictions were found. Other results may indicate that an arrest or conviction was found during a criminal records check but not the basis for the arrest or how recently or frequently the offense occurred. IRS officials also explained that some lockbox banks could not provide documentation of the results of background investigations performed on their permanent employees because, as a result of recent bank mergers, lockbox management did not have access to those records.

According to IRS officials, IRS did not foresee the problems with background investigations for permanent lockbox employees. As the variance in the scope of background investigations and in the adequacy of documentation of their results became evident, IRS decided to accept favorable results of FBI fingerprint checks as the minimum criterion for allowing permanent lockbox employees access to taxpayer data. As a result, the level of background screening performed on permanent employees is inconsistent among lockbox banks and with the requirements for other IRS contractors, such as IRS contracted couriers, whose backgrounds are checked against other investigation databases and for tax liabilities, as previously discussed. Additionally, permanent employees who were granted access to tax data based only on favorable FBI fingerprint checks, in effect, received the same level of background screening as temporary employees and less than that of IRS contracted couriers, even though permanent employees have more influence and authority over lockbox operations and are granted more access rights to various sections of the lockbox sites. Because some banks subject their permanent employees to less scrutiny than other banks when performing background investigations, IRS may not be aware of critical information that a more thorough background investigation could uncover, such as recent criminal records not yet reported to the FBI, which might cause IRS to deny those employees access to taxpayer data.

TIGTA auditors recently reported that lockbox banks often employ non-U.S. citizens with lawful permanent residence to process IRS tax payments. Although IRS hires only U.S. citizens, IRS and FMS have allowed lockbox banks to hire non-U.S. citizens.
TIGTA auditors found this policy to be consistent with guidelines from the Department of the Treasury regarding the hiring of contract employees. However, this hiring practice may pose unnecessary risks to IRS materials because the FBI fingerprint check, although national in scope, may disclose very little about individuals who have lived in this country for only a short time. The Department of State and the Immigration and Naturalization Service perform some background checks before issuing visas to nonresidents or upgrading visas in ways that may allow individuals to achieve lawful permanent resident status. However, neither the TIGTA auditors nor IRS knows the extent of these background checks. In response to this finding, IRS agreed to form a task group to review the current standards. If IRS determines that the standards do not provide adequate protection or that the risk is not reduced by other security measures, IRS will incorporate more stringent requirements into the LPGs after coordinating with FMS and the Department of the Treasury. The uncertainty about the criminal histories of non-U.S. citizens hired by lockbox banks may lead to the hiring of individuals with criminal histories, which, in turn, increases the risk of theft of receipts or misuse of tax data. For instance, TIGTA auditors reported that evidence regarding the theft of checks from one lockbox site indicates the involvement of a crime ring from a foreign country in the negotiation of, and possibly in the actual theft of, taxpayer checks.

In August 1999, an IRS/FMS taskforce issued a study entitled 1040 Tax Payment Comparative Cost Benefit Study. The study estimated the costs and interest savings from processing Form 1040 tax receipts at IRS compared to lockbox banks under three different IRS scenarios. Table 3 shows the IRS/FMS taskforce results for all three IRS scenarios for each of 7 fiscal years (fiscal years 2001 through 2007) and overall. All three IRS scenarios used the same estimated lockbox bank processing costs of $144.9 million. IRS interest float savings and processing costs varied because assumptions differed across scenarios. Scenario I assumed an interest float savings of 1.384 days, while scenarios II and III used 3 days and 10.6 days, respectively. Processing costs for scenario I assumed additional processing equipment, additional staff, additional space, and a 10 percent increase in processing productivity. Scenario II used the same assumptions as scenario I except that it assumed no increase in processing productivity. Scenario III used the same assumptions as scenario II except that it assumed no additional processing equipment, staff, or space. We focused on IRS scenario I for further analysis because it was the scenario used to justify the decision to continue using lockbox banks to process tax receipts. Table 4 shows the IRS/FMS taskforce results for scenario I in more detail for each of the 7 fiscal years (2001 through 2007).

We analyzed the documented support for the data used to develop estimates in the study. The support often came from historical data on lockbox banks' and IRS's processing. We generally found some documented support for the methodology and assumptions used for the cost and revenue estimates. We could not compare support for the specific cost estimates, however, because the lockbox banks had only a basic charge for processing tax receipts and an additional charge to sort tax returns and ship them to IRS. The estimation methodology and assumptions used in the study were the same for each year.
To illustrate the methodology and assumptions, we reviewed how the estimates were developed for the first year—fiscal year 2001. For example, as shown in table 4, the net saving of $1.2 million is the difference between the cost disadvantage and the interest savings advantage of using lockbox banks. For costs, the study estimated that processing the tax receipts through lockbox banks would cost $11.3 million more than processing them through IRS. For interest savings, the study estimated that using lockbox banks would save $12.5 million more than using IRS. For cost estimates, a key factor was the estimated number of tax receipts, which was based on the actual number of 1998 tax receipts and projected for future years using expected growth rates.

To understand IRS's costs for fiscal year 2001, we analyzed their four components—labor costs, basic support costs, equipment depreciation, and site preparation depreciation. IRS labor costs were often estimated from IRS's Cost Estimate Reference guide, which provides estimated costs for particular activities at IRS, including estimated labor cost and staff hours for processing tax returns. We traced each cost estimate in the study to the IRS cost guide. We also discussed the cost estimates with the IRS analyst who made and documented the computations for the study. Table 5 breaks down the IRS labor cost estimate for fiscal year 2001. The other three components of IRS's processing cost estimate were as follows:

Basic support cost ($291,679): Service and supplies, equipment, and printing, on the basis of rates listed in the IRS cost guide.

Equipment depreciation ($1,317,796): IRS would need to spend an estimated $6,588,980 on hardware, furniture, and software if it processed Form 1040 tax receipts instead of lockbox banks. This cost was depreciated over a 5-year period in equal annual amounts.

Site preparation depreciation ($120,000): IRS would need to spend $600,000—$300,000 at each of two IRS locations—to prepare space to accommodate new equipment required to process the increased volume of tax receipts. This cost was depreciated over a 5-year period in equal annual amounts.

We also analyzed the added interest savings if lockbox banks processed the tax receipts instead of IRS. The IRS/FMS taskforce study followed a formula in Treasury regulations to compute this estimate: total tax receipts, divided by total deposit days, multiplied by the interest float in days and by an estimated federal funds rate. For fiscal year 2001, the study used total tax receipts of $45,224,421,259; 250 deposit days, as specified in the Treasury regulations; an interest float of 1.384 days; and an estimated federal funds rate of 5 percent. The interest float represents how much faster lockbox banks could process tax receipts than IRS in three areas, totaling 1.384 days:

Compressing the program completion date (PCD): 1.000 day. The PCD is the day when lockbox banks must finish processing during peak workload periods and return to a schedule of depositing receipts within 24 hours.

Mail float: measured from the time a taxpayer mails a payment until it arrives at a lockbox bank or IRS.

Availability float: measured from the time a receipt is deposited until the funds are credited to the Treasury. Together, mail float and availability float account for the remaining 0.384 day.

We examined the basis for each of these three estimates. Mail and availability float figures were taken from a July 1998 interest float study done by a contractor for FMS.
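Evaluating the Treasury formula with the study's stated fiscal year 2001 inputs reproduces its figures. The worked computation below uses only the numbers given above, rounded as the study presents them:

\[
\text{Added interest savings} = \frac{\$45{,}224{,}421{,}259}{250\ \text{deposit days}} \times 1.384\ \text{days} \times 0.05 \approx \$12.5\ \text{million}
\]

\[
\text{Net saving} = \$12.5\ \text{million (interest savings)} - \$11.3\ \text{million (added cost)} = \$1.2\ \text{million}
\]

The depreciation figures above follow the same straight-line convention: $6,588,980 in equipment spending spread evenly over 5 years yields $1,317,796 per year, and $600,000 in site preparation yields $120,000 per year.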
The PCD figure came from an agreement by lockbox banks to compress the PCD by 1 day; the study concluded that IRS could not match this compression for a number of reasons. A new interest float study would have to be done to determine the actual float advantage, if any, of using lockbox banks rather than IRS to process the tax receipts.

The following are GAO's comments on the Department of the Treasury's letter dated December 20, 2002. 1. See "Agency Comments and Our Evaluation" section. 2. IRS and FMS indicated the need for one technical clarification regarding our use of the terms "contracts," "contractor," and "contractual agreements" with respect to lockbox banks and recommended that we delete all references to "contracts" and "contractors." IRS and FMS stated that when lockbox banks perform services for IRS, they act in a financial agent capacity on behalf of Treasury and that this function does not constitute a procurement or contract within the meaning of the Federal Property and Administrative Services Act or the Federal Acquisition Regulation (FAR). We recognize that the lockbox agreements are not procurements for purposes of the act or the FAR, but for ease of reference we did not change the language used in the report. It should be noted that Treasury also uses contract terminology in discussing lockbox agreements. Specifically, the Treasury Financial Manual gives FMS "the exclusive authority to contract for lockbox services with the selected bank and the agency" and further states that "an agency is prohibited from entering into new contractual agreements … without the prior approval of FMS." In addition, in the IRM, IRS defines a lockbox depositary agreement as a "contractual agreement signed by IRS, FMS and the Lockbox that provides the requirements of the activities performed as the commercial depositories." Our use of contract terminology in this report is consistent with Treasury's use of such terminology in the TFM and the IRM. We did add a footnote (see footnote 2) to clarify that while the agreements with the lockbox banks are legally binding, they are not procurements subject to the provisions of the Federal Property and Administrative Services Act or the FAR, and to indicate that we use the terms "contracts" and "contractors" in the report for ease of reference.

In addition to those named above, Larry Dandridge, Marshall Hamlett, Aaron Holling, Jeffrey Jacobson, Casey Keplinger, Laurie King, Delores Lee, Yola Lewis, Larry Malenich, Julia Matta, Tom Short, and Esther Tepper made key contributions to this report.
Lockbox banks are commercial banks that process certain taxpayer receipts on behalf of the Internal Revenue Service (IRS). Following an incident at a lockbox site during 2001, which involved the loss and destruction of about 78,000 tax receipts totaling more than $1.2 billion, the Senate Committee on Finance asked GAO to examine whether (1) provisions of the contracts under which lockbox banks operate address previously identified problems or might contribute to mishandling of tax receipts, (2) oversight of lockbox banks is adequate, (3) internal controls are sufficient, and (4) IRS and Treasury's Financial Management Service (FMS) had considered the costs and benefits of contracting out the functions performed by lockbox banks. FMS has contractual agreements with four lockbox banks, which operate 11 lockbox sites at nine locations on IRS's behalf. Of the more than $2 trillion in tax receipts that IRS collected in fiscal year 2002, lockbox banks processed approximately $268 billion. The findings of GAO's study include the following: (1) Nothing inherent in the lockbox contractual agreements would necessarily contribute to mishandling of tax receipts. Although a desire to avoid negative consequences, such as financial or other penalties allowed for by the agreements, could motivate bank employees to make poor decisions, penalty provisions are necessary to help the government address inadequate performance. The results of an ongoing investigation of the 2001 incident may help IRS and FMS determine whether new provisions or modifications to existing provisions are needed. (2) Although IRS and FMS have significantly increased their presence at lockbox sites, oversight of lockbox banks during fiscal year 2002 was not fully effective in ensuring that taxpayer data and receipts were adequately safeguarded and properly processed. Inadequate oversight resulted mainly from a lack of clear oversight directives and policies; failure to perform key oversight functions; and conflicting roles and responsibilities of IRS personnel responsible for day-to-day oversight of lockbox banks. (3) Internal controls, including physical security controls, need to be strengthened at IRS lockbox locations. In addition, the processing guidelines under which IRS lockbox banks operate need to be revised to improve receipt-processing controls, employment screening, and courier security. (4) IRS and FMS have not performed a comprehensive study of the costs and benefits of using lockbox banks. The most recent study, in 1999, omitted some costs that may have affected the result. For example, the study did not consider opportunity costs—benefits foregone that might have resulted from alternative uses of the money. Because of these omissions and several changes that have affected costs and benefits, a new study will be needed before lockbox contracts expire in 2007.
FTA generally funds New Starts projects through FFGAs, which are required by statute to establish the terms and conditions for federal participation in a New Starts project. FFGAs also define a project's scope, including the length of the system and the number of stations; its schedule, including the date when the system is expected to open for service; and its cost. To obtain FFGAs, New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different options, such as rail lines or bus routes, within a specific corridor rather than across an entire region. The alternatives analysis phase results in the selection of a locally preferred alternative, which is the New Starts project that FTA evaluates for funding. After a locally preferred alternative is selected, the project sponsor submits an application to FTA for the project to enter the preliminary engineering phase. When this phase is completed and federal environmental requirements under the National Environmental Policy Act are satisfied, FTA may approve the project's advancement into final design, after which FTA may recommend the project for an FFGA and advance the project into construction. FTA oversees grantees' management of projects from the preliminary engineering phase through the construction phase.

To help inform administration and congressional decision makers about which projects should receive federal funds, FTA currently distinguishes among proposed projects by evaluating and assigning ratings to various statutory evaluation criteria—including both local financial commitment and project justification criteria—and then assigning an overall project rating. (See fig. 1.) These evaluation criteria reflect a range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA has developed specific measures for each of the criteria outlined in the statute. On the basis of these measures, FTA assigns the proposed project a rating for each criterion and then assigns a summary rating for local financial commitment and project justification. These two summary ratings are averaged to create an overall rating, which is used in conjunction with a determination of the project's "readiness" for construction to determine which projects are recommended for funding. Projects are rated at several points during the New Starts process—as part of the evaluation for entry into the preliminary engineering and the final design phases, and yearly for inclusion in the New Starts Annual Report. As required by statute, the administration uses the FTA evaluation and rating process, along with the development phases of New Starts projects, to decide which projects to recommend to Congress for funding.

Numerous changes have been made to the New Starts program over the last decade, including statutory, regulatory, and administrative changes. For example, we reported in 2005 that FTA had implemented 16 changes to the New Starts application, evaluation, rating, and project development oversight process since the fiscal year 2001 evaluation cycle.
Additional changes have been made to the program since 2005. Examples of changes made to the program over the last 10 years, in chronological order, include the following.

New data collection requirements: Starting with the fiscal year 2004 evaluation cycle, FTA required project sponsors seeking an FFGA to submit a plan for the collection and analysis of information to determine the impacts of the project and the accuracy of the forecasts that were prepared during project planning and development. SAFETEA-LU subsequently codified this "before and after" study requirement.

Evaluation measures revised: FTA revised its cost-effectiveness and mobility improvements criteria by adopting the Transportation System User Benefits (TSUB) measure, which includes benefits for both new and existing transit system riders. Although project sponsors generally view the new cost-effectiveness measure of cost per hour of TSUB as an improvement over the previous measure of cost per new rider, we have reported that some project sponsors have had difficulties correctly calculating the TSUB value for their projects, resulting in delays and additional costs as they conduct multiple iterations of the TSUB measure.

New analysis requirement added: Starting with the fiscal year 2005 evaluation cycle, FTA required project sponsors to complete risk assessments. The form and timing of the risk assessments have evolved since they were introduced, but their intent remains to identify the issues that could affect a project's schedule or cost.

Policy on funding recommendations changed: In 2005, the administration informed the transit community that it would target its funding recommendations to projects that achieve a cost-effectiveness rating of medium or higher. Previously, the administration had recommended projects for funding that had lower cost-effectiveness ratings, if they met all other criteria.

New programs established: SAFETEA-LU established the Small Starts program, a new capital investment grant program, simplifying the requirements imposed on those seeking funding for lower-cost projects. This program is intended to advance smaller-scale projects through an expedited and streamlined evaluation and rating process. FTA subsequently introduced a separate eligibility category within the Small Starts program for Very Small Starts projects, which qualify for an even simpler evaluation and rating process.

New evaluation criteria introduced: Given past concerns that the evaluation process did not account for a project's impact on economic development, SAFETEA-LU added economic development to the list of project justification criteria that FTA must use to evaluate and rate New Starts projects.

Although the impetus for each change varied, FTA officials stated that, in general, all of the changes the agency has initiated were intended to make the process more rigorous, systematic, and transparent. This increased rigor, in turn, helps FTA and project sponsors deliver more New Starts projects within budget and on time, according to FTA. However, frequent changes to the New Starts program create challenges for project sponsors. For example, we have previously reported that some project sponsors told us that FTA did not create clear expectations or provide sufficient guidance about certain changes.
In addition, we reported that project sponsors said some changes made the application process more expensive and required them to spend significantly more time completing the application. We have heard similar concerns from project sponsors during our ongoing review. Specifically, some project sponsors we interviewed told us that they have had to redo completed analyses because FTA applies regulatory and administrative changes to projects in the pipeline. In general, according to project sponsors and other stakeholders we have spoken to, this rework adds time and cost to completing the New Starts project development process.

FTA currently assigns a 50 percent weight to both the cost-effectiveness and the land use criteria when developing the project justification summary rating. The other project justification criteria are not weighted, although the mobility improvements criterion is used as a "tiebreaker." FTA officials have told us that they do not currently use the environmental benefits and operating efficiencies criteria in determining the project justification summary rating because the measures, as currently structured, do not provide meaningful distinctions among competing New Starts projects. FTA does not use the economic development criterion because of the difficulty of developing a measure that is separate and distinct from the land use criterion. We have found in the past that many project sponsors had similar views, noting that individual projects are too small to have much impact, in terms of, for example, air quality, on the whole region or the whole transit system. In contrast, FTA officials have told us that the cost-effectiveness and land use measures help to make meaningful distinctions among projects. For example, according to FTA, existing transit supportive land use plans and policies demonstrate an area's commitment to transit and are a strong indicator of a project's future success. Furthermore, according to many FTA officials, experts, and the literature we have consulted, FTA's cost-effectiveness measure accounts for most secondary project benefits, including economic development, because these benefits are typically derived from mobility improvements that reduce users' travel times. Therefore, developing new measures for these other criteria may result in the double-counting of certain project benefits.

However, in 2008, we reported that FTA's evaluation measures could be underestimating total project benefits. FTA's measure of cost-effectiveness, for instance, considers how the mobility improvements from a proposed project will reduce users' travel times. Although this measure can capture most secondary project benefits, it does not account for benefits to non-transit users (e.g., highway travel time savings) or capture any economic development benefits that are not directly correlated with mobility improvements. Because of these omissions, proposed projects that convey significant travel time savings for motorists, for example, receive no credit for those savings in the selection process. Beyond the cost-effectiveness measure, we reported that project sponsors and experts expressed frustration that FTA does not include certain criteria, such as economic development and environmental benefits, in the calculation of project ratings. They noted that this practice limits the information captured on projects, particularly since these are important benefits of transit projects at the local level.
As a result, FTA may be underestimating projects' total benefits, particularly in areas looking to use these projects as a way to relieve congestion or promote more high-density development. In these cases, however, the extent to which FTA's current approach to estimating benefits affects how projects are ranked in FTA's evaluation and ratings process is unclear. FTA officials have acknowledged these limitations but noted that improvements in local travel models are needed to resolve some of these issues. In particular, according to FTA officials, many local models used to estimate future travel demand for New Starts are incapable of reliably estimating the highway travel time savings that would result from a proposed project. There is great variation in the models local transportation planning agencies use to develop travel forecasts (which underlie many of the New Starts measures), producing significant variation in forecast quality and limiting the ability to assess quality against the general state of practice.

In 2008, we made a series of recommendations designed to address the limitations of FTA's current evaluation process, including recommending that (1) the Secretary of Transportation seek additional resources in the next authorizing legislation to improve local travel models and thereby improve the New Starts evaluation process and the measures of project benefits; (2) FTA establish a timeline for issuing, awarding, and implementing the result of its request for proposals on short- and long-term approaches to measuring highway user benefits from transit improvements; (3) the Administrators of FTA and the Federal Highway Administration collaborate in efforts to improve the consistency and reliability of local travel models; and (4) the Administrator of FTA establish a timeline for initiating and completing its longer-term effort to develop more robust measures of transit projects' environmental benefits. FTA is working to address these recommendations. For instance, FTA conducted a colloquium on environmental benefits of transit projects in October 2008, which resulted in a discussion paper on the evaluation of economic development. Further, in a Federal Register notice published on January 26, 2009, FTA issued and sought comments on a discussion paper on new ways of evaluating economic development effects. FTA is now reviewing comments on that paper.

In May 2009, FTA also took steps to address concerns about the exclusion of some project justification criteria from the evaluation process. In a Notice of Availability for New Starts and Small Starts Policies and Procedures and Requests for Comments in the Federal Register, FTA proposed changing the weights assigned to the project justification criteria for New Starts projects. Specifically, FTA proposes to set the weights at 20 percent each for the mobility, cost-effectiveness, land use, and economic development criteria, and 10 percent each for the operating efficiencies and environmental benefits criteria. According to FTA, these changes reflect statutory direction that project justification criteria should be given "comparable, but not necessarily equal, numerical weight" in calculating the overall project rating. FTA is currently soliciting public comments on these proposed changes.
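To make the weighting arithmetic concrete, the ratings computation can be written out as follows. This is an illustrative formulation only, not FTA's published formula: it assumes each criterion rating is first expressed numerically, as the statute's reference to "numerical weight" implies.

\[
\text{Current practice:}\quad R_{\text{just}} = 0.50\,R_{\text{cost-eff}} + 0.50\,R_{\text{land use}}
\]

\[
\text{Proposed:}\quad R_{\text{just}} = 0.20\,(R_{\text{mobility}} + R_{\text{cost-eff}} + R_{\text{land use}} + R_{\text{econ dev}}) + 0.10\,(R_{\text{oper eff}} + R_{\text{env}})
\]

\[
\text{Overall rating:}\quad R_{\text{overall}} = \tfrac{1}{2}\left(R_{\text{financial}} + R_{\text{just}}\right)
\]

In both versions the weights sum to 1.0, but the proposal redistributes weight from the two currently dominant criteria across all six, so a project that is strong on mobility or economic development but weaker on land use could rate differently under the proposal.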
We reported in 2008 that experts and some project sponsors we spoke with generally support FTA's quantitatively rigorous process for evaluating proposed transit projects but are concerned that the process has become too burdensome and complex and, as noted earlier, may underestimate certain project benefits. For example, several experts and transportation consultants told us that although it is appropriate to measure the extent to which transit projects create primary and secondary benefits, such as mobility improvements and economic development, it is difficult to quantify all of these projected benefits. Additionally, several project sponsors noted that the complexity of the evaluation process can necessitate hiring consultants to handle the data requests and navigate the application process—which can increase a project's costs. Our previous reviews of the New Starts program have noted similar concerns from project sponsors. For example, in 2007, we reported that a majority of project sponsors told us that the complexity of the requirements—such as the analysis and modeling required for travel forecasts—creates disincentives for entering the New Starts pipeline. Sponsors also said that the expense involved in fulfilling the application requirements, including the costs of hiring additional staff and consultants, discourages agencies with fewer resources from applying for this funding.

In response to such concerns, FTA has tried to simplify the evaluation process in several ways. For example, as previously mentioned, FTA established the Very Small Starts eligibility category within the Small Starts program for projects less than $50 million in total cost. This category further simplifies the application requirements in place for the Small Starts program, which funds lower-cost projects, such as bus rapid transit and streetcar projects. Additionally, in its New Starts program, FTA no longer rates projects on the operating efficiencies criterion because, according to FTA, operating efficiencies are already sufficiently captured in FTA's cost-effectiveness measures, and the measure did not adequately distinguish among projects. Thus, projects no longer have to submit information on operating efficiencies. Likewise, FTA no longer requires project sponsors to submit information on environmental benefits because it found that the information gathered did not adequately distinguish among projects and that EPA's ambient air quality rating was sufficient.

FTA also commissioned a study by Deloitte in June 2006 to review the project development process and identify opportunities for streamlining or simplifying the process. This study identified a number of ways that FTA's project development process could be streamlined, including revising the policy review and issuance cycle to limit major policy and guidance changes to once every 2 years and conducting a human capital assessment to identify skill gaps and opportunities for reallocating resources in order to enhance FTA's ability to review and assist New Starts projects in a timely and efficient manner. According to FTA, the agency has implemented 75 percent of Deloitte's recommendations; some of the other recommendations are on hold pending the upcoming reauthorization of the surface transportation program, including the New Starts program.
As part of our ongoing work, we are reviewing existing research, including past GAO reports, analyzing data on the length of time it takes for projects to complete the New Starts process, and interviewing project sponsors, industry stakeholders and consultants, and transportation experts to identify options to expedite project development in the New Starts program. Using these sources, we have preliminarily identified the following options. While each option could help expedite project development, each has advantages and disadvantages to consider, and some options could require legislative changes. In addition, each option would likely require certain trade-offs, namely reducing the level of rigor in the evaluation process in exchange for a more streamlined process. The discussion that follows is not intended to endorse any potential option, but instead to describe some potential options for expediting project development. We will continue to work with FTA and other stakeholders to identify other options as well as examine the merits and challenges of all identified options for inclusion in our report later this summer.

Tailor the New Starts evaluation process to risks posed by the projects: Project sponsors, consultants, and experts we interviewed suggested that FTA adopt a more risk-based evaluation process for New Starts projects based on the project's cost or complexity, the federal share of the project's costs, or the project sponsor's New Starts experience. For example, FTA could align the level of oversight with the proposed federal share of the project—that is, the greater the financial exposure for the federal government, the greater the level of oversight. Similarly, FTA could reduce or eliminate certain reviews for project sponsors who have successfully developed New Starts projects in the past, while applying greater oversight to project sponsors who have no experience with the New Starts process. We have noted the value of using risk-based approaches to oversight. For example, we have previously reported that assessing risks can help agencies allocate finite resources and help policymakers make informed decisions. By adopting a more risk-based approach, FTA could allow select projects to move more quickly through the New Starts process and could use its scarce resources more efficiently. However, the trade-off of not applying all evaluation measures to every project is that FTA could miss the opportunity to detect problems early in a project's development.

Consider greater use of letters of intent and early systems work agreements: The linear, phased evaluation process of the New Starts program hampers project sponsors' ability to use alternative project delivery methods, such as design-build, according to project sponsors. These alternative project delivery methods have the potential to deliver a project more cheaply and quickly than traditional project delivery methods can. However, project sponsors told us it is difficult to attract private sector interest early enough in the process to use alternative project delivery methods because there is no guarantee that the project will ultimately receive federal funding through the New Starts program. The Deloitte study also noted that New Starts project sponsors miss the opportunity to use alternative project delivery methods because of the lack of early commitment of federal funding for the projects.
To encourage the needed private sector involvement, project sponsors, consultants, and experts we interviewed suggested that FTA use letters of intent or early systems work agreements. Through a letter of intent, FTA announces its intention to obligate an amount from future available budget authority to a project. A challenge of using letters of intent is that they can be misinterpreted as an obligation of federal funds, when in fact they only signal FTA's intention to obligate future funds should the project meet all New Starts criteria and requirements. In contrast, an early systems work agreement obligates an amount of available budget authority to a project. The challenge of using an early systems work agreement is that FTA can only use these agreements with projects that will be granted an FFGA, thus limiting FTA's ability to use them for projects in the pipeline.

Consistently use road maps or similar project schedules: Project sponsors said that FTA should more consistently use road maps or similar tools to define the project sponsor's and FTA's expectations and responsibilities for moving the project forward. Unless these expectations are established, project sponsors have little information about how long it will take FTA to review a request to move from alternatives analysis to preliminary engineering, for example. This lack of information makes it difficult for the project sponsor to effectively manage the project. Given the benefits of clearly setting expectations, Deloitte recommended that FTA use road maps for all projects. FTA has used road maps for select projects, but the agency does not consistently use them for all projects. A limitation of road maps is that expected time frames are subject to change—that is, project schedules often change as a project evolves throughout the development process. Furthermore, every project is unique, making it difficult to set a realistic time frame for each phase of development. Consequently, road maps can provide only rough estimates of expected time frames.

Combine two or more project development phases: Project sponsors and consultants told us that waiting for FTA's approval to enter preliminary engineering, final design, and construction can cause delays. While FTA determines whether a project can advance to the next project development phase, work on the project essentially stops. Project sponsors can advance the project at their own risk, meaning they could have to redo the work if FTA does not subsequently approve an aspect of the project. The amount of time it takes for FTA to determine whether a project can advance can be significant. For example, one project sponsor told us that FTA's review of its application to advance from alternatives analysis to preliminary engineering took 8 months—about the same amount of time it took the project sponsor to complete the alternatives analysis itself. FTA officials told us the length of time for reviews depends on a number of factors, most importantly the completeness and accuracy of the project sponsor's submissions. To reduce the "start/stop" phenomenon project sponsors described, FTA could seek a legislative change to combine two or more of the statutorily required project development phases—for example, combining the preliminary engineering and final design phases. The Deloitte study also recommended that FTA redefine or more clearly define the project phases to more accurately reflect FTA's current requirements and to better accommodate alternative delivery methods.
Apply changes only to future projects: Project sponsors told us that the frequent changes to the New Starts program can result in additional costs and delays because project sponsors are required to redo analyses to reflect the changes. In an attempt to create a process that provides more stability for project sponsors, in May 2006, FTA modified its policy so that a project approved for entry into final design is not subject to subsequent changes in New Starts policy and guidance. However, this policy change does not apply to projects approved for entry into preliminary engineering, which is the New Starts project development phase that imposes the most requirements on project sponsors and the phase in which, project sponsors told us, frequent changes to the project by sponsors and to the New Starts process by FTA result in additional costs and delays. Furthermore, another project sponsor noted that new requirements cause delays because the elements of a proposed project are interrelated, so changing one requirement can stop momentum on a project. To avoid this rework, some project sponsors, consultants, and experts we interviewed suggested that FTA apply changes only to future projects, not to projects currently in preliminary engineering. However, by not applying changes to projects in preliminary engineering, FTA could miss the opportunity to enhance its oversight of these projects.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. For further information on this testimony, please contact A. Nicole Clowers, Acting Director, Physical Infrastructure Issues, at (202) 512-2834, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony were Kyle Browning, Gary Guggolz, Raymond Sendejas, and Carrie Wilks.

Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects. GAO-08-844. Washington, D.C.: July 25, 2008.
Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007.
Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006.
Public Transportation: Preliminary Information on FTA's Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006.
Opportunities Exist to Improve the Communication and Transparency of Changes Made to the New Starts Program. GAO-05-674. Washington, D.C.: June 28, 2005.
Mass Transit: FTA Needs to Better Define and Assess Impact of Certain Policies on New Starts Program. GAO-04-748. Washington, D.C.: June 25, 2004.
Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003.
Mass Transit: FTA's New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002.
Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001.
Mass Transit: Implementation of FTA's New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000.
Mass Transit: Status of New Starts Transit Projects With Full Funding Grant Agreements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.
Mass Transit: FTA's Progress in Developing and Implementing a New Starts Evaluation Process. GAO/RCED-99-113. Washington, D.C.: April 26, 1999.
The New Starts program is an important source of new capital investment in mass transportation. As required by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users, the Federal Transit Administration (FTA) must prioritize transit projects for funding by evaluating, rating, and recommending projects on the basis of specific financial commitment and project justification criteria, such as cost-effectiveness, economic development effects, land use, and environmental benefits. To be eligible for federal funding, a project must advance through the project development phases of the New Starts program, including alternatives analysis, preliminary engineering, and final design. Using the statutorily identified criteria, FTA evaluates projects as a condition for advancement into each project development phase of the program. This testimony discusses (1) key challenges associated with the New Starts program and (2) options that could help expedite project development in the New Starts program. This testimony is based on GAO's extensive body of work on the New Starts program and on ongoing work directed by Congress. For this work, GAO reviewed FTA documents and interviewed FTA officials, sponsors of New Starts projects, and representatives from industry associations. FTA reviewed the information in this testimony and provided technical comments.

Previous GAO work has identified three key challenges associated with the New Starts program. First, frequent changes to the New Starts program have sometimes led to confusion and delays. Numerous changes have been made to the New Starts program over the last decade, such as revising and adding new evaluation criteria and requiring project sponsors to collect new data and complete new analyses. Although FTA officials told GAO that changes were generally intended to make the process more rigorous, systematic, and transparent, project sponsors said the frequent changes sometimes caused confusion and rework, resulting in delays in advancing projects. Second, the current New Starts evaluation measures do not capture all project benefits. For example, FTA's cost-effectiveness measure does not account for highway travel time savings and may not capture all economic development benefits. FTA officials have acknowledged these limitations but noted that improvements in local travel models are needed to resolve some of these issues. FTA is also conducting research on ways to improve certain evaluation measures. Third, striking the appropriate balance between maintaining a robust evaluation process and minimizing its complexity is challenging. Experts and some project sponsors GAO spoke with generally support FTA's quantitatively rigorous process for evaluating proposed transit projects but are concerned that the process has become too burdensome and complex. In response to such concerns, FTA has tried to simplify the evaluation process in several ways, including hiring a consulting firm to identify opportunities to streamline or simplify the process.

As part of ongoing work, GAO has preliminarily identified options to help expedite project development within the New Starts program. These options include tailoring the New Starts evaluation process to the risks posed by projects, using letters of intent more frequently, and applying regulatory and administrative changes only to future projects.
While each option could help expedite project development in the New Starts process, each has advantages and disadvantages to consider. For example, by signaling early federal support for projects, letters of intent and early systems work agreements could help project sponsors use potentially less costly and time-consuming alternative project delivery methods, such as design-build. However, such early support poses some risk, as projects may stumble in later project development phases. Furthermore, some options, like combining two or more statutorily required project development phases, would require legislative action.
Today there is widespread frustration with the budget process. It is attacked as confusing, time-consuming, burdensome, and repetitive. In addition, the results are often disappointing to both participants and observers. Although frustration is nearly universal, there is less agreement on what specific changes would be appropriate. This is not surprising. It is in the budget debate that the government determines in which areas it will be involved and how it will exercise that involvement. Disagreement about the best process to reach such important decisions and how to allocate precious resources is to be expected.

We have made several proposals based on a good deal of GAO work on the budget, including the structure of the budget and the budget process. These proposals emphasize the need to improve the recognition of the long-term impact of today's budget decisions and advance steps to strengthen or better ensure accountability. In previous reports and testimonies, we have said that the nation's economic future depends in large part upon today's budget and investment decisions. Therefore, it is important for the budget to provide a long-term framework and be grounded in a linkage of fiscal policy with the long-term economic outlook. This would require a focus both on overall fiscal policy and on the composition of federal activity. In previous reports, we have cautioned that the objective of enhancing long-term economic growth through overall fiscal policy is not well served by a budget process that focuses on the short-term implications of various spending decisions. It is important to pay attention to the long-term overall fiscal policy path, to the longer-term implications of individual programmatic decisions, and to the composition of federal spending. Although budget decisions are necessarily made in the very short term, planning for longer-range economic goals requires exploring the implications of budget decisions well into the future. By this, we do not mean that detailed budget projections could be made over a 30-year time horizon, but it is important to recognize that for some programs a long-term perspective is critical to understanding the fiscal and spending implications of a decision. The current 5-year time horizon may work well for some programs, but for retirement programs, pension guarantees, and mortgage-related commitments—for example—a longer time horizon is necessary.

Although the surest way of increasing national savings and investment would be to reduce federal dissaving by eliminating the deficit, the composition of federal spending also matters. We have noted that federal spending can be divided into two broad categories based on its economic impact—consumption spending, which has a short-term economic impact, and investment spending, which is intended to have a positive effect on long-term private sector economic growth. We have argued that the allocation of federal spending between investment and consumption is important and deserves explicit consideration. However, the current budget process does not prompt the executive branch or the Congress to make explicit decisions about how much spending should go toward long-term investment. The budget functions along which the resolution is structured represent one categorization by "mission," but they are not subdivided into consumption and investment. Appropriations subcommittees provide funding by department and agency in appropriations accounts that do not distinguish between investment and consumption spending.
In short, the investment/consumption decision is not one of the organizing themes for the budget debate. We have suggested that an appropriate and practical approach to supplement the budget's focus on macroeconomic issues would be to incorporate an investment component within the discretionary caps set by BEA. Such an investment component would direct attention to the trade-offs between consumption and investment but within the overall fiscal discipline established by the caps. It would provide policymakers with a new tool for setting priorities between the long term and the short term. Within the declining unified budget deficit path, a target could be established for the appropriate level of investment spending to ensure that it is considered formally in the budget process.

In addition to changes aimed at improving the focus on the long term, we have continued to emphasize the importance of enforceability, accountability, and transparency. We describe these three elements together because it is difficult to have accountability without an enforcement mechanism and without transparency to make the process understandable to those outside it. Accountability in this context has several dimensions: accountability for the full costs of commitments that are to be made and accountability for actions taken—which requires targeting enforcement to actions. In addition, it may encompass the broader issue of taking responsibility for responding to unexpected events. Transparency is important not only because in a democracy the budget debate should be accessible to the citizenry but also because without it, there can be little ultimate accountability to the public.

In this area, as in others I discuss today, there has been progress. For example, enforcement provisions in BEA have worked within their scope: the discretionary caps and controls on expanding entitlements have held. The design of the law has provided accountability for the costs of actions taken and for compliance with rules. However, accountability for the worse-than-expected deficits in the past has been diffuse. For credibility and for success, we need to consider bringing more responsibility for the results of unforeseen events into the system.

We have previously suggested that Congress might want to consider introducing a "lookback" into its system of budgetary controls. Under such a process, the current Congressional Budget Office (CBO) deficit projections would be compared to those projected at the time of a prior deficit reduction agreement and/or the most recent reconciliation legislation. For a difference exceeding a predetermined amount, the Congress would decide explicitly—through a vote—whether to accept the slippage or to act to bring the deficit path closer to the original goal by mandating actions to narrow this gap. Alternatively, the President could be required to recommend whether none, some, or all of the overage should be recouped. The Congress could be required to vote either on the President's proposal or an alternative one. Neither of these "lookback" processes determines an outcome; both seek to increase accountability for decisions about the path of federal spending. Taken together, the changes we have suggested, which could be made within the current budget process, would move us toward increased focus on important decisions and increased accountability for those decisions.
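To illustrate the arithmetic of the lookback just described, the following is a minimal sketch; the trigger threshold and deficit figures are hypothetical assumptions rather than values drawn from any statute or agreement.

```python
def lookback(projected_deficit, agreed_deficit, threshold):
    """Compare the current CBO deficit projection with the path agreed to
    in a prior deficit reduction agreement or reconciliation bill.

    Returns the slippage and whether it exceeds the predetermined amount
    that would trigger an explicit congressional vote. All figures are in
    billions of dollars; the threshold is an assumed policy parameter.
    """
    slippage = projected_deficit - agreed_deficit
    return slippage, slippage > threshold

# Hypothetical figures: the deficit is now projected at $210 billion
# against an agreed path of $180 billion, with a $25 billion trigger.
slippage, vote_required = lookback(210, 180, threshold=25)
print(f"Slippage: ${slippage} billion; explicit vote required: {vote_required}")
```

Exceeding the threshold would not itself dictate an outcome; it would simply force an explicit decision, which is the point of the procedure.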
Also, as discussed below, additional financial reporting and management reforms underway hold tremendous potential for greatly improving the quality of information available to enhance budget decision-making. The budget should be formulated using accurate and reliable financial data on actual spending and program performance. Audited financial statements and reports ought to be the source of these data. Ideally, we should expect such reports to address (1) the full costs of achieving program results, (2) the value of what the government owns and what it owes to others, (3) the government's ability to satisfy future commitments if current policies were continued, and (4) the government's ability to detect and correct problems in its financial systems and controls.

Unfortunately, financial accounting information to date has not always been reliable enough to use in federal decision-making or to provide the requisite public accountability for the use of taxpayers' money. Good information on the full costs of federal operations is frequently absent or extremely difficult to reconstruct, and reliable information on federal assets and liabilities is all too often lacking. While GAO has been actively urging improvements in this area for over 20 years, complete, useful financial reporting is not yet in place.

The good news is that tools are now being put in place that promise to get the federal government's financial house in order. First, beginning for fiscal year 1996, all major agencies, covering about 99 percent of the government's outlays, are required to prepare annual financial statements and have them audited. Second, an audited governmentwide financial statement is required to be produced every year starting with fiscal year 1997. Third, FASAB is recommending new federal accounting standards that will yield more useful and relevant financial statements and information.

The basis for much of this progress is the CFO Act's requirements for annual financial statement audits. Audits for a select group of agencies under the Act's original pilot program highlighted problems of uncollected revenues and billions of dollars of unrecognized liabilities and potential losses from such programs as housing loans, veterans compensation and pension benefits, and hazardous waste cleanup. Such audits are bringing important discipline to agencies' financial management and control systems. Thanks to the benefits achieved from these pilot audits, the Congress extended this requirement, in the 1994 Government Management Reform Act, to the government's 24 major departments and agencies. That act also mandated an annual consolidated set of governmentwide financial statements—to be audited by GAO—starting for fiscal year 1997. These statements will provide an overview of the government's overall costs of operations, a balance sheet showing the government's assets and liabilities, and information on its contribution to long-term economic growth and the potential future costs of current policies. These reports will provide policymakers and the public valuable information to assess the sustainability of federal commitments. The CFO Act also went beyond these auditing and reporting requirements to spell out an agenda of other long overdue reforms. It established a CFO structure in 24 major agencies and the Office of Management and Budget (OMB) to provide the necessary leadership and focus.
It also set expectations for the deployment of modern systems to replace existing antiquated, often manual, processes; the development of better performance and cost measures; and the design of results-oriented reports on the government's financial condition and operating performance by integrating budget, accounting, and program information. Work is also underway to incorporate performance measures into the reports and to develop reports more specifically tailored to the government's needs.

The creation of FASAB was the culmination of many years of effort to achieve a cooperative working relationship among the three principal agencies responsible for overall federal financial management—OMB, Treasury, and GAO. Its establishment represents a major stride forward because financial management can only improve if these principal agencies involved in setting standards, reporting, and auditing work together. As you know, FASAB was established in October 1990 by the Secretary of the Treasury, the Director of OMB, and me to consider and recommend accounting principles for the federal government. The 9-member board is composed of representatives from the three principals, CBO, the Department of Defense, one civilian agency (presently Energy), and three representatives from the private sector, including the Chairman, former Comptroller General Elmer B. Staats. FASAB recommends accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information, and comments from the public. OMB, Treasury, and GAO then decide whether to adopt the recommended standards; if they do, the standards are published by GAO and OMB and become effective.

FASAB will soon complete the federal government's first set of comprehensive accounting standards developed under this consensus approach. Key to the FASAB approach for developing these standards was extensive consultation with users of financial statements early in its deliberations to ensure that the standards will result in statements that are relevant both to the budget process and to agencies' accountability for resources. Users were interested in getting answers to questions on such topics as:

Budgetary integrity—What legal authority was provided to finance government activities and was it used correctly?

Operating performance—How much do programs cost and how were they financed? What was achieved? What are the government's assets and are they well managed? What are its liabilities and how will they be paid for?

Stewardship—Has the government's overall financial capacity to satisfy current and future needs and costs improved or deteriorated? What are its future commitments and are they being provided for? How will the government's programs affect the future growth potential of the economy?

Systems and control—Does the government have sufficient controls over its programs so that it can detect and correct problems?

The FASAB principals have approved eight basic standards and statements, which I will refer to as FASAB standards in my testimony today, and approval of the final one for revenue accounting is expected this spring. This will complete the body of basic accounting and cost accounting standards for all federal agencies to use in preparing financial reports and developing meaningful cost information. The basic standards and statements are as follows.
Objectives of Federal Financial Reporting—A statement of general concepts on the objectives of financial reporting by the U.S. government, providing the basic framework for the Board's work.

Entity and Display—A statement of general concepts on how to define federal financial reporting entities and what kinds of financial statements those entities should prepare.

Managerial Cost Accounting Concepts and Standards—A statement of general concepts combined with a statement of specific standards emphasizing the need to relate cost information with budget and financial information to provide better information for resource allocation and performance measurement.

Accounting for Selected Assets and Liabilities—A statement of specific standards for accounting for basic items such as cash, accounts receivable, and accounts payable.

Accounting for Direct Loans and Loan Guarantees—A statement of accounting standards responding to the Credit Reform Act of 1990.

Accounting for Inventory and Related Property—A statement of standards for accounting for inventories, stockpiled materials, seized and forfeited assets, foreclosed property, and goods held under price support programs.

Accounting for Liabilities of the Federal Government—A statement of standards for federal insurance and guarantee programs, pensions and post-retirement health care for federal workers, and other liabilities, including contingent liabilities.

Accounting for Property, Plant and Equipment—A statement of standards for accounting for the various types of property (including heritage assets), plant and equipment held by the government.

Accounting for Revenue and Other Financing Sources—A statement of standards for accounting for inflows of resources (whether earned, demanded, or donated) and other financing sources.

A standard for stewardship reporting is also scheduled for completion this spring. While not part of the package of basic standards, it will help inform decisionmakers about the magnitude of federal resources and financial responsibilities and the federal stewardship role over them.

The standards and new reports are being phased in over time. Some are effective now; all that have been issued will be effective for fiscal year 1998. OMB defines the form and content of agency financial statements in periodic bulletins to agency heads. The most recent guidance incorporates FASAB standards for selected assets and liabilities, credit programs, and inventory. In the fall, OMB will be issuing new guidance reflecting the rest of the FASAB standards. Since the enactment of the CFO Act, OMB's form and content guidance has stressed the use of narrative "Overview" sections preceding the basic financial statements as the best way for agencies to relate mission goals and program performance measures to financial resources. Each financial statement includes an Overview describing the agency, its mission, activities, accomplishments, and overall financial results and condition. It also should discuss what, if anything, needs to be done to improve either program or financial performance, including an identification of programs or activities that may need significant future funding. OMB also requires that agency financial statements include a balance sheet, a statement of operations, and a statement reconciling expenses reported on the statement of operations to related amounts presented in budget execution reports.
Based on FASAB’s standards, OMB is making efforts to design new financial reports that contain performance measures and budget data to provide a much needed, additional perspective on the government’s actual performance and its long-term financial prospects. Financial reports based on FASAB’s standards will provide valuable information to help sort out various kinds of long-term claims. The standards envision new reports on a broad range of liabilities and liability-like commitments and assets and asset-like spending. Liabilities, such as the federal debt, would be reported on a balance sheet, along with assets owned by federal agencies, like buildings. recognition as liabilities on the balance sheet. FASAB is still considering what types of estimates would be most useful if stewardship reporting is applied to social insurance. To give a picture of the government’s capacity to sustain current public services, stewardship reporting will also include 6-year projections of receipt and outlay data for all programs based on data submitted for the President’s budget. Stewardship reports based on FASAB standards would also provide information on federal investments intended to have future benefits for the nation, thus providing actual data on the budget’s investment component that GAO has recommended and which I discussed earlier. Stewardship reporting would cover federal investments and some performance information for programs intended to improve the nation’s infrastructure, research and development, and human capital due to their potential contribution to the long-term productive capacity of the economy. These kinds of activities would not be reflected on the balance sheet because they are not assets owned by the federal government but rather programs and subsidies provided to state and local governments and the private sector for broader public purposes. Stewardship reporting recognizes that, although these investments lack the traditional attributes of assets, such programs warrant special analysis due to their potential impact on the nation’s long-term future. Linking costs to the reported performance levels is the next challenge. FASAB’s cost accounting standards—the first set of standards to account for costs of federal government programs—will require agencies to develop measures of the full costs of carrying out a mission or producing products or services. Thus, when implemented, decisionmakers would have information on the costs of all resources used and the cost of support services provided by others to support activities or programs—and could compare these costs to various levels of program performance. Perseverance will be required to sustain the current momentum in improving financial management and to successfully overcome decades of serious neglect in fundamental financial management operations and reporting methods. Implementing FASAB standards will not be easy. FASAB has allowed lead time for implementing the standards so that they can be incorporated into agencies’ systems. Nevertheless, even with this lead time, agencies may have difficulty in meeting the schedule. It is critical that the Congress and the executive branch work together to make implementation successful. As the federal government continues to improve its accountability and reporting of costs and performance, the more useful and reliable data need to be used to influence decisions. That brings me to the task of better integrating financial data and reports into the budget decision-making process. 
The ultimate goal of more reliable and relevant financial data is to promote more informed decision-making. For this to happen, the financial data must be understood and used by program managers and budget decisionmakers. The changes underway to financial reporting have been undertaken with a goal of making financial data more accessible to these decisionmakers. The budget community’s involvement in the FASAB standard-setting process has contributed to this. Still, the future challenge remains to further integrate financial reports with the budget to enhance the quality and richness of the data considered in budget deliberations. Improving the linkages between accounting and budgeting also calls for considering certain changes in budgeting such as realigned account structures and the selective use of accrual concepts. The chief benefit of improving this linkage will be the increased reliability of the data on which we base our management and budgetary decisions. The new financial reports will improve the reliability of the budget numbers undergirding decisions. Budgeting is a forward-looking enterprise, but it can clearly benefit from better information on actual expenditures and revenue collection. Under FASAB standards, numbers from the budget will be included in basic financial statements and thus will be audited for the first time. Having these numbers audited was one of the foremost desires of budget decisionmakers consulted in FASAB’s user needs study and stems from their suspicion that the unaudited numbers may not always be correct. The new financial reports will also offer new perspectives and data on the full costs of program outputs and agency operations that are currently not reported in the cash-based budget. Information on full costs generated pursuant to the new FASAB standards would provide decisionmakers a more complete picture of actual past program costs and performance when they are considering the appropriate level of future funding. For example, the costs of providing Medicare are spread among at least three budget accounts. Financial reports would pull all the relevant costs together. The different account structures that are used for budget and financial reporting are a continuing obstacle to using these reports together and may prevent decisionmakers from fully benefiting from the information in financial statements. Unlike financial reporting, which is striving to apply the full cost concept when reporting costs, the budget account structure is not based on a single unifying theme or concept. The current budget account structure evolved over time in response to specific needs. The budget contains over 1,300 accounts. They are not equal in size; nearly 80 percent of the government’s resources are clustered in less than 5 percent of the accounts. Some accounts are organized by the type of spending (such as personnel compensation or equipment) while others are organized by programs. Accounts also vary in their coverage of cost, with some including both program and operating spending while others separate salaries and expenses from program subsidies. Or, a given account may include multiple programs and activities. When budget account structures are not aligned with the structures used in financial reporting, additional analyses or crosswalks would be needed so that the financial data could be considered in making budget decisions. 
If the Congress and the executive branch reexamine the budget account structure, the question of trying to achieve a better congruence between budget accounts and the accounting system structure, which is tied to performance results, should be considered. In addition to providing a new, full cost perspective for programs and activities, financial reporting has prompted improved ways of thinking about costs in the budget. For the most part, the budget uses the cash basis, which recognizes transactions when cash is paid or received. Financial reporting uses the accrual basis, which recognizes transactions when commitments are made, regardless of when the cash flows. Cash-based budgeting is generally the best measure to reflect the short-term economic impact of fiscal policy as well as the current borrowing needs of the federal government. And for many transactions, such as salaries, costs recorded on a cash basis do not differ appreciably from accrual. However, for a select number of programs, cash-based budgeting does not adequately reflect the future costs of the government’s commitments or provide appropriate signals on emerging problems. For these programs, accrual-based reporting may improve budgetary decision-making. The accrual approach records the full cost to the government of a decision—whether to be paid now or in the future. As a result, it prompts decisionmakers to recognize the cost consequences of commitments made today. Accrual budgeting is being done under the Credit Reform Act for credit programs such as the federal family education loan program and the rural electrification and telephone direct loan program. It may be appropriate to extend its use to other programs such as federal insurance programs—an issue we are currently studying at the request of the Chairman, House Budget Committee. Our work to date has revealed shortcomings with cash-based budgeting for insurance programs, but also highlighted difficulties in estimating future costs for some of them due to the lack of adequate data or to sensitivity to the assumptions used to model future costs. The potential distortions arising from the cash-based approach must be weighed against the risks and uncertainties involved in estimating longer-term accrued costs for some programs. Our upcoming report on budgeting for insurance will address these issues. Small changes in the right direction are important, but to make the kind of difference we are all seeking will require pulling all this together for budget and oversight. Thanks in large part to the legislative impetus of the CFO Act and GPRA, decisionmakers will ultimately have available unprecedented, reliable information on both the financial condition of programs and operations as well as the performance and costs of these activities. While these initiatives carry great potential, they require continued support by the agencies and the Congress. GPRA set forth the major steps federal agencies need to take towards a results-oriented management approach. They are to (1) develop a strategic plan, (2) establish performance measures focused on “outcomes” or results expressed in terms of the real difference federal programs make in people’s lives and use them to monitor progress in meeting strategic goals, and (3) link performance information to resource requirements through annual performance plans. I have supported the intent of GPRA and believe that it offers great potential for enhancing decision-making and improving the management of federal programs. 
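Returning briefly to the accrual approach described above, the following is a minimal sketch of a credit-reform-style subsidy cost calculation; the loan terms, expected repayment, and discount rate are hypothetical, chosen only to show how an up-front accrued cost differs from a cash outlay.

```python
def subsidy_cost(principal, rate, years, discount_rate, recovery_fraction):
    """Accrual (credit reform) view: the up-front subsidy cost is the
    disbursement minus the present value of expected repayments.

    Assumes a single balloon repayment after `years` and that only
    `recovery_fraction` of the promised amount is ultimately repaid;
    both simplifications are for illustration only.
    """
    promised_repayment = principal * (1 + rate) ** years
    expected_repayment = promised_repayment * recovery_fraction
    pv_repayment = expected_repayment / (1 + discount_rate) ** years
    return principal - pv_repayment

# Hypothetical direct loan: $1,000 at 3 percent for 5 years, 90 percent
# expected recovery, discounted at 5 percent.
cost = subsidy_cost(principal=1000, rate=0.03, years=5,
                    discount_rate=0.05, recovery_fraction=0.9)
print(f"Up-front subsidy cost: ${cost:.2f} versus a $1,000 cash outlay")
```

On a cash basis, the budget would record the full $1,000 outlay at disbursement and the repayments years later; on an accrual basis, only the expected long-run cost to the government (about $183 under these assumed terms) is recorded when the commitment is made.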
A growing number of federal agencies are beginning to see that a focus on outcomes can lead to dramatic improvements in effectiveness. However, our work also has shown that a fundamental shift in focus to include outcomes does not come quickly or easily. The early experiences of many GPRA pilots show that outcomes can be very difficult to define and measure. The pilots also found that a focus on outcomes can require major changes in the services that agencies provide and the processes they use to provide those services. Given that the changes envisioned by GPRA do not come quickly or easily, strong and sustained congressional attention to GPRA implementation is critical. Without it, congressional and executive branch decisionmakers may not obtain the information they need as they seek to create a government that is more effective, efficient, and streamlined. Authorization, appropriation, budget, and oversight committees all have key interests in ensuring that GPRA is successful because, once fully implemented, it should provide valuable data to help inform the decisions that each committee must make.

OMB has attempted to prompt progress by giving special emphasis in its budget submission guidance to increasing the use of information on program performance in budget justifications. In preparation for the fiscal year 1997 budget cycle, OMB held performance reviews last May with agencies on performance measures and in September 1995 issued guidance on preparing and submitting strategic plans. Further progress in implementing GPRA will occur as performance measures become more widespread and agencies begin to use audited financial information in the budget process to validate and assess agency performance.

GAO, OMB, and the CFO Council have also considered how best to report data and information to decisionmakers. While there are myriad legislatively mandated reporting requirements under separate laws, such as GPRA, the Federal Managers' Financial Integrity Act, the CFO Act, and the Prompt Pay Act, decisionmakers need a single report relating performance measures, costs, and the budget. This reporting approach is consistent with the CFO Council's proposal for an Accountability Report, which OMB is pursuing. On a pilot basis, OMB is having six agencies produce Accountability Reports providing a comprehensive picture of each agency's performance pursuant to its stated goals and objectives. The ultimate usefulness of the Accountability Report will hinge on its specific content and the reliability of information presented. We will work with OMB and agencies throughout the pilot program. We agree with the overall streamlined reporting concept and believe that, to be most useful, the Accountability Report must include an agency's financial statements and related audit reports. Accountability reports could then be used as the basis for annual oversight hearings, something I have long advocated. Such serious scrutiny of programs and activities is especially important as we seek to reduce the deficit. Oversight hearings based on complete sets of reports could be the basis for considering changes in federal roles and in program design as well as reviewing the adequacy of agencies' accountability and performance. Finding the most effective reporting and analytical approaches will require a great deal of collaboration and communication. Appropriations, budget, and authorizing committees need to be full partners in supporting the implementation of these initiatives.
The new financial reports based on FASAB's recommended standards will provide much-needed additional perspective on the long-term prospects for government programs and finances. They can be used with other kinds of actuarial and economic analyses already available in making budget decisions.

In conclusion, reforms are needed on three fronts—in the budget process, in accountability and reporting for costs and performance, and in using the improved reports to better inform policy and budget decisions. Improved financial management and reports are essential to improving the government's ability to provide accountability for public resources. Continuing fiscal pressures will place a premium on the proper stewardship of increasingly scarce public resources. Recent efforts to improve federal financial reporting will, if properly implemented, provide the tools needed to redress long-standing weaknesses. Better information on the current and future stakes involved in our decisions may help policymakers make decisions focused more on the long-term consequences. The public also stands to gain from these initiatives, both from improved accountability for public resources and more informed decisions. Mr. Chairman, this concludes my statement. I would be happy to respond to questions.
GAO discussed how the federal government could improve its financial management and budgets. GAO noted that: (1) over the last 6 years, the government has established a solid framework for improving its financial management through legislative mandates, an accounting standards advisory board, and budget process improvements; (2) the budget process should provide a long-term perspective and link fiscal policy to the long-term economic outlook; (3) the Administration and Congress need to make explicit decisions about investment and consumption spending and identify them within the budget; (4) budget enforcement, accountability, and transparency need to be enhanced, possibly through a look-back procedure and particularly in the areas of deficit and mandatory spending; (5) to enhance budget decisionmaking, agency and governmentwide financial statement audits should provide accurate and reliable financial data on actual spending and program performance; (6) the advisory board has approved eight government accounting standards addressing such areas as budgetary integrity, operating performance, and systems and control and will complete a stewardship standard by the spring of 1996; (7) the Office of Management and Budget is designing new financial reports to increase information on actual performance and long-term financial prospects; and (8) realigning account structures and selective use of accrual concepts in the budget would link accounting and budgeting and improve budget and management decisionmaking by disclosing the full cost of programs and operations.
The number of families receiving welfare cash assistance fell significantly after the creation of TANF, decreasing by almost 50 percent from a monthly average of 3.2 million families in fiscal year 1997 to a low of 1.7 million families in fiscal year 2008 (see fig. 1). Several factors likely contributed to this caseload decline, such as the strong economy of the 1990s, declines in the number of eligible families participating, concurrent policy changes, and state implementation of TANF requirements, including those related to work participation. However, since fiscal year 2008 and the beginning of the recent economic recession, the number of families receiving TANF cash assistance has increased by 13 percent to a monthly average of 1.9 million families in fiscal year 2010. Among the types of families receiving TANF cash assistance, the number of two-parent families increased at a faster rate during this period than the number of single-parent families or child-only cases, in which only the children receive benefits.

The number of child-only cases increased slightly between fiscal years 2000 and 2008; however, these cases make up an increasing proportion of the total number of families receiving cash assistance because TANF cases with adults in the assistance unit have decreased substantially. Specifically, the number of TANF child-only cases increased from approximately 772,000 cases to approximately 815,000 cases, but the number of families with adults receiving assistance decreased from about 1.5 million to about 800,000 cases (fig. 2). As a result, the share of child-only cases in the overall TANF caseload increased from about 35 percent to about half.

There are four main categories of "child-only" cases in which the caregiver (a parent or non-parent) does not receive TANF benefits: (1) the parent is receiving Supplemental Security Income; (2) the parent is a noncitizen or a recent legal immigrant; (3) the child is living with a non-parent caregiver, often a relative; and (4) the parent has been sanctioned and removed from the assistance unit for failing to comply with program requirements, and the family's benefit has been correspondingly reduced. Families receiving child-only assistance are generally not subject to work requirements.

Between fiscal years 2000 and 2008, increases in two of the categories were statistically significant: children living with parents who were ineligible because they received SSI benefits and children living with parents who were ineligible because of their immigration status. Cases in which the parents were ineligible due to immigration status almost doubled and increased from 11 percent of the TANF child-only caseload in fiscal year 2000 to 19 percent in fiscal year 2008 (see fig. 3). This increase of 8 percentage points is statistically significant and represents an increase from about 83,000 in fiscal year 2000 to over 155,000 in fiscal year 2008, with the greatest increase occurring in California. In some cases, however, the relationship between the child and the adult living in the family is not known. The number of these cases decreased significantly over the same period, and it is possible that some of the increase in cases with ineligible parents due to SSI receipt or immigration status resulted from better identification of previously unknown caregivers. However, given available data, we were unable to determine how much of the increase was due to better reporting versus an actual increase in the number of cases.
Both the composition of the overall TANF caseload and the composition of the TANF child-only caseload vary by state. For example, in December 2010, 10 percent of TANF cases in Idaho were single-parent families, compared to almost 80 percent in Missouri. In both of these states, child-only cases comprised the rest of their TANF caseloads. Concerning the variation in child-only cases by state, almost 60 percent of TANF child-only cases in Tennessee included children living with non-parent caregivers, compared to 31 percent in Texas, according to state officials.

As the overall number of families receiving TANF cash assistance has declined, so has state spending of TANF funds on cash assistance. TANF expenditures for cash assistance declined from about 73 percent of all expenditures in fiscal year 1997 to 30 percent in fiscal year 2009 (see fig. 4) as states shifted spending to purposes other than cash assistance, which is allowed under the law. States may use TANF funds to provide cash assistance as well as a wide range of services that further the program's goals, including child care and transportation assistance, employment programs, and child welfare services. While some of this spending, such as that for child care assistance, relates directly to helping current and former TANF cash assistance recipients work and move toward self-sufficiency, other spending is directed to a broader population that never received TANF cash assistance.

Tracking the number of families receiving monthly cash assistance—the traditional welfare caseload—no longer captures the full picture of families being assisted with TANF funds. As states began providing a range of services beyond cash assistance to other low-income families, data collection efforts did not keep pace with the evolving program. Because states are primarily required to report data to HHS on families receiving TANF cash assistance but not other forms of assistance, gaps exist in the information gathered at the federal level to understand whom TANF funds are serving and what services are provided, and to ensure state accountability. For example, with the flexibility allowed under TANF, states have used a significant portion of their TANF funds to augment their child care subsidy programs. However, states are not required to report on all families provided TANF-funded child care, leaving an incomplete picture of the number of children receiving federally funded child care subsidies. Overall, data on the total numbers of families served with TANF funds and on how states use TANF funds to help families and achieve program goals in ways beyond their welfare-to-work programs are generally unavailable. When we first reported on these data limitations to this Subcommittee in 2002, we noted that state flexibility to use TANF funds in creative ways to help low-income families has resulted in many families being served who are not captured in the data reported to the federal government. At that time, it was impossible to produce a full count of all families served with TANF funds, and that data limitation continues today.

Because job preparation and employment are key goals of TANF, one of the federal measures of state TANF programs' performance is the proportion of TANF cash assistance recipients engaged in allowable work activities.
Generally, states are held accountable for ensuring that at least 50 percent of all families receiving TANF cash assistance participate in one or more of the 12 specified work activities for an average of 30 hours per week. However, before DRA, concerns had been raised about the consistency and comparability of states' work participation rates and the underlying data on TANF families participating in work activities. Although DRA was generally expected to strengthen TANF work requirements and improve the reliability of work participation data and program integrity by implementing federal definitions of work activities and participation verification requirements, the proportion of families receiving TANF cash assistance who participated in work activities for the required number of hours each week changed little after DRA, as did the types of work activities in which they most frequently participated. Specifically, in fiscal years 2007 through 2009, from 29 to 30 percent of TANF families participated in work activities for the required number of hours, which is similar to the 31 to 34 percent of families who did so in each year from fiscal years 2001 through 2006. Among families that met their work requirements both before and after DRA, the majority participated in unsubsidized employment. The next most frequent work activities were job search and job readiness assistance, vocational educational training, and work experience.

Although fewer than 50 percent of all families receiving TANF cash assistance participated in work activities for the required number of hours both before and after DRA, many states have been able to meet their work participation rate requirements because of various policy and funding options allowed in federal law and regulations. Specifically, factors that influenced states' work participation rates included not only the number of families receiving TANF cash assistance who participated in work activities, but also

1. decreases in the number of families receiving TANF cash assistance,

2. state spending on TANF-related services beyond what is required,

3. state policies that allow working families to continue receiving TANF cash assistance, and

4. state policies that provide nonworking families cash assistance outside of the TANF program.

Beyond families' participation in the 12 work activities, the factor that states have commonly relied on to help them meet their required work participation rates is the caseload reduction credit. Specifically, decreases in the numbers of families receiving TANF cash assistance over a specified time period are accounted for in each state's caseload reduction credit, which then effectively lowers the state's required work participation rate below 50 percent. For example, if a state's caseload decreases by 20 percent during the relevant time period, the state receives a caseload reduction credit equal to 20 percentage points, which results in the state work participation rate requirement being adjusted from 50 to 30 percent. While state caseload declines have generally been smaller after DRA because the act changed the base year for the comparison from fiscal year 1995 to fiscal year 2005, many states are still able to use caseload declines to help them lower their required work participation rates. For example, in fiscal year 2009, 38 of the 45 states that met their required work participation rates for all TANF families did so in part because of their caseload declines (see fig. 5).
However, while states’ caseload reduction credits before DRA were based primarily on their caseload declines, after DRA, states’ spending of their own funds on TANF-related services also became a factor in some states’ credits. Specifically, states are required to spend a certain amount of their funds every year in order to receive their federal TANF block grants. However, if states spend in excess of the required amount, they are allowed to correspondingly increase their caseload reduction credits. In fiscal year 2009, 32 of the 45 states that met their required work participation rates for all families receiving cash assistance claimed state spending beyond what is required toward their caseload reduction credits. In addition, 17 states would not have met their rates without claiming these expenditures (see fig. 5). Among the states that needed to rely on excess state spending to meet their work participation rates, most relied on these expenditures to add between 1 and 20 percentage points to their caseload reduction credit s (see fig. 6). As traditional cash assistance caseloads declined and states broadened the types of services provided and the number of families served, existing data collection efforts resulted in an incomplete picture of the TANF program at the national level. In effect, there is little information on the numbers of people served by TANF funds other than cash assistance and no real measure of how services supported by TANF funds meet the goals of welfare reform. This leaves the federal government with underestimates of the numbers served and potentially understated results from these funds. In addition, as before DRA, states have continued to take advantage of the various policy and funding options available to increase their TANF work participation rates. As a result, while measuring work participation of TANF recipients is key to understanding the success of state programs in meeting one of the federal purposes of TANF, whether states met the required work participation rates provides only a partial picture of state TANF programs’ effort and success in engaging recipients in work activities. Although the DRA changes to TANF work requirements were expected to strengthen the work participation rate as a performance measure and move more families toward self-sufficiency, the proportion of TANF recipients engaged in work activities remains unchanged. States’ use of the modifications currently allowed in federal law and regulations, as well as states’ policy choices, have diminished the rate’s usefulness as the national performance measure for TANF, and shown it to be limited as an incentive for states to engage more families in work. Lack of complete information on how states use funds to aid families and to measure work participation hinders decision makers in considering the success of TANF and what trade-offs might be involved in any changes to program requirements. In addressing these issues, care must to be taken to ensure that data requirements are well thought out and do not present an unreasonable burden on state programs. We provided drafts of the reports we drew on for this testimony to HHS for its review, and copies of the agency’s written responses can be found in the appendices of the relevant reports. We also provided HHS a draft of this testimony for technical comments on the new information on child- only TANF cases and updated TANF work participation data. HHS had no technical comments. 
Chairman Davis and Ranking Member Doggett, and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions you may have. For questions about this statement, please contact Kay E. Brown at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include James Bennett, Rachel Frisk, Alex Galuten, Gale Harris, Jean McSween, and Cathy Roark.
The Temporary Assistance for Needy Families (TANF) program, created in 1996, is one of the key federal funding streams provided to states to assist low-income families. A critical aspect of TANF has been its focus on employment and self-sufficiency, and the primary means to measure state efforts in this area has been TANF's work participation requirements. When the Deficit Reduction Act of 2005 (DRA) reauthorized TANF, it also made changes that were generally expected to strengthen these work requirements. Given the impending extension or reauthorization of TANF, this testimony primarily draws on previous GAO work to focus on (1) how the welfare caseload and related spending have changed since TANF was created and (2) how states have met work participation rates since DRA. To address these issues, in work conducted from August 2009 to May 2010, GAO analyzed state data reported to the Department of Health and Human Services (HHS); surveyed state TANF administrators in 50 states and the District of Columbia; conducted site visits to Florida, Ohio, and Oregon, selected to provide geographic diversity and variation in TANF program characteristics; and reviewed relevant federal laws, regulations, and research. In July 2011, GAO updated this work by analyzing state data reported to HHS since that time. In addition, GAO gathered information on caseload changes through its forthcoming work on TANF child-only cases. Between fiscal years 1997 and 2008, the total number of families receiving welfare cash assistance decreased by almost 50 percent. At the same time, there have also been changes in the types of families receiving cash assistance. Specifically, child-only cases--in which the children alone receive benefits--increased from about 35 percent of the overall TANF caseload in 2000 to about half in 2008. As the number of families receiving TANF cash assistance declined, state spending shifted to support purposes other than cash assistance, which is allowed under the law. However, because states are primarily required to report data to HHS on families receiving cash assistance and not on families receiving other forms of aid funded by TANF, this shift in spending has left gaps in the information gathered at the federal level to understand who TANF funds are serving and ensure state accountability. Nationally, the proportion of TANF families who met their work requirements changed little after DRA was enacted, and many states have been able to meet their work participation rate requirements because of various policy and funding options allowed in federal law and regulations. Although federal law generally requires that a minimum of 50 percent of families receiving TANF cash assistance in each state participate in work activities, both before and after DRA, about one-third of TANF families nationwide met these requirements. Nonetheless, many states have been able to meet their required work participation rates because of policy and funding options. For example, states receive a caseload reduction credit, which generally decreases each state's required work participation rate by the same percentage that state caseloads decreased over a specified time period. States can further add to their credits, and decrease their required work rates, by spending their own funds on TANF-related services beyond the amount that is required to receive federal TANF funds. 
In fiscal year 2009, 7 states met their rates because 50 percent or more of their TANF families participated in work activities for the required number of hours. However, when states' caseload decreases and additional spending were included in the calculation of state caseload reduction credits, 38 other states were also able to meet their required work participation rates in that year.
In December 2007, the United States entered what has turned out to be the deepest recession since the end of World War II. In responding to this downturn, the Recovery Act employs a combination of tax relief and government spending. About one-third of the funds provided by the act are for tax relief to individuals and businesses; one-third is in the form of temporary increases in entitlement programs to aid people directly affected by the recession and provide some fiscal relief to states; and one-third falls into the category of grants, loans, and contracts. As of September 30, 2009, approximately $173 billion, or about 22 percent, of the $787 billion provided by the Recovery Act had been paid out by the federal government.

Nonfederal recipients of Recovery Act-funded grants, contracts, and loans are required to submit reports with information on each project or activity, including the amount and use of funds and an estimate of jobs created or retained. Of the $173 billion paid out, about $47 billion—a little more than 25 percent—is covered by this recipient report requirement. Neither individuals nor recipients receiving funds through entitlement programs, such as Medicaid, or through tax programs are required to report. In addition, the required reports cover direct jobs created or retained as a result of Recovery Act funding; they do not include the employment impact on materials suppliers (indirect jobs) or on the local community (induced jobs), as shown in figure 1.

To implement the recipient reporting data requirements, OMB has worked with the Recovery Accountability and Transparency Board (Recovery Board) to deploy a nationwide data collection system at www.federalreporting.gov, while the data reported by recipients are available to the public for viewing and downloading on www.recovery.gov (Recovery.gov). OMB's June 22, 2009, guidance on recipient reporting also includes a requirement for data quality review. Prime recipients have been assigned the ultimate responsibility for data quality checks and the final submission of the data. Because this is a cumulative reporting process, additional corrections can take place on a quarterly basis. The first of the required recipient reports covers cumulative activity since the Recovery Act's passage in February 2009 through the quarter ending September 30, 2009. As shown in figure 2, OMB specified time frames for different stages in the reporting process: for this current report, prime recipients and delegated subrecipients were to prepare and enter their information from October 1 to October 10; prime recipients were able to review the data for completeness and accuracy from October 11 to October 21, and a federal agency review period began October 22. The final recipient reporting data for the first round of reports were first made available on October 30.

To assess the reporting process and data quality efforts, GAO performed an initial set of edit checks and basic analyses on the final recipient report data that first became available at Recovery.gov on October 30, 2009. We built on information collected at the state, local, and program level as part of our bimonthly reviews of selected states' and localities' uses of Recovery Act funds. These bimonthly reviews focus on Recovery Act implementation in 16 states and the District of Columbia, which contain about 65 percent of the U.S. population and are estimated to receive collectively about two-thirds of the intergovernmental federal assistance funds available through the Recovery Act.
To understand state quality review and reporting procedures, we visited the 16 selected states and the District of Columbia during late September and October 2009 and discussed with prime recipients the projects associated with 50 percent of the total funds reimbursed as of September 4, 2009, for each state in the Federal-Aid Highway Program administered by the Department of Transportation (DOT). Prior to the start of the reporting period on October 1, we obtained information on prime recipients' plans for the jobs data collection process. After the October 10 data reporting period, we went back to see if prime recipients had followed their own plans and subsequently talked with at least two subrecipients to gauge their reactions to the reporting process and assess the documentation they were required to submit. We gathered and examined issues raised by recipients in these jurisdictions regarding reporting and data quality and interviewed recipients on their experiences using the Web site reporting mechanism. During the interviews, we looked at state plans for managing, tracking, and reporting on Recovery Act funds and activities. In a similar way, we examined a nonprobability sample of Department of Education (Education) Recovery Act projects at the prime and subrecipient level. We also collected information from selected transit agencies and housing authorities as part of our bimonthly Recovery Act reviews.

To gain insight into and understanding of quality review at the federal level, we interviewed federal agency officials who have responsibility for ensuring a reasonable degree of quality across their program's recipient reports. We assessed the reports from the Inspectors General (IG) on Recovery Act data quality reviews from 15 agencies. We are also continuing to monitor and follow up on some of the major reporting issues identified in the media and by other observers. For example, a number of press articles have discussed concerns with the jobs reporting done by Head Start grantees. According to a Health and Human Services (HHS) Recovery Act official, HHS is working with OMB to clarify the reporting policy as it applies to Head Start grantees. We will be reviewing these efforts as they move forward.

For our discussion of how macroeconomic data and methods and recipient reporting together can be used to assess the employment effects of the Recovery Act, we analyzed economic and fiscal data using standard economic principles and reviewed the economic literature on the effect of monetary and fiscal policies for stimulating the economy. We also reviewed the guidance that OMB developed for Recovery Act recipients to follow in estimating the effect of funding activities on employment, reviewed reports that the Council of Economic Advisers (CEA) issued on the macroeconomic effects of the Recovery Act, and interviewed officials from CEA, OMB, and the Congressional Budget Office (CBO).

Our work was conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audits to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As detailed in our report, our analysis and fieldwork indicate there are significant issues to be addressed in reporting, data quality, and consistent application of OMB guidance in several areas.
Erroneous or questionable data entries. Many entries merit further attention due to an unexpected or atypical data value or relationship between data.

Quality review by federal agencies and prime recipients.

o Coverage: While OMB estimates that more than 90 percent of recipients reported, questions remain about the other 10 percent.

o Review: Over three quarters of the prime reports were marked as having undergone review by a federal agency, while less than 1 percent were marked as having undergone review by the prime recipient.

Issues in the calculation of full-time equivalents (FTEs). Different interpretations of OMB guidance compromise the ability to aggregate the data.

We performed an initial set of edit checks and basic analyses on the recipient report data available for download from Recovery.gov on October 30. As part of our review, we examined the relationship between recipient reports showing the presence or absence of any full-time equivalent (FTE) counts with the presence or absence of funding amounts shown in either or both data fields for "amount of Recovery Act funds received" and "amount of Recovery Act funds expended." Forty-four percent of the prime recipient reports showed an FTE value. However, as shown in table 1, we identified 3,978 prime recipient reports where FTEs were reported but no dollar amount was reported in the data fields for amount of Recovery Act funds received and amount of Recovery Act funds expended. These records account for 58,386 of the total 640,329 FTEs reported. There were also 9,247 reports that showed no FTEs but did show some funding amount in either or both of the funds received or expended data fields. The total value of funds reported in the expenditure field on these reports was $965 million. Those recipient reports showing FTEs but no funds and funds but no FTEs constitute a set of records that merits closer examination to understand the basis for these patterns of reporting. Our review also identified a number of cases in which other anomalies suggest a need for review: discrepancies between award amounts and the amounts reported as received, implausible amounts, or misidentification of awarding agencies. While these occurred in a relatively small number of cases, they indicate the need for further data quality efforts.

OMB guidance assigns responsibility for data quality to the prime recipient and provides for federal agency review. A correction could be initiated by either the prime recipient or the reviewing agency. OMB requires that federal agencies perform limited data quality reviews of recipient data to identify material omissions and significant reporting errors and notify the recipients of the need to make appropriate and timely changes to erroneous reports. The prime recipient report records we analyzed included data on whether the prime recipient and the agency reviewed the record in the data quality review time frames. Over three quarters of the prime recipient reports were marked as having undergone federal agency review. Less than 1 percent of the records were marked as having undergone review by the prime recipient. The small percentage reviewed by the prime recipients themselves during the OMB review time frame warrants further examination.
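Edit checks of this kind can be expressed as simple filters over the downloaded records. The sketch below is illustrative only: the column names are hypothetical stand-ins, since the actual Recovery.gov download defines its own field names.

```python
import pandas as pd

# Hypothetical field names standing in for the actual download columns.
reports = pd.DataFrame({
    "fte": [12.0, 0.0, 3.5, 0.0],
    "funds_received": [0.0, 250_000.0, 100_000.0, 0.0],
    "funds_expended": [0.0, 180_000.0, 95_000.0, 0.0],
})

no_funds = (reports["funds_received"] == 0) & (reports["funds_expended"] == 0)

# Reports showing FTEs but no Recovery Act funds received or expended.
fte_no_funds = reports[(reports["fte"] > 0) & no_funds]

# Reports showing funds received or expended but no FTEs.
funds_no_fte = reports[(reports["fte"] == 0) & ~no_funds]

print(len(fte_no_funds), "reports show FTEs but no funds")
print(len(funds_no_fte), "reports show funds but no FTEs")
print("expenditures on FTE-less reports:", funds_no_fte["funds_expended"].sum())
```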
While it may be the case that the recipients' data quality review efforts prior to initial submission of their reports were seen as not needing further revision during the review time frame, it may also be indicative of problems with the process of noting and recording when and how the prime recipient reviews occur and of setting the review flag. In addition, the report record data included a flag indicating whether a correction was initiated. Overall, slightly more than a quarter of the reports were marked as having undergone a correction during the period of review.

In its guidance to recipients for estimating employment effects, OMB instructed recipients to report only the direct employment effects, as "jobs created or retained," in a single number. Recipients are not expected to report the employment impact on materials suppliers ("indirect" jobs) or on the local community ("induced" jobs). OMB guidance stated that "the number of jobs should be expressed as 'full-time equivalents (FTEs),' which is calculated as total hours worked in jobs created or retained divided by the number of hours in a full-time schedule, as defined by the recipient." Consequently, recipients are expected to report the amount of labor hired, or not fired, as a result of having received Recovery Act funds. It should be noted that one FTE does not necessarily equate to one person's job. Organizations may choose to increase the hours of existing employees, for example, which increases employment but does not necessarily add a person to the payroll.

Problems with the interpretation of this guidance or the calculation of FTEs were among the most significant problems we found. Jobs created or retained expressed in FTEs raised questions and concerns for some recipients. While reporting employment effects as FTEs should allow for the aggregation of different types of jobs—part-time, full-time, or temporary—and different employment periods, if the calculations are not consistent, the ability to aggregate the data is compromised.

One source of inconsistency was variation in the period of performance used to calculate FTEs, which occurred in both the highway and education programs we examined. For example, in the case of federal highway projects, some had been ongoing for six months, while others started in September 2009. In attempting to address the unique nature of each project, DOT's Federal Highway Administration (FHWA) faced the issue of whether to report FTE data based on the length of time to complete the entire project (the project period of performance) or a standard period of performance, such as a calendar quarter, across all projects. According to FHWA guidance, which was permitted by OMB, FTEs reported for each highway project are expressed as an average monthly FTE. Because FTEs are calculated by dividing hours worked by the hours that represent a full-time schedule, a standard period of performance is important if numbers are to be added across programs. As an illustration, take a situation in which one project employed 10 people full time for 1 month, another project employed 10 people full time for 2 months, and a third project employed 10 people full time for 3 months. FHWA's use of average monthly FTE would result in FTEs being overstated compared with either OMB's June 22 guidance or standardizing the reports for one quarter, as the sketch following this paragraph works through.
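The following sketch, a minimal illustration rather than FHWA's or OMB's actual method, computes both sets of figures for the three hypothetical projects just described, using the FTE formula from OMB's guidance (hours worked divided by full-time-schedule hours, where the hours cancel out of the monthly calculation).

```python
# Illustrative comparison of two FTE reporting bases for the three projects
# described above (10 workers full time for 1, 2, and 3 months). Assumes a
# reporting quarter of 3 months; each tuple is (workers, months_worked).

projects = [(10, 1), (10, 2), (10, 3)]

# FHWA approach: average monthly FTE over each project's own active months.
# Hours cancel, so workers * months / months == workers for every project.
fhwa_fte = sum(workers * months / months for workers, months in projects)

# Standardized approach: FTE-months divided by the 3 months in the quarter.
quarter_fte = sum(workers * months / 3 for workers, months in projects)

print(f"FHWA average-monthly basis:  {fhwa_fte:.1f} FTEs")    # 30.0
print(f"Quarter-standardized basis:  {quarter_fte:.2f} FTEs")  # 20.00
```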
Under FHWA's approach, 30 FTEs would be reported (10 for each of the three projects); using a standardized measure, 20 FTEs would be reported (3-1/3 for the first project, 6-2/3 for the second, and 10 for the third). Conversely, if a project starts later than the beginning of the reporting period, applying OMB's June 22 guidance, which requires reporting of FTEs on a cumulative basis, could result in reporting fewer FTEs than under a standardized reporting period. In either case, failure to standardize on a consistent basis prevents meaningful comparison or aggregation of FTE data.

This was also an issue for education programs. For example, in California, two higher education systems calculated FTEs differently: one used a 2-month period as the basis for the FTE performance period, while the other used a full year. The result is almost a three-to-one difference in the number of FTEs reported for each university system in the first reporting period. Although Education provides alternative methods for calculating an FTE, in neither case does the guidance explicitly state the period of performance for the FTE.

OMB's decision to convert jobs into FTEs provides a consistent lens for viewing the amount of labor being funded by the Recovery Act, provided each recipient uses a standard time frame in calculating the FTE. The current OMB guidance, however, creates a situation in which, because there is no standard starting or ending point, an FTE provides an estimate for the life of the project. Until FTEs are normalized, aggregate numbers should be treated with caution, and the need for a standard period of performance is magnified when looking across programs and across states.

Recipients were also confused about how to count a job created or retained even when they knew the number of hours worked that were paid for with Recovery Act funds. While OMB's guidance explains that, in applying the FTE calculation for measuring the number of jobs created or retained, recipients will need the total number of hours worked that are funded by the Recovery Act, it could emphasize this relationship more thoroughly throughout the guidance.

While there were problems of inconsistent interpretation of the guidance, the reporting process went relatively well for highway projects; DOT had an established procedure for reporting prior to enactment of the Recovery Act. As our report shows, in the cases of Education and the Department of Housing and Urban Development, which do not have this prior reporting experience, we found more problems. State and federal officials are examining identified issues and have stated their intention to deal with them.

In our report, we make a number of recommendations to OMB to improve the consistency of FTE data collected and reported. OMB should continue to work with federal agencies to increase recipient understanding of the reporting requirements and application of the guidance.
Specifically, OMB should clarify the definition and standardize the period of measurement for FTEs and work with federal agencies to align their program guidance with OMB's guidance and across agencies; given its reporting approach, consider being more explicit that "jobs created or retained" are to be reported as hours worked and paid for with Recovery Act funds; and continue working with federal agencies and encourage them to provide or improve program-specific guidance to assist recipients, especially as it applies to the full-time equivalent calculation for individual programs.

Given some of the issues that arose in our review of the reporting process and data, we also recommend that OMB work with the Recovery Board and federal agencies to re-examine review and quality assurance processes, procedures, and requirements in light of experiences and identified issues with this round of recipient reporting and consider whether additional modifications need to be made and whether additional guidance is warranted.

In commenting on a draft of our report, OMB staff told us that OMB generally accepts the report's recommendations. OMB has undertaken a lessons-learned process for the first round of recipient reporting and will generally address the report's recommendations through that process. As recipient reporting moves forward, we will continue to review the processes that federal agencies and recipients have in place to ensure the completeness and accuracy of data, including reviewing a sample of recipient reports across various Recovery Act programs to assess the quality of the reported information. As existing recipients become more familiar with the reporting system and requirements, these issues may become less significant; however, communication and training efforts will need to be maintained and in some cases expanded as new recipients of Recovery Act funding enter the system. In addition to our oversight responsibilities specified in the Recovery Act, we are also reviewing how several federal agencies collect information and provide it to the public for selected Recovery Act programs, including any issues with the information's usefulness. Our subsequent reports will also discuss actions taken on the recommendations in this report and will provide additional recommendations, as appropriate.

While the recipient reports provide a real-time window on the use and results of Recovery Act spending, the data will represent only a portion of the employment effect, even after data quality issues are addressed. A fuller picture of the employment effect would include not only the direct jobs reported but also the indirect and induced employment gains resulting from government spending. In addition, the entitlement spending and tax benefits included in the Recovery Act also create employment. Therefore, both the data reported by recipients and other macroeconomic data and methods are helpful in gauging the overall employment effects of the stimulus.

Economists will use statistical models to estimate a range of potential effects of the stimulus program on the economy. In general, the estimates are based on assumptions about the behavior of consumers, business owners, workers, and state and local governments. Neither the recipients nor analysts can identify the impact of the Recovery Act with certainty because of the inability to compare the observed outcome with the unobserved, counterfactual scenario (in which the stimulus does not take place).
At the level of the national economy, models can be used to simulate the counterfactual, as CEA and others have done. At smaller scales, comparable models of economic behavior either do not exist or cover only a very small portion of all the activity in the macroeconomy.

Our report discusses a number of the issues that are likely to affect the impact of the Recovery Act, including the potential effect of different types of stimulus. We also discuss state and sectoral employment trends and note that the impact of the Recovery Act will vary across states. The employment effects of Recovery Act funds are likely to vary with the condition of a state's labor market, as measured by its unemployment rate. Labor markets in every state weakened over the course of the recession, but the degree to which this has occurred varies widely across states. Figure 3 illustrates this point: it shows the geographic distribution of the magnitude of the recession's impact on unemployment, as measured by the percentage change in unemployment between December 2007 and September 2009. The impact of funds allocated to state and local governments will also likely vary with states' fiscal conditions.

Finally, let me provide the committee with an update on allegations of fraud, waste, and abuse made to our FraudNet site. As of November 13, 2009, FraudNet had received 106 Recovery Act-related allegations that were considered credible enough to warrant further review. We referred 33 allegations to the appropriate agency Inspectors General for further review and investigation. Our Forensic Audits and Special Investigations unit is actively pursuing 8 allegations, which involve wasteful and improper spending; conflicts of interest; and grant, contract, and identity fraud. Another 9 are pending further review by our criminal investigators, and 15 were referred to other GAO teams for consideration in their ongoing work. We will continue to monitor these referrals and will inform the committee when outstanding allegations are resolved. The remaining 41 allegations were found not to address waste, fraud, or abuse; lacked specificity; were not Recovery Act related; or reflected only a disagreement with how Recovery Act funds are being disbursed. We consider these allegations resolved; no further investigation is necessary.

Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the report being issued today on the first set of recipient reports made available in October 2009 in response to the American Recovery and Reinvestment Act's section 1512 requirement. On October 30, Recovery.gov (the federal Web site on Recovery Act spending) reported that more than 100,000 recipients had reported hundreds of thousands of jobs created or retained. GAO is required to comment quarterly on the estimates of jobs created or retained as reported by direct recipients of Recovery Act funding from federal agencies. In the first quarterly GAO report, being released today, we address the following issues: (1) the extent to which recipients were able to fulfill their reporting requirements and the processes in place to help ensure recipient reporting data quality and (2) how macroeconomic data and methods, and the recipient reports, can be used to help gauge the employment effects of the Recovery Act. Because the recipient reporting effort will be an ongoing process of cumulative reporting, our review represents a snapshot in time.

At this juncture, given the national scale of the recipient reporting exercise and the limited time frames in which it was implemented, the ability of the reporting mechanism to handle the volume of data from a wide variety of recipients represents a solid first step in moving toward more transparency and accountability for federal funds; however, there is a range of significant reporting and quality issues that need to be addressed. Consequently, our report contains several recommendations to improve data quality that Office of Management and Budget (OMB) staff generally agreed to implement. We will continue to review the processes that federal agencies and recipients have in place to ensure the future completeness and accuracy of data reported. Finally, our report notes that because the recipient reports cover about one-third of Recovery Act funds, the data in those reports and other macroeconomic data and methods together can offer a more complete view of the overall employment impact of the Recovery Act.

As detailed in our report, our analysis and fieldwork indicate there are significant issues to be addressed in reporting, data quality, and consistent application of OMB guidance in several areas. Many entries merit further attention due to an unexpected or atypical data value or relationship between data. As part of our review, we examined the relationship between recipient reports showing the presence or absence of any full-time equivalent (FTE) counts and the presence or absence of funding amounts shown in either or both of the data fields for "amount of Recovery Act funds received" and "amount of Recovery Act funds expended." Forty-four percent of the prime recipient reports showed an FTE value. However, we identified 3,978 prime recipient reports where FTEs were reported but no dollar amount was reported in the data fields for amount of Recovery Act funds received and amount of Recovery Act funds expended. These records account for 58,386 of the total 640,329 FTEs reported. While OMB estimates that more than 90 percent of recipients reported, questions remain about the other 10 percent. Less than 1 percent of the records were marked as having undergone review by the prime recipient. The small percentage reviewed by the prime recipients themselves during the OMB review time frame warrants further examination.
While it may be the case that the recipients' data quality review efforts prior to initial submission of their reports were seen as not needing further revision during the review time frame, it may also be indicative of problems with the process of noting and recording when and how the prime recipient reviews occur and the setting of the review flag. In addition, the report record data included a flag as to whether a correction was initiated. Overall, slightly more than a quarter of the reports were marked as having undergone a correction during the period of review.

In its guidance to recipients for estimating employment effects, OMB instructed recipients to report solely the direct employment effects as "jobs created or retained" as a single number. Problems with the interpretation of this guidance or the calculation of FTEs were among the most significant problems we found. Jobs created or retained expressed in FTEs raised questions and concerns for some recipients. One source of inconsistency was variation in the period of performance used to calculate FTEs, which occurred in both the highway and education programs we examined.

While there were problems of inconsistent interpretation of the guidance, the reporting process went relatively well for highway projects. DOT had an established procedure for reporting prior to enactment of the Recovery Act. As our report shows, in the cases of Education and the Department of Housing and Urban Development, which do not have this prior reporting experience, we found more problems. State and federal officials are examining identified issues and have stated their intention to deal with them.
We have been reporting on the department's financial management as an area of high risk since 1995. As discussed in our recent report on the results of our review of the fiscal year 2000 Financial Report of the U.S. Government, DOD's financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government's consolidated financial statements. To date, none of the military services or major DOD components have passed the test of an independent financial audit because of pervasive weaknesses in financial management systems, operations, and controls. These weaknesses not only hamper the department's ability to produce timely and accurate financial management information, but also make the cost of carrying out missions unnecessarily high. Ineffective asset accountability and the lack of effective internal controls continue to adversely affect visibility over the department's estimated $1 trillion investment in weapon systems and inventories. Such information is key to meeting military objectives and readiness goals. Further, unreliable cost and budget information related to nearly $1 trillion of reported liabilities and about $347 billion of net costs negatively affects DOD's ability to effectively measure performance, reduce costs, and maintain adequate funds control. As the results of the department's fiscal year 2000 financial audit and other recent auditors' reports demonstrate, DOD continues to confront serious weaknesses in the following areas.

Budget execution accounting. The department was unable to reconcile an estimated $3.5 billion difference between the available fund balances in its own records and those in Treasury's records at the end of fiscal year 2000—a reconciliation similar in concept to individuals reconciling their checkbooks with their bank statements. In addition, the department made frequent adjustments of recorded payments between appropriation accounts, including adjustments to cancelled appropriation accounts of at least $2.7 billion during fiscal year 2000. A number of obligations were also incorrect or unsupported. For example, auditors found that $517 million of the $891 million in recorded Air Force fiscal year 2000 obligations tested were not supported. Further, the department could not fully and accurately account for an estimated $1.8 billion of transactions that were held in suspense accounts at the end of fiscal year 2000. The net effect of DOD's problems in this area is that it does not know with certainty the amount of funding it has available. Until the department can effectively reconcile its available fund balances with Treasury's, ensure that payments are posted to the correct appropriation accounts, and post amounts held in suspense accounts to the proper appropriation accounts, the department will have little assurance that reported appropriation balances are correct. Such information is essential for DOD and the Congress to determine whether funds are available that could be used to reduce current funding requirements or that could be reprogrammed or transferred to meet other critical program needs.

Environmental and disposal liabilities. The amounts of environmental and disposal liabilities the department has reported over the last few years have varied by tens of billions of dollars—from $34 billion in fiscal year 1998, up to $80 billion in fiscal year 1999, and down to $63 billion in fiscal year 2000.
However, these reported amounts potentially excluded billions of dollars of future liabilities associated with DOD's non-nuclear weapons; conventional munitions; training ranges; and other property, plant, and equipment, such as landfills. For example, we recently reported that while DOD reported a fiscal year 2000 liability of $14 billion associated with the environmental cleanup of its training ranges, other DOD estimates show that this liability could exceed $100 billion. Obtaining reliable estimates of the department's environmental liability is an important factor for DOD managers and oversight officials to consider with respect to the likely timing of related funding requests and DOD's ability to carry out its environmental cleanup and disposal responsibilities.

Asset accountability. DOD has continued to experience problems in properly accounting for and reporting on its weapon systems and support equipment. Material weaknesses continue in the central systems DOD relies on to maintain visibility over assets critical to meeting military objectives and readiness goals. For example, in fiscal year 1999, auditors found that the Army's central visibility system excluded information on 56 airplanes, 32 tanks, and 36 Javelin command-launch units. Auditors' fiscal year 2000 financial audit testing showed that previously identified problems in the systems and processes that DOD relied on to account for and control its large investment in weapon systems had not yet been corrected.

In addition, DOD's inability to account for and control its huge investment in inventories has been an area of major concern for many years. For example, auditors' fiscal year 2000 reviews revealed that (1) the Army did not perform required physical counts for wholesale munitions with an estimated value of $14 billion and (2) central accountability and visibility records at four Army test facilities excluded data on about 62,000 missiles, rockets, and other ammunition items that were on hand. In addition, physical counts at the Defense Logistics Agency's 20 distribution depots showed that none of the depots achieved the department's goal of 95 percent inventory record accuracy—error rates ranged from 6 to 26 percent.

As a result of continuing problems in this area, the department continues to spend more than necessary to procure inventory and, at the same time, experiences equipment readiness problems because of the lack of key spare parts. For example, we reported that because of long-standing weaknesses in controls over shipments, the department's inventories are at high risk of undetected loss and theft. At the same time, and for a number of years, insufficient spare parts have been recognized as a major contributor to aircraft performing at lower mission-capable rates than expected. Our recent reporting disclosed that inaccurate, inconsistent, and missing pricing data for weapon system spare parts undermined military units' ability to buy needed spare parts.

Net cost information unreliable. A continuing inability to capture and report the full cost of its programs represents one of the most significant impediments facing the department. DOD does not yet have the systems and processes in place to capture the required cost information from the hundreds of millions of transactions it processes each year. Consequently, while DOD reported $347 billion in total net costs for its fiscal year 2000 operations, it was unable to support this amount.
The lack of reliable, cost-based information hampers DOD across nearly all its programs and operations. For example, recent reporting highlights the adverse impact the lack of such information has had on the department's studies conducted under Office of Management and Budget (OMB) Circular A-76 and on its performance measurement and cost reduction efforts. In December 2000, we reported that our review of DOD functions studied over the past 5 years for potential outsourcing under OMB Circular A-76 showed that while DOD reported savings as a result of these studies, we could not determine the precise amounts of any such savings because the department lacks actual cost data.

Lacking complete and accurate overall life-cycle cost information for weapon systems impairs DOD and congressional decisionmakers' ability to make fully informed judgments on funding comparable weapon systems. DOD has acknowledged that the lack of a cost accounting system is the single largest impediment to controlling and managing weapon system costs, including the cost of acquiring, managing, and disposing of weapon systems. In addition, the measures used in the department's reporting under the Government Performance and Results Act (GPRA) often did not address the cost-based efficiency aspect of performance, making it difficult for DOD to fully assess the efficiency of its performance. For example, we reported that while DOD's performance plan for 2001 included 45 unclassified metrics, few contained efficiency measures based on costs.

Financial management systems. DOD lacks the integrated, transaction-driven, double-entry accounting systems that are necessary to properly control assets and costs. DOD has acknowledged that, overall, its reported network of 167 critical financial management systems does not comply with the federal financial management systems requirements of the Federal Financial Management Improvement Act. DOD's transaction processing, which relies on a large network of systems to carry out its financial management operations, is overly complex and error-prone. Each of the military services continues to operate many stand-alone, nonstandard financial processes and systems. As a result, millions of transactions must be manually keyed and rekeyed into the vast number of systems involved in any given DOD business process. To further complicate processing, transactions must be recorded using a coding structure that can exceed 50 digits. DOD uses such coding—which according to DOD can exceed 75 digits—to accumulate appropriation, budget, and management information for contract payments. In addition, such accounting coding often differs—in terms of the type, quantity, and format of data required—by military service and fund type. As a result, financial accountability is lacking and the financial management information available for day-to-day decision-making is poor.

Weak systems and controls leave the department vulnerable to fraud and improper payments. For example, DOD continues to overpay contractors. Although the full extent of overpayments is not known, the department has an annual budget for purchases involving contractors of over $130 billion.
In October 2000, we reported that of the $3.6 billion DOD reported in its fiscal year 1999 financial statements as uncollected debt related to a variety of contract payment problems, at least $225 million represented improper payments, including duplicate payments, overpayments, and payments for goods not received. Without effective controls over this important area, DOD will continue to risk erroneously paying contractors millions of dollars and incurring additional, unnecessary costs to collect amounts owed by contractors.

DOD has initiated a number of departmentwide reform initiatives to improve its financial operations as well as other key business support processes. These initiatives have produced some incremental improvements but have not resulted in the fundamental reform necessary to resolve these long-standing management challenges. The underlying causes of the department's inability to resolve its long-standing financial management problems, as well as the other areas of its operations most vulnerable to waste, fraud, abuse, and mismanagement, were first identified in our May 1997 testimony. These conditions remain largely unchanged today. Specifically, we believe the underlying reasons for the department's inability to put fundamental reforms of its business operations in place are a lack of top-level leadership and management accountability; cultural resistance to change, including service parochialism; a lack of results-oriented goals and performance measures and monitoring; and inadequate incentives for seeking change.

Lack of leadership and accountability. DOD has not routinely established accountability for performance to specific organizations or individuals that have sufficient authority to accomplish desired goals. For example, under the CFO Act, it is the responsibility of agency CFOs to establish the mission and vision for the agency's future financial management. However, at DOD, the Comptroller—who is by statute the department's CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department's financial management operations. The department has learned through its efforts to meet the Year 2000 computing challenge that, to be successful, major improvement initiatives must have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense. Such top-level support helps guarantee that daily activities throughout the department remain focused on achieving shared, agencywide outcomes. DOD's experience suggests that top management has not had a proactive, consistent, and continuing role in building capacity, integrating daily operations to achieve performance goals, and creating incentives. Sustaining top management commitment to performance goals is a particular challenge for DOD. In the past, the 1.7-year average tenure of the department's top political appointees hindered long-term planning and follow-through.

Cultural resistance and parochialism. Cultural resistance to change and service parochialism have also played a significant role in impeding DOD management reforms. DOD has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization, and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures.
For example, as discussed in our July 2000 report, the department has encountered resistance to developing departmentwide solutions under the Secretary's broad-based Defense Reform Initiative (DRI). The department established a Defense Management Council—including high-level representatives from each of the military services—that was intended to serve as the "board of directors" to help break down organizational stovepipes and overcome cultural resistance to changes called for under DRI. However, we found that the council's effectiveness was impaired because members were not able to put their individual military services' or DOD agencies' interests aside to focus on departmentwide approaches to long-standing problems. We have also seen parochial views and cultural resistance to change impede reforms in the department's weapon system acquisition and inventory management areas. For example, as we recently reported, while the individual military services conduct considerable analyses justifying major acquisitions, these analyses can be narrowly focused and do not consider joint acquisitions with the other services. In the inventory management area, DOD's culture has supported buying and storing multiple layers of inventory rather than managing with just the amount of stock needed.

Unclear goals and performance measures. DOD's reform efforts have also been handicapped by the lack of clear, hierarchically linked goals and performance measures. As a result, DOD managers lack straightforward road maps showing how their work contributes to attaining DOD's strategic goals, and they risk operating autonomously rather than collectively. In some cases, DOD had not yet developed appropriate strategic goals; in other cases, its strategic goals and objectives were not linked to those of the military services and defense agencies. As part of our assessment of DOD's Fiscal Year 1999 Performance Report, we reported that the report did not include goals or measures for addressing the department's contracting challenge, and it was not clear whether the department had achieved identified key program outcomes. The 1999 performance report did not provide any information on whether DOD was achieving any reduction in the important area of erroneous payments to contractors, nor did it provide any cost-based measures of whether the department had achieved its desired outcome of putting in place a more efficient and cost-effective infrastructure and associated operating procedures.

Many of the department's business processes in operation today are mired in old, inefficient processes and systems, many of which are based on 1950s and 1960s technology. The department faces a formidable challenge in responding to technological advances that are changing traditional approaches to business management as it moves to modernize its systems. For fiscal year 2000, DOD reported total information technology investments of over $21 billion supporting a wide range of military operations as well as its business functions, including an estimated $7.6 billion in major information system projects. While DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments.

Lack of incentives for change.
The final underlying cause of the department's inability to carry out needed fundamental reform is the lack of incentives for making more than incremental change to existing "business as usual" processes, systems, and structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs produce. DOD generally measures its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for DOD decisionmakers to implement changed behavior have been minimal or nonexistent. This underlying problem has perhaps been most evident in the department's acquisition area. In DOD's culture, the success of a manager's career has depended more on moving programs and operations through the DOD process than on achieving better program outcomes. That a given program may have cost more than estimated, taken longer to complete, and not generated results or performed as promised is secondary to fielding a new program. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide and congressional goals, (2) develop incentives that motivate decisionmakers to initiate and implement efforts that are consistent with better program outcomes, and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions.

The new Secretary of Defense has stated that he intends to include financial management reform among his top priorities. The Secretary faces a monumental task in putting such fundamental reform in place. The size and complexity of DOD's operations are unparalleled. DOD is responsible not only for an estimated $1 trillion in assets and liabilities but also for supporting personnel on an estimated 500 bases in 137 countries and territories throughout the world. It has also estimated that it makes $24 billion in monthly disbursements and that, in a given fiscal year, the department may have 500 or more active appropriations. Given the unparalleled nature of DOD's operations, combined with its deeply entrenched financial management weaknesses, it will not be possible to fully resolve these problems overnight. Changing how DOD carries out its financial management operations is going to be tough work.

Going forward, various approaches could be used to address the underlying causes of DOD's financial management challenges. But, consistent with our previous testimony before your subcommittee, as well as the results of our survey of world-class financial management organizations and other recent reviews, several elements will be key to any successful approach to reform: addressing the department's financial management challenges as part of a comprehensive, integrated, DOD-wide business process reform; providing for active leadership by the Secretary of Defense and resource control to implement needed financial management reforms; establishing clear lines of responsibility, authority, and accountability for such reform tied to the Secretary; incorporating results-oriented performance measures tied to financial management reforms; providing appropriate incentives or consequences for action or inaction; establishing an enterprisewide architecture to guide and direct financial management modernization investments; and ensuring effective oversight and monitoring.

Integrated business process reform strategy. As we have reported in the past, establishing the right goal is essential for success.
Central to effectively addressing DOD's financial management problems will be the recognition that they cannot be addressed in an isolated or piecemeal fashion, separate from the other major management challenges and high-risk areas facing the department. Successfully reengineering the processes supporting the department's business operations will be critical if DOD is to effectively address its deep-rooted organizational emphasis on maintaining "business as usual" across the department. Financial management is a crosscutting issue that affects virtually all of DOD's business processes. For example, improving its financial management operations so that they can produce useful, reliable, and timely cost information will be essential if the department is to effectively measure its progress toward achieving key outcomes and goals across virtually the entire spectrum of its business operations. At the same time, the department's financial management problems—and, most importantly, the keys to their resolution—are deeply rooted in and dependent upon developing solutions to a wide variety of management problems across DOD's organizations and business functions. The department has reported that an estimated 80 percent of the data needed for sound financial management comes from its other business operations, such as its acquisition and logistics communities. DOD's vast array of costly, nonintegrated, duplicative, inefficient financial management systems reflects the lack of an enterprisewide, integrated approach to addressing its management challenges. DOD has acknowledged that one of the reasons for the lack of clarity in its reporting under GPRA was that most of the program outcomes the department is striving to achieve are interrelated.

Active leadership and resource control. The department's successful Year 2000 effort illustrated, and our survey of leading financial management organizations captured, the importance of strong leadership from top management. As we have stated many times before, strong, sustained executive leadership is critical to changing a deeply rooted corporate culture—such as the existing "business as usual" culture at DOD—and to successfully implementing financial management reform. The personal, active involvement of the Deputy Secretary of Defense played an important role in building entitywide support for the department's Year 2000 initiatives. Given the long-standing and deeply entrenched nature of the department's financial management problems, combined with the numerous competing DOD organizations, each operating with varying and often parochial views and incentives, such visible, sustained top-level leadership will be critical.

Clear lines of responsibility and accountability. Establishing clear lines of responsibility, decision-making authority, and resource control for actions across the department, tied to the Secretary, will also be key to reform. As we reported with respect to the department's implementation of its DRI, such an accountability structure should emanate from the highest levels and include the secretaries of each of the military services as well as the heads of the department's various business areas.

Results-oriented performance. As discussed in our report on DOD's major performance and accountability challenges, establishing a results orientation will be another key element of any approach to reform.
Such an orientation should draw upon results that could be achieved through commercial best practices, including outsourcing and shared-servicing concepts. Personnel throughout the department must share the common goal of establishing financial management operations that not only produce financial statements that can withstand the test of an audit but, more importantly, also routinely generate useful, reliable, and timely financial information for day-to-day management purposes. In addition, we have previously testified that DOD's financial management improvement efforts should be measured against the overall goal of effectively supporting DOD's basic business processes, including appropriately considering related business process system interrelationships, rather than determining compliance system by system. Such a results-oriented focus is also consistent with an important lesson learned from the department's Year 2000 experience. DOD's initial Year 2000 focus was geared toward ensuring compliance on a system-by-system basis and did not appropriately consider the interrelationships of systems and business areas across the department. It was not until the department shifted to a core mission and function review approach that it was able to achieve the desired result—greatly reducing its Year 2000 risk.

Incentives and consequences. Another key to breaking down the parochial interests and stovepiped approaches that have plagued previous reform efforts will be establishing mechanisms to reward organizations and individuals for behaviors that comply with DOD-wide and congressional goals. Such mechanisms should provide appropriate incentives and penalties to motivate decisionmakers to initiate and implement efforts that result in fundamentally reformed financial management operations.

Enterprise architecture. Establishing an enterprisewide financial management architecture will be essential for the department to effectively manage the large, complex system modernization effort now under way. As we testified last year, the Clinger-Cohen Act requires agencies to develop and maintain an integrated system architecture. Such an architecture can help ensure that the department invests only in integrated, enterprisewide business system solutions and, conversely, will help move resources away from non-value-added legacy business systems and nonintegrated business system development efforts. Without an architecture, DOD runs the serious risk that its system efforts will perpetuate the existing environment, which suffers from systems duplication, limited interoperability, and unnecessarily costly operations and maintenance. In a soon-to-be-issued report, we point out that DOD lacks a financial management enterprise architecture to guide and constrain the billions of dollars it plans to spend to modernize its financial management operations and systems.

Monitoring and oversight. Ensuring effective monitoring and oversight of progress will also be key to bringing about effective implementation of the department's financial management and related business process reform. We have previously testified that periodic reporting of status information to OMB, the Congress, and the audit community was another key lesson learned from the department's successful effort to address its Year 2000 challenge.
Finally, this Subcommittee's annual oversight hearings, as well as the active interest and involvement of other cognizant Defense committees, will continue to be key to effectively achieving and sustaining DOD's financial management and related business process reform milestones and goals.

In closing, while DOD has made incremental improvement, it has a long way to go in addressing its long-standing, serious financial management weaknesses as part of a comprehensive, integrated reform of the department's business support operations. Such an overhaul must encompass not only DOD's financial management and other management challenges, but also its high-risk areas of information technology and human capital management. Personnel throughout the department must share the common goal of reforming the department's business support structure.
The results of the Defense Department (DOD) financial audit for fiscal year 2000 highlight long-standing financial management weaknesses that continue to plague the military. These weaknesses not only hamper the department's ability to produce timely and accurate financial management information but also unnecessarily increase the cost of carrying out its missions. Although DOD has made incremental improvement, it has a long way to go to overcome its long-standing, serious financial management weaknesses as part of a comprehensive, integrated reform of the department's business support operations. Such an overhaul must include not only DOD's financial management and other management challenges but also its high-risk areas of information technology and human capital management. Personnel throughout the department must share the common goal of reforming the department's business support structure. Without reengineering, DOD will have little chance of radically improving its cumbersome and bureaucratic processes.
The 2006 Act requires VA to establish annual contracting goals for SDVOSBs and other nonservice-disabled VOSBs; the goal for SDVOSBs must at least match the government-wide SDVOSB contracting goal of 3 percent of federal contract dollars. VA set its goals at 3 percent for SDVOSBs and 7 percent for all VOSBs (SDVOSBs and other VOSBs) for fiscal years 2006 and 2007, and it has subsequently set and achieved increasing contracting goals for VOSBs and SDVOSBs, as shown in figure 1. VA's total contracting awards to VOSBs increased from $616 million (including $356 million to SDVOSBs) in fiscal year 2006 to $3.6 billion (including $3.2 billion to SDVOSBs) in fiscal year 2011.

VA's OSDBU has overall responsibility for the SDVOSB/VOSB verification program. Within OSDBU, the Center for Veterans Enterprise (CVE) maintains a database of eligible SDVOSBs and VOSBs and is responsible for verification operations, such as processing applications. To implement the requirements of the 2006 Act, VA began verifying businesses in May 2008 under interim final rules, which the agency did not finalize until February 2010. (For a timeline of major events affecting the verification program, see fig. 2.)

To be eligible for verification under VA's rules, the small business concern must be unconditionally owned and controlled by one or more eligible parties (veterans, service-disabled veterans, or surviving spouses); the owners of the small business must have good character (any small business owner or concern that has been debarred or suspended is ineligible); the applicant cannot knowingly make false statements in the application process; the firm and its eligible owners must not have significant financial obligations owed to the federal government; and the firm must not have been found ineligible due to an SBA protest decision.

VA launched its verification process under the 2006 Act in 2008 and shifted to a more robust process in 2010. VA's verification process under the 2006 Act (2006 process) initially consisted of (1) checking VA databases to confirm veteran status and, if applicable, service-disability status and (2) reviewing publicly available, primarily self-reported information about control and ownership for all businesses that applied for verification. Beginning in September 2008, VA also adopted a risk-based approach, using site visits or other means, such as additional document reviews and telephone interviews, to further investigate selected high-risk businesses. VA adopted a more thorough verification process in 2010 (2010 process), which included reviewing and analyzing a standardized set of documents that each applicant is required to submit. VA refined the 2010 process over time so that, as of October 2012, the verification process consisted of four phases—initiation, examination, evaluation, and determination. Denied applicant firms are able to request reconsideration of the denial decision.

Initiation: CVE employees are to confirm that applicants meet minimum requirements for the program by, among other things, verifying the owners' veteran and service-disability status and determining that they have submitted all of the required documents or adequate explanations for missing documents. CVE employees are also to check the Excluded Parties List System to ensure that the applicant business and all owners are not on the list.
Examination: Contractors are to review completed applications to determine whether firms meet the eligibility requirements and to make an initial recommendation for approval, denial, or additional review (i.e., a site visit).

Evaluation: Contractors and staff are to review the initial recommendations to ensure that the screening has met quality standards and that firms have received an appropriate recommendation. They may also decide that a site visit is necessary. Contractors are to conduct site visits if they are recommended, and CVE employees are to recommend approval or denial.

Determination: CVE supervisors are to review staff recommendations and issue eligibility decisions. A determination letter is to be emailed to the applicant, and approved companies appear as verified in the Vendor Information Pages (VIP) database.

Request for Reconsideration: Through an optional Request for Reconsideration process, denied applicants can remedy the issue or issues that caused their applications to be denied initially. Based on a review by staff from VA's Office of General Counsel, VA may approve the application, deny it on the same grounds as the original decision, or deny it on other grounds. If VA denies a request for reconsideration solely on issues not raised in the initial denial, the applicant may ask for reconsideration as if it were an initial denial. Denied applicants can also request a legal review if they believe their application was denied in error.

VA's database of SDVOSBs and VOSBs previously listed both unverified and verified firms, but it is now required to list only verified firms, as a result of the Veterans Small Business Verification Act (2010 Act), part of the Veterans' Benefits Act of 2010. After the verification program began in 2008, VA modified its VIP database of self-certified SDVOSBs and VOSBs to receive verification applications and publicly display the names of verified firms. Once VA approved a business, the business name appeared with a verified logo in VIP, but the database continued to display self-certified firms as well. The 2010 Act requires that no new applicant appear in the VIP database unless it has been verified by VA as owned and controlled by a veteran or service-disabled veteran. The 2010 Act also required VA, within 60 days of enactment, to notify all unverified (self-certified) firms in VIP about the verification requirement; firms were then required to apply for verification within 90 days or be removed from the database. VA officials reported that by September 2011, the agency had removed from the VIP database all firms that had self-certified, so that the database would include only verified firms. As of October 2012, the database included firms verified under both the 2006 Act process and the 2010 process.

VA has made significant changes to its verification processes in an effort to improve its operations and address program weaknesses, but it continues to face challenges in establishing a stable and efficient program that verifies firms on a timely and consistent basis. Since December 2011, VA has instituted a number of significant operational changes, including revising standard operating procedures and enhancing quality assurance protocols. However, it has not had a comprehensive, long-term strategic plan for the verification program and has consistently prioritized addressing immediate operational challenges, contributing to programmatic inefficiencies.
In response to our observations, VA's OSDBU initiated action in late October 2012 to compile a strategic planning document that encompasses the verification program. OSDBU appears to have at least partially applied key leading strategic planning practices in its initial planning effort. But the plan lacked performance measures to assess whether the desired outcomes are being achieved, and it had a shorter-term focus than is typically associated with a strategic plan. Furthermore, VA had not shared the plan with key stakeholders, such as veteran support organizations, business associations, or congressional staff. In addition, the verification program's information technology (IT) system has shortcomings that have hindered VA's ability to operate, oversee, and monitor the program. VA is planning to modify or replace the system but has not directly tied this effort to its long-term strategic planning efforts to ensure that the new system meets the verification program's long-term information needs.

As of September 30, 2012, the VIP database listed 6,257 firms that had been verified as VOSBs or SDVOSBs. Of these, VA reported that 1,733 were verified under the initial 2006 Act process and 4,524 under the more rigorous 2010 process. VA's database also listed a substantial number of pending cases at that time: 691 new applications for verification, 131 firms seeking reverification to remain in VIP, and 165 requests for reconsideration from firms that were denied verification. See appendix II for additional data on the verification program as of September 30, 2012.

As we reported in GAO-12-697, the verification process that VA previously used represented a continuing vulnerability. By September 30, 2012, VA had reverified or removed from the VIP database 622 of the firms that were originally verified under the 2006 Act process but had yet to reverify the remaining 1,733 firms, according to VA's inventory of verified firms. The inventory indicated that the 2-year verification period for 1,159 of these remaining firms expired on or before September 30, 2012, so they were not eligible to receive VA contracts after that date and were due to be removed from the database. VA officials said that firms whose 2-year verification period had not yet expired would be removed from the database upon expiration if they had not been verified under the 2010 process. According to VA, fewer than 120 companies that were verified under the 2006 process remained in VIP as of December 1, 2012.

In interviews with us between April 2012 and June 2012, veterans' organizations cited applicant concerns about other aspects of the verification program, such as the rationale for determinations and the time it took VA to make determinations, as the following examples illustrate. We observed several outreach sessions that VA conducted in May and June 2012 with veterans' organizations and an association of organizations that provide technical assistance with procurement. In these sessions and in our follow-up interviews with participants, the organizations stated that VA's guidance for applicants did not always adequately explain how VA interpreted some of the subjective eligibility standards in its regulations, such as the requirement that owners have good character. They also said that they and applicants sometimes found the rationale for denials to be unclear or inconsistent. Representatives from two of the veteran service organizations that we interviewed also raised concerns about the length of time it could take to process an application.
With the hiring of a new CVE director in December 2011, CVE conducted a review of the verification process to identify ways to increase its efficiency and began adopting changes to improve its operations and address program weaknesses and applicants' concerns. For example, as a result of this review, CVE did the following:

CVE revised its Standard Operating Procedures (SOP) to reflect current practices and help ensure greater consistency in its verification processes. These procedures describe the purpose, scope, statutory references, staff roles, and implementation steps.

CVE instituted a more robust quality assurance process to ensure that staff adhered to the approved procedures. For example, CVE employees and contractors are now subject to both scheduled and spot audits and must resolve any major deficiencies within 10 days.

CVE hired its first training officer and revised its training program with the goals of ensuring that CVE staff were properly trained and qualified to perform their duties, achieved high performance, and were responsive to changing business requirements, among other things. The training officer is responsible for coordinating training for staff (including contractors), including weekly training on the verification program and customer service, as well as monthly fraud education.

CVE added specific methods of communicating with applicant firms, with the goal of ensuring that applicants receive an email from VA at least every 30 days with an update on the status of their application.

CVE began using the initiation phase, rather than the examination phase, to determine whether an application was complete, so that any missing documentation or inadequate explanations can be addressed before the examination process begins.

CVE began tracking staff productivity levels and more closely monitoring the quality of their work. For example, beginning in the spring of 2012, CVE started setting targets for the numbers of cases that individual staff members should review each day. Also, in September and October 2012, CVE's evaluation team reviewed the results of contractors' examinations to identify cases that had not been properly completed or in which the recommended finding should be overturned. Contractor staff with the most rejected cases were recommended for additional training.

VA has also revised the organizational structure for the verification program, but VA officials said that human capital challenges remained. In general, CVE is structured so that a federal employee oversees a team of contractors. VA uses a mix of federal employees and contractors to complete verifications because contractors have greater flexibility to adjust staffing levels in response to variations in the number of applications submitted, according to VA officials. Between December 2011 and October 2012, VA adopted changes to the verification program's organizational structure to make it more efficient, increase oversight of federal staff and contractors, and strengthen functions that did not previously have dedicated staff, such as training. VA officials (1) reorganized and increased the number of employees and contractors assigned to the verification process and (2) created several new teams, including quality assurance, training, records management, and customer service.
During this period, VA added about 3 full-time equivalent staff and 64 contractors to the verification program. (See app. III for the verification program's organization charts as of December 2011 and October 2012.) However, VA officials said that the verification program faces ongoing human capital challenges. For example, 5 of the verification program's 27 full-time federal positions were vacant as of November 2012. As of early November 2012, CVE was developing a business case to justify the staff organization necessary to support verification operations, including revised federal employee labor categories and modifications to contractor support.

VA has also sought to improve outreach to applicants through additional online resources and a new Verification Counseling program. In November 2011, VA began posting on its website verification assistance briefs intended to clarify aspects of the program's rules. These briefs cover topics that VA officials have determined cause the majority of denials, such as full-time control and transfer restrictions. In addition, VA launched a self-assessment tool in June 2012 to help applicants understand the rules, regulations, eligibility criteria, and review process for verification. Recognizing that some applicants needed additional support, VA launched a Verification Counseling program in June 2012. According to VA, this program integrates the Verification Assistance Partner counselors (initially, selected veterans' support organizations and business associations and Procurement Technical Assistance Centers) into the regular training provided to CVE examination and evaluation staff. These counselors in turn provide counseling to firms interested in becoming verified. The program is intended to increase understanding of the verification program's eligibility requirements so that ineligible firms would be less likely to apply and eligible firms would be more likely to submit the materials necessary for them to succeed in their initial applications.

To mitigate an anticipated increase in its workflow over time, VA initiated two efforts in early 2012 to modify its approach to reverifying firms' eligibility. VA's verification regulations issued in May 2008 limited the term of a firm's verification status to a 1-year period. However, as growing numbers of firms verified under the 2010 process began to require reverification in early 2012, VA recognized that it would face a mounting workload over time if it reverified firms annually using its full examination procedures. As a result, VA began to develop procedures for a simplified reverification process, which it introduced in early June 2012. VA also began the process of modifying the verification program regulations to extend the verification period from 1 year to 2 years and published an interim final rule to this effect in late June 2012. As a result of the rule change, additional firms eligible for simplified reverification will not begin reaching the expiration of their verification period until February 2013. In late October 2012, VA determined that a firm would be eligible for simplified reverification one time before again requiring full examination (i.e., a firm would undergo a full examination once every 4 years).
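To make the resulting cycle concrete, the sketch below computes a firm's next review date and type under the 2-year period, assuming (as the text describes) that simplified reverification may be used once before a full examination is again required. The function and argument names are hypothetical, not VA's.

```python
from datetime import date


def next_reverification(verified_on: date, simplified_already_used: bool):
    """Next review under the 2-year cycle: simplified reverification once,
    then a full examination, so a full examination recurs every 4 years."""
    due = verified_on.replace(year=verified_on.year + 2)
    review = ("full examination" if simplified_already_used
              else "simplified reverification")
    return due, review


# A firm fully examined in February 2013 would be due for simplified
# reverification in February 2015 and a full examination in February 2017.
print(next_reverification(date(2013, 2, 1), simplified_already_used=False))
print(next_reverification(date(2015, 2, 1), simplified_already_used=True))
```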
Despite the steps that VA had taken since December 2011, the Secretary of Veterans Affairs acknowledged ongoing concerns about the program and announced the creation of a senior executive task force to review the verification program and determine whether it had sufficient resources and support. The task force, created in June 2012, was initially charged with reporting back within 60 days with suggested changes that would help streamline the verification process. In August 2012, the task force adopted a charter stating that its purpose was to review all aspects of the verification program, including processes, operating policies, management information systems, staffing, and resources. The task force presented its preliminary findings to the VA Chief of Staff in early November 2012. The review results and recommendations of the task force were expected to be provided to the Office of the Secretary for final approval during the second quarter of fiscal year 2013.

During the period covered by our review, VA had not created a formal strategic plan for the verification program. However, in response to our inquiries, OSDBU compiled a strategic planning document in late October 2012 that covered the verification program. This plan was based on a series of planning documents that were initially developed between June and December 2011 for internal discussions and conversations with congressional staff. This initial strategic planning effort appears to have at least partially followed key leading federal strategic planning practices, but additional progress is needed to improve the usefulness of the plan. We have previously reported that agency-wide strategic planning practices required under the Government Performance and Results Act of 1993 (GPRA)—which was amended by the GPRA Modernization Act of 2010 (GPRAMA)—can also serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives. We have also previously identified six leading practices in federal strategic planning that are most relevant to initial strategic planning efforts: (1) defining the mission and goals; (2) defining strategies that address management challenges and identifying resources needed to achieve goals; (3) ensuring leadership involvement and accountability; (4) involving stakeholders; (5) coordinating with other federal agencies; and (6) developing and using performance measures. (For a description of each of these practices, see appendix I.)

According to OSDBU and CVE officials, VA did not develop a formal strategic plan when it was initially developing the verification program because the primary concern at the time was to develop and implement initial verification procedures and program regulations. Once the program was launched in 2008, CVE continued to make its immediate operational challenges a higher priority than long-range strategic planning. Although VA's focus on getting the verification program running and reacting to legislative change may have seemed reasonable at the time, its failure to develop a comprehensive strategic plan contributed to programmatic inefficiencies. For example, as discussed in greater detail later, VA developed the data system for the verification program without fully considering its long-term information needs. Resulting shortcomings of the system have required CVE to develop inefficient workarounds to operate and oversee the program. After the new OSDBU executive director started in April 2011, OSDBU began developing planning documents for 2011 through 2012 that covered OSDBU and its three mission areas—the verification program, strategic outreach, and acquisition support.
After we asked VA about the lack of a strategic plan for the verification program, OSDBU officials compiled the separate OSDBU planning documents into a single document and updated them to include some milestones, tasks, and metrics for 2013. VA officials said that they considered this document to be the strategic plan for OSDBU and the foundation of its efforts for fiscal year 2013, and found that the compiled document could serve as a more comprehensive basis for future planning. Based on our review of the strategic plan and the six documents OSDBU drew upon to compile it, as well as OSDBU officials' description of the process they undertook to develop these documents, OSDBU appears to have at least partially applied the six leading federal strategic planning practices that we previously identified, as described below.

Defining the mission and goals. The plan provides OSDBU's primary mission and alludes to the components of the mission for the verification program (verifying eligible firms and preventing ineligible firms from being verified), but does not explicitly describe the verification program's mission. The plan identifies broad, long-term goals for OSDBU, which according to OSDBU officials were initially intended to be achieved by 2012. These goals include achieving a sustainable organizational structure to support its mission and ensuring compliance with all statutory requirements. Long-term objectives for the verification program include, among other things, meeting all regulatory requirements, providing a quality customer experience, certifying CVE's processes and staff, and preventing ineligible firms from being verified through rigorous quality control. As we have previously reported, goals in strategic plans should ideally explain what results are expected and when to expect those results. Thus, such goals are an outgrowth of the mission and are often results-oriented. However, based on the broad wording of some of the goals and objectives for the verification program, assessing whether they have been accomplished and what results were achieved would be difficult.

Defining strategies that address management challenges and identifying resources needed to achieve goals. The planning documents identify management challenges that affect the verification program, such as human capital and technology. For example, the planning documents note that verification staff need training on the verification requirements. While the compiled strategic plan does not identify the specific resources necessary to overcome these challenges, it lays out strategies with more specific tasks to address them, such as developing and conducting staff training for verification.

Ensuring leadership involvement and accountability. According to OSDBU officials, more senior VA officials were aware of OSDBU's long-term goals, and OSDBU regularly briefed VA's Chief of Staff and other senior VA officials on its plans and progress. However, while OSDBU compiled the strategic plan itself in late October 2012, it had not yet been reviewed or approved outside of OSDBU as of early November 2012, and we could not assess whether or how senior VA leaders would be involved in monitoring its implementation. To help hold managers accountable for elements in the plan, OSDBU officials said that the Executive Director met regularly with staff to discuss their plans and performance.
For example, the officials said that OSDBU and CVE officials hold weekly meetings to discuss the status of the verification program and applications reviewed.

Involving stakeholders. OSDBU officials said that they had briefed stakeholders, including congressional staff and committees, while developing the initial planning documents for 2011 and 2012 that formed the basis for OSDBU's strategic plan. The officials said that OSDBU's planning was informed by extensive feedback on the verification program from the VA acquisition community, congressional staff and committees, veteran support organizations and business associations, and veteran-oriented media, as well as through direct contact with applicants. However, since the strategic plan was only recently compiled in response to our review, VA had not shared the plan with key stakeholders, thus missing an opportunity to promote transparency of the verification program's plans and priorities and to facilitate continued stakeholder involvement.

Coordinating with other federal agencies. OSDBU officials said that they met with officials from other agencies' OSDBUs prior to the development of the planning documents to discuss the verification program, in particular the program's potential government-wide expansion. An official said that they did not coordinate with SBA—which administers the government-wide SDVOSB contracting program and certifies the eligibility of firms for other government-wide contracting programs—when they were developing the planning documents.

Developing and using performance measures. The strategic plan that OSDBU compiled contained "metrics" related to the verification program that consisted of a combination of output, efficiency, and customer service measures but lacked quality and outcome measures aligned with long-term goals. Over 80 percent of the metrics in the plan (31 of 38 items) related to the implementation of a specific task rather than whether the desired outcomes are being achieved. For example, the verification-related metrics for 2013 include "provide improved training of CVE staff" and "review fraud training program with OIG," in support of a strategy to improve the capability of CVE staff to perform accurate and timely evaluation of applications and detect misrepresentation and fraud. But the plan does not identify measures that could be used to assess the impact of the identified long-term goals and strategies, such as a reduction in the number of examinations that are not properly completed. As previously discussed, CVE has begun tracking staff productivity levels and more closely monitoring the quality of their work, but these measures are not included in the strategic plan. Recognizing some of the challenges with its existing measures, VA has undertaken a recent initiative with a university to improve OSDBU's performance measures. OSDBU officials expected to incorporate these measures into future planning efforts.

Lastly, OSDBU's initial strategic planning effort was more short-range than long-range in focus. GPRAMA requires that agency-level strategic plans cover at least a 4-year period. The planning documents OSDBU developed in 2011 only covered 2011 through 2012 because they expected the program to have achieved its initial long-term goals within that time, according to OSDBU officials. In compiling the strategic plan to respond to our inquiries, OSDBU officials told us that they recognized the value of expanding the coverage of the plan to include strategies and metrics for activities to be completed in 2013.
But the plan did not include strategies and metrics beyond 2013. The longer-term focus of a strategic plan is one of the key distinctions from a performance plan that focuses on annual goals and measures. Without a longer-term perspective, the current strategic plan serves as more of a short-term management plan rather than as a longer-term guide to help frame the needs and direction of the verification program.

The verification program's current data system lacks certain data fields and reporting and workflow management capabilities needed to provide key information for program management. We have previously reported that an agency must have relevant, reliable information to run and control its operations. More specifically, we have noted that pertinent information should be identified, captured, and distributed to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities efficiently and effectively. Since the verification program began in 2008, VA has relied on data systems that it developed on an incremental, ad hoc basis in response to immediate needs, without an overarching plan or vision, and without centralized oversight by VA's Office of Information and Technology (OI&T). As stated earlier, VA initially did not develop a strategic plan that might have provided a framework for envisioning the verification program's information needs from the outset. Rather, VA initially modified its existing VIP database to address only its immediate need to accept firms' application forms and identify verified firms. VA staff also created a separate database to track the results of checks that it used to verify that firms met its basic eligibility requirements, such as veteran and service-disability status. When VA began requiring firms to submit a standardized set of documents under the 2010 process, these documents were collected in a variety of formats, and paper copies had to be manually uploaded to CVE secure servers, according to VA officials. According to VA, in some cases documentation was shredded to protect confidential information without first being uploaded to the server. In response to these problems, VA hired a contractor to develop the Verification Case Management System (VCMS), which went online in 2011. VCMS was integrated with VIP to enable VA to better track and retrieve documents and manage the verification process. According to VA officials, the project was managed by the program office because VIP and VCMS were funded by VA's Supply Fund, and not through appropriated information technology funds, which are overseen by OI&T.

The resulting VIP/VCMS system aids in performing some tasks. For example, VIP/VCMS allows applicants to upload documents directly into the web-based system, and applicants and VA to track an applicant's broad phase of review (i.e., initiation, examination, or evaluation). VA staff and contractors can also use VIP/VCMS to send and maintain a record of emails to the applicant to, for example, request additional documentation, provide status updates, or send the determination letter. The system also allows VA officials to run some reports, such as the number of initial applications and requests for reconsideration that have been approved, denied, withdrawn, or completed by year, as well as the open applications that have been in the system for more than 90 days.
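As a rough illustration of the kind of report just described (open applications pending for more than 90 days), consider the following sketch. The record layout and dates are invented for illustration and do not reflect VCMS's actual data.

```python
from datetime import date

# Hypothetical open-application records: (firm name, date submitted).
open_apps = [
    ("Firm A", date(2012, 5, 14)),
    ("Firm B", date(2012, 8, 30)),
    ("Firm C", date(2012, 9, 20)),
]


def older_than(apps, as_of, days=90):
    """Open applications pending longer than the given number of days."""
    return [(name, (as_of - submitted).days)
            for name, submitted in apps
            if (as_of - submitted).days > days]


# As of Sept. 30, 2012, only Firm A (139 days) exceeds the 90-day mark.
print(older_than(open_apps, as_of=date(2012, 9, 30)))
```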
However, VIP/VCMS has significant shortcomings that could have been avoided with better planning for and oversight of the system's development. Specific areas with remaining shortcomings include the following:

Data fields. VA officials said that, because of the need to get a system in place quickly, the responsible staff at the time did not consider all of the data elements that would be useful for monitoring program trends and staff performance and did not plan for future phases that would add more data fields. For example, VCMS did not include data fields to track the reasons for denial (i.e., specific eligibility, ownership, or control issues); the basis for requests for reconsideration and their outcomes; and the incidence of and reasons for applications being returned to a lower level of the process for rework or for a reversal of a contractor or staff member's recommendation to approve or deny an application. VCMS also lacks fields to facilitate monitoring the reasons for and results of customer service inquiries.

Reporting. VCMS also currently allows only limited reporting, and users cannot always customize their search criteria to obtain data in the form they need to monitor the program. We noted in our August 2012 report that VCMS's limited reporting capabilities, and the lack of certain data within the system, resulted in inconsistent aggregate reporting and made tracking the inventory of firms difficult for VA. In response, VA conducted a laborious process to develop a manual inventory of firms that have been verified under the 2006 and 2010 Act processes and the dates that those firms were verified, which VA staff could not obtain directly from VCMS.

Workflow management. VCMS has the capability to track which broad phase of the verification process an application is in and to record which staff completed certain actions in the system, but it does not meet VA staff or contractors' needs for assigning and monitoring the progress of applications. As a result, the contractor that initially examines applications relies on a workflow management system outside of VCMS to assign and track applications as they move through the steps of the examination phase. Similarly, each of the team supervisors that we talked to has created spreadsheets to track the status of applications as their team reviews them. The reliance on these other systems is inefficient and increases the risk that data will not be completely or accurately recorded across systems.

In addition, VCMS experienced periodic outages following its initial launch in 2011 and after its most recent modification. According to VA officials, VCMS crashed almost immediately after its launch in May 2011 because it could not handle the volume of data that VA began receiving. The system was also off-line for a month in September 2011 following another modification. During these outages, firms that had a contract pending and needed to be verified in order to receive the contract could be manually processed on a case-by-case basis. More recently, VCMS was unavailable from May 9 to June 6, 2012, during which time applicants could not submit new applications and staff could not request additional documentation through the system. This outage was caused by a security problem that was identified in routine testing by OI&T as VA was preparing to launch a modification to the system.
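The missing fields described above are, in effect, attributes that each application record would need to carry. Purely as an illustration of the kind of record structure that would support such reporting, consider the sketch below; the field names are invented and do not describe VCMS's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class ApplicationRecord:
    """Illustrative record carrying the fields the text says VCMS lacked."""
    firm_name: str
    phase: str                                   # e.g., "examination"
    denial_reasons: List[str] = field(default_factory=list)  # e.g., ["control"]
    reconsideration_basis: Optional[str] = None   # grounds for the request
    reconsideration_outcome: Optional[str] = None
    rework_events: List[str] = field(default_factory=list)
    # why a case was sent back for rework or a recommendation was reversed
    service_inquiries: List[str] = field(default_factory=list)
    # reasons for and results of customer service contacts


def denial_reason_counts(records: List[ApplicationRecord]) -> Dict[str, int]:
    """The kind of aggregate report the text says VCMS could not produce."""
    counts: Dict[str, int] = {}
    for rec in records:
        for reason in rec.denial_reasons:
            counts[reason] = counts.get(reason, 0) + 1
    return counts
```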
VA is in the process of planning to either modify or replace the current version of VCMS to address the identified shortcomings, but this planning effort has not been tied to broader long-term strategic planning for the verification program. VA officials have identified elements that the next iteration of VCMS should include. For example, the officials would like the program to automate some aspects of the background company research and generally make the verification process less burdensome on veterans. Following the program outage in May 2012, verification program officials began reaching out to OI&T for assistance in overseeing both the current system and a potential modification or replacement. These discussions received further emphasis through the previously discussed senior executive task force, which included representatives of OI&T. As a result of an expected recommendation by the task force, OI&T assigned staff in July 2012 to begin formally planning for either a modification or a replacement system, a process that OI&T will manage. VA is considering short- and long-term information needs as it defines the business requirements for the system. But, as we have seen, the initial strategic plan that OSDBU developed in late October 2012 does not specify longer-term goals for the verification program or define program strategies and activities beyond 2013. Without tying the effort to modify or replace the verification program's data system to more comprehensive, long-term strategic planning, the resulting system risks again failing to meet the verification program's long-term needs and goals.

Expanding VA's verification program to support the government-wide SDVOSB contracting program would require VA to increase the scale of its program to verify potentially thousands of additional firms. VA has faced ongoing challenges implementing its verification program, and it would need to continue to stabilize and improve its verification operations by addressing remaining vulnerabilities to fraud and abuse, demonstrating whether recent operational changes have resulted in improved performance and whether new methods for educating applicants are effective, and addressing data system limitations. Also, as VA revises its verification program regulation, it is considering policy issues that would affect a government-wide verification program.

VA has not formally projected how many firms it might need to verify under a government-wide SDVOSB verification program, and a number of factors make such a projection difficult. For example, the scale of a program would depend on whether firms would be required to obtain verification to bid on contracts or only to receive contract awards, how likely firms that are already self-certified as SDVOSBs would be to seek verification if it were required, and how many new or existing SDVOSBs that have not yet self-certified might seek verification in the future. (The estimate of 12,800 firms is based on the number of self-certified SDVOSBs (i.e., registered in CCR but not yet verified by VA as of Sept. 30, 2012) that did not receive contract obligations in fiscal year 2010 or 2011, the last full fiscal year available, according to FPDS-NG.) In addition, we could not determine how many of these firms have been actively seeking contracts or how likely they would be to do so in the future. As a result, predicting how many would actually be motivated to seek verification if it were required is difficult. Beyond firms that have already registered as prospective federal contractors, thousands of existing or new SDVOSBs could eventually register and seek verification if it were required.
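The 12,800-firm estimate just described amounts to a set difference over three lists of firms. A schematic sketch follows, with invented firm identifiers standing in for the real CCR, VIP, and FPDS-NG data.

```python
def estimate_candidates(self_certified, verified, received_obligations):
    """Self-certified SDVOSBs that VA had not verified and that received
    no contract obligations in the period examined (per FPDS-NG)."""
    return set(self_certified) - set(verified) - set(received_obligations)


# Tiny invented example; the real inputs held thousands of firm records.
candidates = estimate_candidates(
    self_certified={"firm1", "firm2", "firm3", "firm4"},
    verified={"firm2"},
    received_obligations={"firm4"},
)
print(len(candidates), sorted(candidates))  # 2 ['firm1', 'firm3']
```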
We did not identify a current estimate of the number of SDVOSBs in the United States. However, an SBA analysis of data from the Census Bureau's 2007 Survey of Business Owners found that there were around 200,000 service-disabled veteran-owned businesses (of any size) at that time (U.S. Small Business Administration, Office of Advocacy, Veteran-owned Businesses and their Owners—Data from the Census Bureau's Survey of Business Owners (Washington, D.C.: March 2012)). SBA also reported that there were more than 2.1 million nonservice-disabled veteran-owned businesses (of any size) in 2007. Considering the additional operational challenges that VA would face in preparing to verify potentially thousands of additional firms, VA would need to continue to address existing program weaknesses to stabilize and improve its verification program. Our prior and current work indicates that several aspects of VA's current verification program, specified below, would have to be addressed before the program could be effectively implemented government-wide.

Fraud prevention. Some of our prior recommendations for reducing the program's vulnerability to fraud and abuse, including those concerning debarments and prosecutions of firms found to be misrepresenting their SDVOSB status, had not been implemented. As of January 4, 2013, we were reviewing documentation that VA had recently provided to determine whether VA's actions are sufficient to consider some of the recommendations implemented.

Operations. A major expansion of the verification program would have a greater chance at success if its priorities and operations were more stable and if the recent changes that VA adopted were shown to have improved the program's performance. For example, the steps VA has taken to standardize its procedures and make them more efficient, improve its quality assurance process, and enhance training of CVE employees and contractors are promising. However, it is too soon for us to test the effectiveness of these evolving procedures, and it is not clear whether VA will adopt further significant changes as a result of the recommendations of the senior executive task force reviewing the verification program.

Applicant education. Because a government-wide program would potentially affect thousands of additional firms, VA would need to have in place effective methods for educating business owners about the program and for obtaining and responding to their feedback. VA officials suggested that the agency's recent efforts to clarify online guidance for applicants and to partner with organizations to better educate applicants about the verification requirements were intended to help firms understand the rationale for the required documentation and explain how VA interprets the documents submitted in making its determinations. The officials described plans for collecting the data they would need to evaluate these efforts, which included assessing whether denial rates differed between firms that used the online guidance or received assistance from a partner organization and those that did not.

Information technology. As we noted earlier, the limitations of the verification program's information system (VIP/VCMS) have hampered VA's ability to effectively manage, monitor, and report on the program's operations and results. Addressing these limitations by, for example, ensuring that the information system collects the data needed to monitor the consistency and accuracy of VA's determinations, allows customized reporting to meet managers' needs, and supports efficient workflow management would also help position VA to manage an expanded government-wide program.
Furthermore, CVE and OI&T officials said that, in planning to modify or replace VIP/VCMS, they were factoring in the potential need for the system to have the capacity and flexibility to expand to a government-wide scale and to be adapted for automated interagency information sharing. For example, the officials said they were planning to consider how to enable contracting officers from other agencies to determine whether an SDVOSB was verified without having to manually search for the firm in VIP. In addition, VA has begun a process to revise the verification program's regulations, which would likely serve as the starting point if VA were charged with implementing a government-wide verification program. VA officials said that they were planning to revise the regulations partly in response to applicants' and veterans' organizations' concerns about VA's eligibility standards. For example, two veterans' organizations questioned VA's regulatory requirement that veteran owners be able to transfer their ownership interest without restriction by nonveteran owners, effectively suggesting that VA's standard for establishing control of a firm is too strict. The organizations stated that because nonveteran owners might reasonably expect to have a say in such transfers, the requirement limited the ability of SDVOSBs and VOSBs with nonveteran minority owners to participate in the Veterans First program. VA officials said that they would weigh this and other concerns as they developed proposed revisions to the regulation, a process that they expected to result in a final rule by mid-2014. Any changes to VA's verification requirements could create or widen differences between the various government-wide small business contracting programs' requirements and VA's, a consideration that would likely be of even greater importance if VA's verification program were expanded. In addition to the government-wide SDVOSB program, federal contracting preference programs give federal agencies the authority to set aside contracts for small business concerns and specific types of small businesses: women-owned small businesses, businesses located in historically underutilized business zones (HUBZone), and socially and economically disadvantaged small businesses participating in SBA's 8(a) program. While the SDVOSB and women-owned small business programs allow firms to self-certify their eligibility, SBA reviews supporting documentation to certify HUBZone and 8(a) firms, with the 8(a) program requiring more extensive documentation similar to what is required under VA's verification program. (See app. IV for a description of these programs and their verification requirements.) Some veterans' organizations and others with whom we spoke have cited perceived differences between VA's eligibility standards and SBA's standards for the government-wide SDVOSB program and the 8(a) program, whose certification process is most similar to VA's verification program. However, VA and SBA officials worked together to compare the three programs' regulations and VA's and the 8(a) program's documentation requirements. Initially, VA and SBA officials told us that they did not find major differences in the programs' regulatory eligibility requirements, the agencies' interpretation of them, or the documentation requirements for verification.
In commenting on a draft of this report, SBA subsequently stated that, while the wording of the regulations pertaining to eligibility requirements was comparable, there was a distinction regarding ownership by spouses of disabled veterans. SBA also stated in its comment letter that there were some key differences in how the agencies interpreted the regulations and that the agencies were consulting with one another to determine whether those differences could or should be resolved. Going forward, if VA adopts unilateral changes to its verification policies and procedures, these changes could have the effect of making it more difficult to align the programs. VA officials told us that the tension between competing calls for VA to ease its requirements and to be consistent with the government-wide SDVOSB and 8(a) programs would be a major consideration as VA considered changes to its regulations—particularly considering the potential for a government-wide SDVOSB verification program. Accordingly, the officials said that they were consulting with SBA as they began to develop proposed changes to VA's verification program regulation.

The opportunity to receive set-aside or sole-source contract awards under the Veterans First program is a significant benefit that provides billions of dollars in contracts annually to SDVOSBs and VOSBs. As a result, the program warrants strong internal controls to provide reasonable assurance that the contracts VA enters into are awarded to eligible firms. At the same time, an inherent tension exists between the need for effective internal controls and the Veterans First program's goal of increasing contracting opportunities for SDVOSBs. If VA fails to correctly verify eligible firms, or if firms' concerns about the verification process deter them from applying, VA's ability to sustain its high levels of contracting with SDVOSBs and VOSBs could ultimately be at risk. VA has made progress toward reducing its vulnerability to fraud and abuse, and CVE's new management team has initiated a variety of operational changes in an effort to improve the program. VA has also initiated efforts to develop a comprehensive strategic plan for the verification program. This initial strategic planning effort represents a positive step that appears to have at least partially applied key leading federal strategic planning practices. However, the initial plan includes only goals intended to be met within 2 years, and many of the performance measures focus on the implementation rather than the outcomes of activities. Additionally, VA has not shared the plan with key stakeholders. As it continues to develop and refine its strategic plan, VA could strengthen its effort by ensuring that the plan articulates results-oriented, long-term goals and objectives for the verification program, that the metrics are focused on outcome measurements that can be used to monitor the verification program's performance and demonstrate results, and that key stakeholders are involved in evaluating the plan. The initial lack of a comprehensive strategic plan for the verification program has also contributed to the development of a data system that has proven to be inadequate. The system does not collect data for monitoring program trends and staff performance, has limited reporting and workflow management capabilities, and has been unable to accept applications for extended periods, hindering VA's ability to operate and monitor the verification program.
VA has started taking steps to address the shortcomings in the data system by shifting responsibility for developing plans to enhance or replace VCMS from CVE to OI&T. But without tying that effort to long-term strategic planning, VA risks failing to meet the program's information needs going forward. As VA revises its verification program regulations and considers the relationship between its policies and those of other federal small business contracting preference programs, the agency faces a tension between competing calls to reduce the burden on applicants and to be vigilant in preventing and detecting fraud. This tension would underlie a government-wide SDVOSB verification program as well. Addressing these policy issues for its own program, or ultimately for a government-wide verification program, will require VA to weigh certain tradeoffs. These include deciding how to reduce the administrative burden that the verification process places on eligible firms while maintaining sufficient fraud prevention and detection controls to provide reasonable assurance that the billions of VA contract dollars set aside for SDVOSBs and VOSBs reach their intended beneficiaries.

To improve the management and oversight of VA's SDVOSB and VOSB verification program, we recommend that the Secretary of Veterans Affairs take the following two actions:

Direct OSDBU to continue to develop, refine, and implement a formal strategic plan to provide a comprehensive framework to guide, integrate, and monitor the verification program's activities over time. As OSDBU refines the strategic plan, it should incorporate longer-term goals and objectives for the verification program. The plan should also incorporate outcome measures that OSDBU can use to better monitor the verification program's progress and demonstrate its results. OSDBU should also share the plan with key stakeholders.

Direct OSDBU and OI&T, as they modify or replace the verification program's data system, to integrate their efforts with OSDBU's broader strategic planning effort for the verification program to ensure that the new system not only addresses the short-term needs of the program but also can be readily adapted to meet longer-term needs.

We provided a draft of this report to the Department of Veterans Affairs and the Small Business Administration for comment. In its written comments, VA generally agreed with GAO's conclusions and concurred with the two recommendations. VA stated that it had actions under way that would address each recommendation. VA indicated that it anticipated submitting a strategic plan to the Office of the Secretary in fiscal year 2013 and would develop a schedule to brief VA senior leaders and other key stakeholders once the plan is approved. VA also provided additional information about its efforts to replace the verification program's data system. VA noted that it had begun the process of replacing the existing system and had developed a work statement for the replacement system. VA also provided technical comments that we incorporated as appropriate into the report. In its technical comments, VA disagreed with the status of some of the prior GAO recommendations that we noted had not been fully implemented, including the provision of regular fraud awareness training and unannounced random and risk-based audits of verified firms to ensure compliance with the program rules.
We have revised the report to indicate that, as of January 4, 2013, we were reviewing documentation provided by VA in December 2012 to determine if VA’s actions taken to address some of our prior recommendations are sufficient to consider them implemented. We also noted that we will continue to review documentation provided by VA in the future to assess whether the remaining recommendations have been implemented. In its written comments, SBA provided additional information on its views on eligibility requirements for VA’s Veterans First Contracting Program, the government-wide SDVOSB contracting program, and the 8(a) program. In particular, SBA stated that a statement in our draft report was not accurate—specifically, our comment that VA and SBA did not find major differences in the programs’ eligibility requirements, the agencies’ interpretation of the requirements, or the documentation required for verification. SBA noted that, statutorily, surviving spouses of disabled veterans might be eligible for VA verification but that they were not eligible under SBA’s regulations for the government-wide SDVOSB program. SBA also noted that it provided an avenue of appeal through its SDVOSB status protest and 8(a) eligibility processes but that VA did not have a similar appellate procedure. Finally, SBA stated that the wording of the regulations pertaining to VA’s and SBA’s eligibility requirements was similar but that there were some key differences in interpretation that the two agencies were reviewing. We have revised our discussion of VA’s and SBA’s effort to compare the programs’ eligibility and documentation requirements, citing the difference noted by SBA with respect to the eligibility of surviving spouses and noting that the agencies were consulting with each other to determine whether differences of interpretation could or needed to be resolved. We also added clarifying language in appendix I describing how we obtained information on VA and SBA efforts to compare program regulations. In addition, we clarified the differences between SBA’s and VA’s status protest mechanisms in appendix IV. VA’s and SBA’s comments are reprinted in appendixes V and VI. We are sending copies of this report to the appropriate congressional committees, the Administrator of SBA, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Our objectives were to (1) describe and assess the progress that the Department of Veterans Affairs (VA) has made in establishing a program to verify the eligibility of service-disabled veteran-owned small businesses (SDVOSB) and veteran-owned small businesses (VOSB) on a timely and consistent basis, and (2) describe the key operational and policy issues that VA would need to address should its verification program be implemented government-wide. Because VA was introducing significant changes to its procedures and operations at the time of our study, we determined that evaluating VA’s compliance with its past procedures would be of limited value and that testing the effectiveness of verification procedures that were still evolving would be premature. 
We focused instead on issues related to planning for and designing the verification program and on changes in its management and operations that the Center for Veterans Enterprise (CVE) instituted since December 2011. We also reviewed the verification program's regulations (75 Fed. Reg. 6098 (Feb. 8, 2010); 77 Fed. Reg. 38181 (June 27, 2012)).

We analyzed data from VCMS on applications submitted to VA between November 1, 2011, and September 30, 2012, and VA's Verification Master Inventory List, a manually maintained inventory of all verified firms that VA uses to supplement VCMS. We assessed the data by interviewing knowledgeable VA officials, reviewing related documentation, and checking the data for illogical values or obvious errors and found them to be sufficiently reliable for the purpose of illustrating general characteristics of the verification program. We also interviewed officials from three of the contractors who perform aspects of the verification process—GCC Technologies, LLC; HeiTech Services, Inc.; and Addx Corporation—to understand their roles in the verification program, and representatives from three veteran service organizations and a technical assistance association that were participating in the Verification Counseling program—VETForce, American Legion, National Veteran Small Business Coalition, and Association of Procurement Technical Assistance Centers—to discuss their views on the verification program and their expectations of the Verification Counseling program.

To evaluate OSDBU's strategic planning effort, we drew on the Government Performance and Results Act of 1993 (GPRA) (Pub. L. No. 103-62 (August 3, 1993)), as amended by the GPRA Modernization Act of 2010 (GPRAMA) (Pub. L. No. 111-352 (Jan. 4, 2011)). GPRAMA provides federal agencies with an approach to focusing on results and improving government performance by, among other things, developing strategic plans. Examples of GPRAMA plan components include a mission statement; general goals and objectives, including outcome-oriented goals; and a description of how the goals and objectives are to be achieved, including the processes and resources required. We used six leading practices that we had previously identified as being relevant to agencies' initial strategic planning efforts. We reviewed a strategic planning document that OSDBU compiled in 2012 in response to our study, and six planning documents prepared between June 2011 and December 2011 that OSDBU officials said provided the basis for the strategic plan. We compared these documents, and the planning activities associated with them, to the six leading practices, as shown in table 1. Because VA prepared the initial strategic planning document as we were completing our draft report, we did not conduct a comprehensive review of the strategic plan, the supporting documents that VA provided, or the process that VA undertook to develop these documents.

We also assessed the extent to which the verification program's data system provided the information needed to run and control the verification program's operations, a key standard for effective internal controls. In particular, we focused on the timely availability of pertinent information sufficient to enable people to carry out their duties efficiently and effectively, a factor that we previously identified as important in assessing this standard. We reviewed data system documentation and reports that the system produces and interviewed officials from VA and the contractors that perform aspects of the verification process to determine how the data system was developed and how VA uses it, and to identify the capabilities and limitations of the data system.
For verified SDVOSBs that did not appear as self-certified in CCR as of March 2012, we cross-referenced the Small Business Administration's (SBA) Dynamic Small Business Search, which includes supplemental information on registered firms that meet SBA's size standard for the firms' industries. We excluded firms that had received VA contract obligations because such firms would need to be verified under VA's existing program regardless of whether a government-wide program was adopted. In addition, we could not determine how many of these firms have been actively seeking contracts or how likely they would be to do so in the future, making it difficult to predict how many would actually be motivated to seek verification if it were required. We assessed these data by interviewing VA officials knowledgeable about the VA data, reviewing documentation related to all of the data systems, and checking the data for illogical values or obvious errors and found them to be sufficiently reliable for the purpose of illustrating the potential scale of a government-wide verification program. We also reviewed our prior work on the verification program and that of the VA Office of Inspector General, as well as our assessment of the current status of the program, to identify issues that VA would need to address in implementing a government-wide program. Because of SBA's role administering the government-wide SDVOSB program, we also interviewed VA and SBA staff about how the statutory and regulatory provisions implemented by the two agencies compare. In addition, we reviewed SBA documents and interviewed SBA staff for their views on a potential government-wide verification program. However, the SBA staff said that it would be inappropriate for them to comment on VA's or SBA's potential roles or other considerations in implementing a potential program.

For both objectives, we interviewed officials in VA's Office of Small and Disadvantaged Business Utilization (OSDBU), CVE, Office of the General Counsel, and the Office of Information and Technology to understand their historical, current, and expected roles in the verification program. We also reviewed prior GAO reports and a VA Office of Inspector General report on the verification program and testimonies from congressional hearings on the government-wide SDVOSB program and VA's verification program. We conducted this performance audit from February 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We reviewed VA's database known as the Verification Case Management System (VCMS) to obtain data on the status of initial applications, requests for reconsideration, and applications for reverification submitted to VA between November 1, 2011, and September 30, 2012. We chose this period because fiscal year 2012 roughly coincided with the December 2011 to November 2012 period that was the focus of our work. Because we were primarily interested in the progress that VA had made processing applications that were submitted during the period that was the focus of our work, we excluded from our analysis applications VA processed during the period but that were submitted prior to November 1, 2011.
We used these data to determine the volume of applications that VA received between November 1, 2011, and September 30, 2012, and their status (pending, withdrawn, approved, or denied) as of September 30, 2012. Because our analysis included applications that had been submitted less than 90 days before the end of this period, we expected a significant number of the cases to be pending as of September 30, 2012. Based on our analysis of VCMS data, VA received approximately 4,900 initial applications between November 2011 and September 2012. The monthly volume of initial applications fluctuated during this period, with VA receiving an average of about 450 initial applications per month. As shown in figure 3, approximately 14 percent of the 4,900 initial applications submitted during the period were pending a determination as of September 30, 2012, and another 43 percent had been withdrawn. The remaining 43 percent of applications had received a determination and, of these, 61 percent were approved and 39 percent were denied. Applicants can withdraw their applications at any time, or VA can withdraw an application if the applicant does not respond to requests to provide missing or additional requested documentation within 30 days.
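Those percentages follow from a straightforward tabulation of application statuses. The sketch below uses invented counts chosen to be consistent with the figures above; they are not the actual VCMS counts.

```python
from collections import Counter

# Hypothetical counts consistent with the percentages reported above.
statuses = Counter(pending=686, withdrawn=2107, approved=1285, denied=822)

total = sum(statuses.values())                         # 4,900 applications
determined = statuses["approved"] + statuses["denied"]

print(f"pending:    {statuses['pending'] / total:.0%}")    # 14%
print(f"withdrawn:  {statuses['withdrawn'] / total:.0%}")  # 43%
print(f"determined: {determined / total:.0%}")             # 43%
print(f"approved, of determined: {statuses['approved'] / determined:.0%}")  # 61%
print(f"denied, of determined:   {statuses['denied'] / determined:.0%}")    # 39%
```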
Between December 2011 and October 2012, VA revised the organizational structure for the verification program. As shown in figures 6 and 7, VA officials (1) reorganized and increased the number of employees and contractors assigned to the verification process and (2) created several new teams, including quality assurance, training, records management, and customer service. As of October 2012, the verification program had about 28 full-time equivalent federal employees and 174 contractors, an increase of about 3 full-time equivalent staff and 64 contractors since December 2011.

All federal agencies have the authority to set aside contracts for small business concerns and for several specific types of small businesses: SDVOSBs, women-owned small businesses, businesses located in historically underutilized business zones (HUBZone), and socially and economically disadvantaged small businesses participating in SBA's 8(a) program (table 2). Some programs are also authorized to make sole-source awards to these groups. For the government-wide SDVOSB program, business owners are required only to certify their eligibility online in the System for Award Management (SAM) and do not need to submit any supporting documentation. SBA does not verify the eligibility of these firms. Women-owned small businesses may obtain certification by an entity approved by SBA or self-certify their eligibility online in SAM; in either case, the firms must upload supporting documents to SBA's online Women-Owned Small Business Program Repository for potential review by contracting officers or SBA. In contrast with these self-certification programs, SBA must certify firms' eligibility to receive contracts under the HUBZone and 8(a) programs. SBA reviews supporting documentation to certify HUBZone and 8(a) firms, with the 8(a) program requiring more extensive documentation that is similar to that required by CVE for its verification program. For each of the government-wide small business contracting preference programs except the 8(a) program, SBA provides a "status protest" mechanism for interested parties to a contract award to protest if they believe a firm misrepresented its eligibility in its bid submission. SBA's status protest mechanism for the SDVOSB and women-owned small business programs and its certification process for the 8(a) program also provide interested parties with an avenue of appeal to SBA's Office of Hearings and Appeals. However, VA's OSDBU decides any SDVOSB or VOSB status protests arising from a VA solicitation, and VA does not provide a similar appellate procedure for such decisions.

In addition to the contact named above, Harry Medina (Assistant Director), Emily Chalmers, Pamela Davidson, Julianne Dieterich, Julia Kennon, Cory Marzullo, John McGrail, Daniel Newman, Jena Sinkfield, James Sweetman, and William Woods made key contributions to this report.
VA is required to give contracting preference to service-disabled and other veteran-owned small businesses. It must also verify the ownership and control of these firms to confirm eligibility. Prior reports by GAO and VA's Office of Inspector General identified weaknesses in VA's processes and controls that allowed ineligible firms to be verified. GAO was asked to review the verification program. For this report, GAO assessed (1) VA's progress in establishing a program for verifying firms' eligibility on a timely and consistent basis and (2) key operational and policy issues that VA would have to address should its verification program be implemented government-wide. GAO reviewed VA's policies and procedures; compared its initial strategic planning effort with previously identified leading strategic planning practices; interviewed VA officials and veterans' organizations; and analyzed government-wide contracting databases. The Department of Veterans Affairs (VA) has made significant changes to its verification processes for service-disabled and other veteran-owned small businesses to improve operations and address program weaknesses, but continues to face challenges in establishing a stable and efficient program to verify firms on a timely and consistent basis. Since December 2011, VA has instituted a number of significant operational changes, including revising standard operating procedures and enhancing quality assurance protocols for its verification program. However, GAO found that VA did not have a comprehensive, long-term strategic plan for the program and had prioritized addressing immediate operational challenges, contributing to programmatic inefficiencies. In response to this observation, VA's Office of Small and Disadvantaged Business Utilization (OSDBU) initiated action in late October 2012 to compile a strategic planning document that encompassed the verification program. VA's OSDBU appears to have partially applied key leading strategic planning practices in its initial planning effort. But the plan lacks performance measures to assess whether the desired outcomes are being achieved and has a short-term focus that is not typically associated with a strategic plan. VA also has not shared the plan with key stakeholders, including congressional staff. Further, the verification program's data system has shortcomings that have hindered VA's ability to operate, oversee, and monitor the program. Among other things, the system does not collect important data and has limited reporting and workflow management capabilities. VA plans to modify or replace the system, but has not directly tied this effort into its long-term strategic planning efforts to ensure that the new system meets the verification program's long-term information needs. Expanding VA's verification program to support the government-wide contracting program for service-disabled, veteran-owned small businesses would require VA to improve its verification process and address a number of operational and policy issues. GAO estimated that between about 3,600 and 16,400 currently self-certified firms could seek verification under an expanded program, but VA has experienced ongoing challenges verifying the volume of firms currently participating in the program. 
GAO's prior and current work indicates that VA would need to further reduce its program's vulnerability to fraud and abuse, demonstrate whether recent operational changes have improved performance, have in place effective methods for educating applicants, and address the limitations of the program's data system in order to expand successfully. Also, VA has begun a process to revise the verification program's regulations, partly in response to concerns about VA's eligibility standards being too stringent. However, any changes to VA's verification requirements could create or widen differences between the various government-wide small business contracting programs' requirements and VA's, a consideration that would likely be of even greater importance if VA's verification program were expanded. Addressing these issues for its own program--or ultimately for a government-wide program--requires weighing tradeoffs between reducing the burden of verification on eligible firms and providing reasonable assurance that contracting preferences reach their intended beneficiaries. To improve the long-term effectiveness of the program, VA should (1) refine and implement a strategic plan with outcome-oriented long-term goals and performance measures, and (2) integrate its efforts to modify or replace the program's data system with a broader strategic planning effort to ensure that the system addresses the program's short- and long-term needs. VA concurred with both recommendations.
To help measure the quality of the 2000 Census and to possibly adjust for any over- or undercounts of various demographic groups, the Bureau designed the A.C.E. program, a separate and independent sample survey conducted as part of the 2000 Census. When matched to the census data, A.C.E. data were to enable the Bureau to use statistical estimates of net coverage errors to adjust final census tabulations. However, in March 2003, after much research and deliberation, the Bureau decided against using any A.C.E. estimates of coverage error to adjust the 2000 Census, because of several methodological concerns. The Bureau measured the accuracy of the 1990 Census as well, and recommended statistically adjusting the results. However, the Secretary of Commerce determined that the evidence to support an adjustment was inconclusive and decided not to adjust the 1990 Census. In 1999 we examined how these statistical population estimates might have redistributed federal assistance among the states had they been used to calculate formula grants. Looking toward the 2010 Census, the Bureau plans to use statistical population estimates to (1) produce estimates of components of census net and gross coverage error (the latter includes misses and erroneous enumerations) in order to assess accuracy, (2) determine whether the strategic goals of the census are met, and (3) identify ways to improve the design of future censuses. The Bureau does not plan to use statistical estimates of the population for adjusting census data based on its belief that the 2000 Census demonstrated “that the science is insufficiently advanced to allow making statistical adjustment to population counts of a successful decennial census in which the percentage of error is presumed to be so small that adjustment would introduce as much or more error than it was designed to correct.” In fiscal year 2004, the federal government administered 1,172 grant programs, with $460.2 billion in combined obligations. However, as shown in table 1, most of these obligations were concentrated in a small number of grants. For example, Medicaid was the largest formula grant program, with federal obligations of $183.2 billion, or nearly 40 percent of all grant obligations, in fiscal year 2004. The top 20 grant programs accounted for around two-thirds of all federal grant obligations, with $307.9 billion obligated in fiscal year 2004 (SSBG is not included in table 1, because with obligations of $1.7 billion, it is not among the top 20 formula grant programs). Based on our simulations, recalculating allocations of key programs using statistical population estimates would have shifted less than 0.25 percent of the $161.4 billion in Medicaid and SSBG formula grant funding among the states. The two key programs analyzed—Medicaid and SSBG—together received federal allocations of $161.4 billion in fiscal year 2004. Federal allocations for Medicaid (excluding such administrative costs as processing, making payments to service providers, and monitoring the quality of services to beneficiaries) were $159.7 billion, by far the highest federal allocation in fiscal year 2004. Using statistical population estimates to recalculate federal allocations to states, 0.23 percent of the $159.7 billion in federal Medicaid funds would have shifted in fiscal year 2004, and 0.25 percent of the $1.7 billion in SSBG funds would have shifted in fiscal year 2005, as a result of the simulations.
(Appendix IV contains tables showing the difference between using estimated and actual population data from the 1990 and 2000 Censuses for Medicaid and SSBG.) Because the two programs allocate state funding using different formulas, funding reallocations for the two programs may produce results that are different from one another for a particular state. For example, using the 2000 statistical population estimates, which were lower for Minnesota than the official census population count, Minnesota’s Medicaid allocation would have remained the same. This is because Medicaid allocations are subject to a floor, and Minnesota was already receiving the minimum required reimbursement. However, Minnesota would have lost funding under SSBG, because the lower statistical population estimates, and the subsequent recalculations, would have reduced its allocation. In another example, the District of Columbia allocation would have remained the same for 2000 under Medicaid, because the District of Columbia receives a special rate that is higher than its calculated rate, but it would have gained funding under SSBG because its official population count from the 2000 Census was lower than the statistical population estimate. (For information on how these formulas are calculated, see app. I.) Using statistical population estimates to recalculate federal Medicaid allocations, 0.23 percent of the $159.7 billion in federal Medicaid funds would have shifted overall in fiscal year 2004 as a result of the simulation. Of the overall allocation of $159.7 billion in federal funds, 22 states would have received more Medicaid funding, 17 states would have received less, and 11 states and the District of Columbia would have received the same. The gaining states would have received an additional $208.5 million, and the losing states would have received $368 million less in funding. Based on our simulation of the formula funding for Medicaid, Nevada would have gained 1.47 percent in grant funding and Wisconsin would have lost 1.46 percent. (Appendix IV contains tables showing the difference between using estimated and actual population data from the 1990 and 2000 Censuses to recalculate Medicaid allocations.) Figure 1 shows the state-by-state result—gain or loss—of recalculated Medicaid grant funding using the statistical population estimates. Most of the estimated increases in state allocations would have been concentrated in the northwestern, southwestern, and southeastern regions of the country, as well as Hawaii and Alaska. Most of the estimated decreases in state allocations would have been concentrated in the northcentral region of the country. The southeastern and northeastern regions would have experienced both increases and decreases in funding, and all southeastern states except Florida would have experienced increases. Figure 2 shows how much (as a percentage) and where Medicaid funding would have shifted as a result of using statistical population estimates for recalculating formula grant funding by state. We estimate that 20 states would have received an increase in allocations of more than 0 to less than 1 percent, while 2 states would have increased by more than 1 percent.
Conversely, 7 states would have experienced a decrease in allocations of greater than 1 to less than 1.5 percent; 10 states’ allocations would have decreased by more than 0 to less than 1 percent; and 11 states and the District of Columbia would have experienced no change because their recalculated rates would have fallen below the floor or above the ceiling that are built into the FMAP formula. Using statistical population estimates to recalculate federal SSBG allocations, 0.25 percent of the $1.7 billion in SSBG funds would have shifted in fiscal year 2005 as a result of the simulation. The total $1.7 billion SSBG allocation would not have changed, because SSBG receives a fixed annual appropriation. In other words, any additional funds received by some states would have been offset by reductions to other states. In short, 27 states and the District of Columbia would have gained a total of $4.2 million, and 23 states would have lost a total of $4.2 million. Based on our simulation of the formula funding for SSBG, Washington, D.C. would have gained 2.05 percent in grant funding and Minnesota would have lost 1.17 percent. (Appendix IV contains tables showing the difference between using estimated and actual population data from the 1990 and 2000 Censuses for SSBG funding.) Figure 3 shows the state-by-state result—gain or loss—of recalculated SSBG grant funding using statistical population estimates. Because the reallocations are based on the same census statistical population estimates as the Medicaid estimated reallocations, most of the estimated increases in state allocations would have been concentrated in the southeastern, southwestern, and northwestern regions of the country, as they were in our Medicaid simulation. The estimated decreases would have been grouped in the northcentral region and several states of the northeastern region of the country. The northeastern region would also have experienced both increases and decreases in funding. Figure 4 shows how much (as a percentage) and where SSBG funding would have shifted as a result of using statistical population estimates for recalculating formula grant funding by state. By recalculating SSBG state allocations using the statistical population estimates for states based on 2003 Census population numbers, we estimate that 27 states would have experienced an increase of more than 0 to less than 1 percent; the District of Columbia would have increased by more than 2 percent; 2 states’ allocations would have decreased by more than 1 percent; and 21 states’ allocations would have decreased by more than 0 to less than 1 percent. For the Medicaid program, recalculating state allocations using statistical population estimates based on the 2000 Census would have changed the funding for 39 states in fiscal year 2004. In particular, 22 states would have increased their allocations by $208.5 million, 17 states would have decreased them by $368.0 million, and 11 states and the District of Columbia would have had no change. By contrast, if state allocations had been recalculated using statistical population estimates based on the 1990 Census, the number of states with funding changes would have remained the same in fiscal year 1997, but the amounts shifting among the states would have differed. Table 2 presents the comparative information from the two analyses.
The allocations gained by the gaining states would have decreased by almost 50 percent, from $402.4 million for the 1990 Census to $208.5 million for the 2000 Census, while the allocations lost by the losing states would have increased by 7 percent, from $344.6 million to $368.0 million. While total allocations under the Medicaid program increased by over 75 percent from fiscal year 1997 to fiscal year 2004, the relative (percentage) change in state funding would have decreased in our simulated recalculations of state allocations using statistical population estimates. We have a similar finding for the SSBG program: recalculating state allocations using statistical population estimates based on the 2000 Census would have resulted in a smaller change in allocations than recalculating them using estimates based on the 1990 Census. The change in funding would have been reduced by half using the statistical population estimates based on the 2000 Census. Total SSBG state allocations decreased by 26 percent between fiscal year 1998 and fiscal year 2005, and the percentage shift in funding would also have been reduced, from 0.37 percent to 0.25 percent, using the statistical population estimates based on the 2000 Census. In summary, using the statistical population estimates based on the 2000 Census to recalculate Medicaid and SSBG allocations would have resulted in a smaller shift in program funding than using the statistical population estimates based on the 1990 Census. This is because the difference between the actual and estimated population counts was smaller for the 2000 Census than for the 1990 Census. As mentioned earlier, the recalculated allocations are the result of simulations using statistical population estimates and were done for the purpose of illustrating the sensitivity of these two formula grant programs to alternative population estimates. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its issuance date. At that time we will send copies of the report to other interested congressional committees, the Secretary of Commerce, the Secretary of Health and Human Services, the Director of the U.S. Census Bureau, and the Director of the Office of Management and Budget. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me on (202) 512-6806 or by email at [email protected]. GAO staff who made major contributions to this report are listed in appendix VI. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. As agreed with your offices, we identified (1) the top 20 formula grant programs based on the amount of funds targeted by any means, and (2) how much money was allocated using census data for certain formula grant programs, and the prospective impact of using estimated population counts from the 1990 and 2000 Censuses to recalculate state allocations for these grant programs. We use the term “allocation” to include Department of Health and Human Services (HHS) reimbursement to states of Medicaid expenditures subject to the Federal Medical Assistance Percentage (FMAP) formula and Social Services Block Grant (SSBG) state allotments.
We use the term “statistical population estimates” to refer to the results of the coverage measurement programs that the Census Bureau (Bureau) conducted following the 1990 and 2000 Censuses. To identify the top 20 formula grant programs based on the amount of funds targeted by any means, we used fiscal year 2004 grants expenditure and obligations data from the Bureau’s Consolidated Federal Funds Report (CFFR), the most recent data available at the time of our review. While we recently reported on inaccuracies in the CFFR, we determined that the CFFR is adequate for purposes of identifying the top 20 federal formula grant programs because it shows the overall magnitude of these programs. Because the CFFR lists direct expenditures or obligations, the amount shown for Medicaid in table 1 is different from the Medicaid allocations shown in the rest of the report, where we use state expenditure data subject to the FMAP formula, which exclude administrative costs. Administrative costs for which Medicaid reimburses states include nine broad tasks: (1) inform potentially eligible individuals and enroll those who are eligible, (2) determine what benefits it will cover in what settings, (3) determine how much it will pay for the benefits it covers and from whom to buy those services, (4) set standards for providers and managed care plans from which it will buy covered benefits and contract with those who meet the standards, (5) process and make payments to service providers, (6) monitor the quality of services to beneficiaries, (7) ensure that state and federal health care funds are not spent improperly or fraudulently, (8) have a process for resolving grievances, and (9) collect and report information for effective administration and program accountability. To determine how much money was allocated using census population counts for Medicaid and SSBG, we obtained population and income data from the Department of Commerce (Commerce). Additionally, we obtained Medicaid expenditures, SSBG allocations, and certain other information from HHS. Table 3 displays the census population counts for 1990 and 2000 and their statistical estimates. We obtained state per capita income—the ratio of personal income to population—for 2000, 2001, and 2002 from Commerce and replicated the actual FMAP for 2005 using fiscal year 2004 state expenditure data. For the SSBG state allocation formula, we obtained state population estimates for 2003 and replicated the SSBG allocations for 2005. The official 1990 Census population counts and statistical population estimates from the 1990 coverage measurement program known as the Post-Enumeration Survey (PES) come from our earlier report. To analyze the prospective impact of estimated population counts on the money allocated to the states through these two grant programs, we recalculated the state allocations using statistical estimates of the population that were developed for the 1990 and 2000 Censuses in lieu of the actual census numbers. We used the population estimates, which are based on the 2000 Census counts, and then adjusted these population estimates by the difference between the 2000 official population counts and the statistical estimates of the population (A.C.E.). Our procedure to simulate the formula allocations using adjusted counts was to (a) obtain the population estimates used to calculate the Medicaid FMAP and SSBG allocations, (b) subtract the A.C.E. 
population estimates from the official 2000 Census population counts, and (c) add the difference from (b) to the population estimates from (a). We included the 50 states and the District of Columbia in our calculations, but did not include the territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands, because their allocations use formulas that are different from those used by the 50 states we analyzed. To verify our approach, we spoke with Department of Commerce and Department of Health and Human Services officials who administer these grant programs about the procedures they use to calculate the formula funding amounts. Importantly, our analyses of Medicaid and SSBG are simulations and were conducted only to illustrate the sensitivity of these two grant programs to alternative population estimates. Both the Census Bureau and GAO deem the 1990 and 2000 statistical population estimates unreliable, and they should not be used for any purposes that legally require data from the decennial census. Medicaid is an entitlement program. The federal share of total Medicaid program costs is determined using the FMAP, a statutory formula that calculates the portion of each state’s Medicaid expenditures that the federal government will pay. Our Medicaid simulation uses the fiscal year 2005 FMAP, which applies 2000 through 2002 personal income and population data, and fiscal year 2004 expenditure data. The formula calculates the federal matching rate for each state on the basis of its per capita income (PCI) in relation to national PCI. States with a low PCI receive a higher federal matching rate, and states with a high PCI receive a lower rate. If applying the formula renders a state’s reimbursement less than 50 percent of its allowable expenditures, the state is still entitled to be reimbursed for a minimum of 50 percent—the “floor”—of what it spent. Conversely, a state cannot be reimbursed for more than 83 percent of allowable expenditures—the “ceiling.” Thus, if one used the A.C.E. statistical estimates to recalculate state Medicaid allocations, states’ reimbursements for allowable expenditures would not be less than the 50 percent floor or more than the 83 percent ceiling. Our calculations do not include administrative costs, because they are not subject to the FMAP formula. The Medicaid data we used in our calculations include the Indian Health and the Family Planning programs, which are not subject to the allocation formula. Agency officials told us that the expenditures for these two programs are so small in relation to the total Medicaid expenditures that they do not materially affect the calculations of state allocations subject to the FMAP formula. The SSBG federal grant is for a fixed amount determined in an annual appropriation, and its formula is set up so that an increase in funding to any state is offset by a decrease to others. To estimate the prospective impact of using statistical population estimates to recalculate allocations for SSBG, we used 2003 population data, adjusted by the difference between the 2000 Census counts and the A.C.E. estimates, and fiscal year 2005 allocations to the states—the data HHS used in its fiscal year 2005 grant allocations to the states. Unlike Medicaid, SSBG includes administrative costs in its population-based formula to calculate state allocations.
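To make the mechanics of this simulation concrete, the following sketch restates steps (a) through (c) in code. It is a minimal illustration under stated assumptions rather than the actual computation behind this report: the state names and population figures are invented placeholders, and the subtraction order simply follows the wording of the procedure above.

    # Minimal sketch of the simulation procedure described above (Python).
    # All figures are hypothetical placeholders, not data from this report.

    def adjusted_population(base_estimate, official_2000, ace_2000):
        """Steps (b) and (c): take the difference between the official 2000
        Census count and the A.C.E. statistical estimate, and add it to the
        base population estimate from step (a)."""
        return base_estimate + (official_2000 - ace_2000)

    # Step (a): base population estimates used in the allocation formulas
    # (for example, the 2003 estimates used for SSBG), keyed by state.
    states = {
        "State A": {"base": 5_000_000, "official": 4_950_000, "ace": 4_990_000},
        "State B": {"base": 2_000_000, "official": 2_010_000, "ace": 1_995_000},
    }

    for name, data in states.items():
        adjusted = adjusted_population(data["base"], data["official"], data["ace"])
        print(f"{name}: base={data['base']:,} adjusted={adjusted:,}")

Recalculating each formula with the adjusted populations, and comparing the results with the official-count allocations, yields the state-by-state shifts reported above.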
Program Objectives: To provide financial assistance to states for payment of medical care on behalf of cash assistance recipients, children, pregnant women, and the aged who meet income and resource requirements, and other categorically eligible groups.

Federal Agency: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services.

Fiscal Year 2004 Obligations: $183.2 billion. (Federal allocations excluding administrative costs: $159.7 billion.)

Formula Calculation: Eligible medical expenses are reimbursed based on the per capita income of the state. The federal reimbursement rate, known as the Federal Medical Assistance Percentage (FMAP), ranges from a minimum of 50 percent to a maximum of 83 percent. Most administrative expenses are reimbursed at a flat rate of 50 percent but may be as high as 100 percent, as is the case with immigration status verification.

Formula Constraints: No state may receive a matching percentage below 50 percent or in excess of 83 percent.

Formula Variables: FMAP = Federal Medical Assistance Percentage; PCI = per capita personal income (PI divided by Pop); PI = personal income; Pop = state population.

Data Sources: PI and Pop: Department of Commerce, Bureau of Economic Analysis, and Census Bureau.

Amount Shifted: $368 million, or 0.23 percent of the total $159.7 billion allocated among the states, as a result of the simulation.

Comments: Allotment amounts were calculated for fiscal year 2004, the latest year for which data were available. Total federal allotment includes some amounts for Family Planning and Indian Health Services that are not subject to the FMAP. We use the term “allocation” to include HHS reimbursement to states of Medicaid expenditures subject to the federal FMAP formula (net of administrative costs).

Program Objectives: To enable states to provide social services directed toward the following goals: (1) reducing dependency; (2) promoting self-sufficiency; (3) preventing neglect, abuse, or exploitation of children and adults; (4) preventing or reducing inappropriate institutional care; and (5) securing admission or referral for institutional care when other forms of care are not appropriate.

Federal Agency: Department of Health and Human Services, Administration for Children and Families.

Fiscal Year 2004 Obligations: $1.7 billion.

Formula Calculation: State funding is allocated in proportion to each state’s share of the national population.

Formula Constraints: None.

Formula Variables: Amt = funds available for allocation to states; Pop = a state’s population count.

Data Sources: Amt: Department of Health and Human Services, Administration for Children and Families; Pop: Department of Commerce, Census Bureau.

Amount Shifted: $4.2 million, or 0.25 percent of the total $1.7 billion allocated. The Social Services Block Grant (SSBG) federal grant is for a fixed amount determined in an annual appropriation; an increase in funding to any state is offset by a decrease in others.

Comment: We use the term “allocation” to include SSBG state allotments. SSBG state allotments are based on each state’s population in proportion to the total U.S. population.

In addition to the individual named above, Robert Goldenkoff, Assistant Director, as well as Faisal Amin, Robert Dinkelmeyer, Carlos Diz, Gregory Dybalski, Amy Friedlander, and Sonya Phillips made key contributions to this report.
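The two formula descriptions above reduce to a few lines of code. The sketch below is illustrative only: the quadratic form of the FMAP (1 minus 0.45 times the squared ratio of state PCI to national PCI) is the statutory formula from the Social Security Act, which the text above summarizes only as a floor and a ceiling, and all numeric inputs are invented.

    # Illustrative implementations of the two formulas described above (Python).
    # The 1 - 0.45 * (PCI ratio)**2 form is the statutory FMAP formula
    # (Social Security Act, sec. 1905(b)); the text above states only the
    # 50-percent floor and 83-percent ceiling. All inputs are hypothetical.

    def fmap(state_pci, national_pci):
        """Federal Medical Assistance Percentage for one state."""
        rate = 1.0 - 0.45 * (state_pci / national_pci) ** 2
        return min(max(rate, 0.50), 0.83)  # apply the floor and ceiling

    def ssbg_allotments(total, populations):
        """SSBG: each state's share of the fixed appropriation equals its
        share of the national population, so gains and losses net to zero."""
        national = sum(populations.values())
        return {state: total * pop / national
                for state, pop in populations.items()}

    # A low-PCI state draws a high matching rate; a high-PCI state is
    # raised to the 50-percent floor.
    print(round(fmap(25_000, 31_000), 4))   # 0.7073
    print(fmap(45_000, 31_000))             # 0.5
    print(ssbg_allotments(1_700_000_000,
                          {"State A": 5_000_000, "State B": 2_000_000}))

Because the SSBG appropriation is fixed, any population adjustment that raises one state's share necessarily lowers the shares of others, which is why the gains and losses in the SSBG simulation each sum to $4.2 million.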
Decennial census data need to be as accurate as possible because the population counts are used for, among other purposes, allocating federal grants to states and local governments. The U.S. Census Bureau (Bureau) used statistical methods to estimate the accuracy of 1990 and 2000 Census data. Because the Bureau considered the estimates unreliable due to methodological uncertainties, they were not used to adjust the census results. Still, a key question is how sensitive federal formula grants are to alternative population estimates, such as those derived from statistical methods. GAO was asked to identify (1) the top 20 formula grant programs based on the amount of funds targeted by any means, and (2) the amount of money allocated for Medicaid and the Social Services Block Grant (SSBG), and the prospective impact of estimated population counts from the 1990 and 2000 Censuses on state allocations for these two programs. Importantly, as agreed, GAO's analysis only simulates the formula grant reallocations. We used fiscal year 2004 Medicaid state expenditure data and fiscal year 2005 SSBG state allocation data, the most recent data available. In fiscal year 2004, the top 20 formula grant programs together had $308 billion in obligations, or 67 percent of the total $460.2 billion obligated by the 1,172 federal grant programs. Medicaid was the largest formula grant program, with obligations of $183.2 billion, or nearly 40 percent of all grant obligations. The federal government allocated $159.7 billion to states in Medicaid funds (not including administrative costs such as processing and making payments to service providers) and $1.7 billion in SSBG funds. Recalculating these allocations using statistical population estimates from the Accuracy and Coverage Evaluation and the Post-Enumeration Survey--independent sample surveys designed to estimate the number of people who were over- and undercounted in the 2000 and 1990 Censuses--would have produced the following results. First, a total of 0.23 percent ($368 million) of federal Medicaid funds would have shifted among the states in fiscal year 2004, and 0.25 percent ($4.2 million) of SSBG funds would have shifted among the states in fiscal year 2005, as a result of the simulations using statistical population estimates from the 2000 Census. Second, with respect to Medicaid, 22 states would have received additional funding, 17 states would have received less funding, and 11 states and the District of Columbia would have received the same amount of funding using statistical population estimates from the 2000 Census. Based on a fiscal year 2004 federal Medicaid allocation to the states of $159.7 billion, Nevada would have been the largest percentage gainer, with an additional 1.47 percent in funding, and Wisconsin would have lost the greatest percentage--1.46 percent. Third, with respect to SSBG, 27 states and the District of Columbia would have gained funding, and 23 states would have lost funding using statistical population estimates from the 2000 Census. Based on a fiscal year 2005 SSBG allocation of $1.7 billion, Washington, D.C. would have been the biggest percentage gainer, receiving an additional 2.05 percent in funding, while Minnesota would have lost the greatest percentage of funding--1.17 percent.
Fourth, statistical population estimates from the 2000 Census would have shifted a smaller percentage of funding compared to those using the 1990 Census because the difference between the actual and estimated population counts was smaller in 2000 compared to 1990.
Since 1974, the SSI program, under title XVI of the Social Security Act, has provided benefits to low-income blind and disabled persons—adults and children—who meet financial eligibility requirements and SSA’s definition of disability. SSA determines applicants’ financial eligibility; state disability determination services (DDS) determine their medical eligibility. DDSs are state agencies that are funded and overseen by SSA. To meet the financial test, children must be in families with limited incomes and assets. In 1994, children’s federally administered SSI payments totaled $4.52 billion. Depending on the family’s income, an eligible child can receive up to $458 per month in federal benefits; 27 states also offer a supplemental benefit payment. Because SSI is an individual entitlement, no family cap exists on the amount of benefits received in a household. With SSI eligibility usually come other in-kind benefits, most notably Medicaid and Food Stamps. The Social Security Act defines disability as the inability “to engage in any substantial gainful activity by reason of any medically determinable physical or mental impairment which can be expected to last a continuous period of not less than twelve months.” Because children are not expected to work, however, this definition is not applicable for measuring disability in children. At a DDS, childhood disability determinations are made by an adjudication team consisting of an examiner and a medical consultant. For mental impairments, the consultant must be a psychiatrist or child psychologist. The examiner collects all medical evidence—physical and mental—either from medical sources who have treated the applicant or from an independent consultant if more medical information is needed. The examiner supplements the medical information with accounts of the child’s behavior and activities from the child’s teachers, parents, and others knowledgeable about the child’s day-to-day functioning. Working together, the DDS adjudication team determines whether the applicant’s medical condition matches or is equivalent to an impairment found in SSA’s listing of medical impairments. If so, benefits are awarded. If, however, the applicant’s condition is not severe enough to meet or equal the severity criteria in SSA’s medical listings, the team uses the evidence to perform an individualized functional assessment (IFA). If the IFA shows the child’s impairment substantially reduces his or her ability to function age-appropriately, benefits are awarded. If not, a denial notice is issued, and applicants are informed of their appeal rights. During a 2-month period, SSA issued two sets of new regulations that significantly changed the criteria for determining children’s eligibility for SSI disability benefits. One set of regulations, issued in accordance with the Disability Benefits Reform Act of 1984 (DBRA), revised and expanded SSA’s medical listings for evaluating mental impairments in children to incorporate recent advances in medicine and science. The second set of regulations was issued in response to the Sullivan v. Zebley Supreme Court decision, which required SSA to make its process for determining disability in children analogous to the adult process. Both sets of regulations placed more emphasis on assessing how children’s impairments limit their ability to act and behave like unimpaired children of similar age. Both also emphasized the importance of obtaining evidence from nonmedical sources as part of this assessment. SSA issued new regulations in accordance with DBRA on December 12, 1990.
These new regulations revised and expanded SSA’s medical listings for childhood mental impairments to reflect up-to-date terminology used by mental health professionals and recent advances in the knowledge, treatment, and methods of evaluating mental disorders in children. The new medical listings for mental impairments provided much more detailed and specific guidance on how to evaluate mental disorders in children than the former regulations, which were published in 1977. In particular, the new medical listings placed much more emphasis on assessing how a child’s mental impairment limits his or her ability to function in age-appropriate ways. SSA made this change because mental health professionals consider functional factors particularly important in evaluating the mental disorders of children. The former medical listings for mental impairments emphasized the medical characteristics that must be met to substantiate the existence of the impairment. Specific areas of functioning were only sometimes mentioned as a factor in this determination. In contrast, the new medical listings provide much more detailed guidance on assessing the functional aspects of each impairment. The standard for most impairments is divided into two parts: medical and functional criteria, both of which must be satisfied for the applicant to qualify for a benefit. The functional criteria are described in terms of the age of the child and the specific areas of functioning—such as social, communication/cognition, or personal/behavioral skills—that must be assessed. The new medical listings emphasize the importance of parents and others as sources of nonmedical information about a child’s day-to-day functioning. In general, the childhood mental listings require children over 2 years old to have marked limitations in two of the four areas of functioning to qualify for benefits. Further, when standardized tests are available, the listing defines the term “marked” as a level of functioning that is two standard deviations below the mean for children of similar age (on a test normed to a mean of 100 with a standard deviation of 15, for example, a score of 70 or below). The new medical listings also classified childhood mental disorders into more distinct categories of mental impairments. Previously, 4 impairments were listed—mental retardation, chronic brain syndrome, psychosis of infancy and childhood, and functional nonpsychotic disorders; now there are 11. Several of the newly listed impairments, such as autism and other pervasive developmental disorders, mood disorders, and personality disorders, describe impairments that were previously evaluated under one or more of the four broader categories of childhood mental impairments. Several other impairments are mentioned for the first time, such as attention deficit hyperactivity disorder and psychoactive substance dependence disorders. In its Sullivan v. Zebley decision, the Supreme Court held that SSA’s process for determining childhood disability “does not account for all impairments ‘of comparable severity’, and denies child claimants the individualized functional assessment that the statutory standard requires . . . .” To determine adults’ eligibility for disability benefits, SSA uses a five-step sequential evaluation process. Before Zebley, it used only a two-step process to determine children’s eligibility for benefits. (See fig. 1.) Children were awarded benefits only if their impairments met or equaled the severity criteria in SSA’s medical listings. All other children were denied benefits.
In contrast, adults whose conditions were not severe enough to qualify under the medical listings could still be found eligible for benefits if an assessment of their residual functional capacity (RFC) showed that they could not engage in substantial work. No analogous assessment of functioning was done for children who did not qualify under the medical listings. The Court described the required functional assessment for children as “an inquiry into the impact of an impairment on the normal daily activities of a child of the claimant’s age—speaking, walking, dressing and feeding oneself, going to school, playing, etc.” Although the Court required the functional assessment, it did not define the degree of limitation necessary to qualify for benefits, except by analogy to the adult definition of disability. To implement the Zebley decision, SSA convened a group of experts in April 1990 to help formulate new regulations using age-appropriate functional criteria. Included were experts in general and developmental pediatrics, child psychology, learning disorders, and early and adolescent childhood education, as well as advocates from groups such as Community Legal Services in Philadelphia (plaintiff’s counsel in the Zebley case), the Association for Retarded Citizens, and the Mental Health Law Project. SSA also consulted with its regional offices and the state DDSs. Building on the functional criteria added to the listings after DBRA, SSA issued regulations implementing the Supreme Court’s decision on February 11, 1991. According to these regulations, for the child to be eligible for disability benefits, the IFA must show that the child’s impairment or combination of impairments limits his or her ability “to function independently, appropriately, and effectively in an age-appropriate manner.” Specifically, the impairment must substantially reduce the child’s ability to grow, develop, or mature physically, mentally, or emotionally to the extent that it limits his or her ability to (1) attain age-appropriate developmental milestones; (2) attain age-appropriate daily activities at home, school, play, or work; or (3) acquire the skills needed to assume adult roles. Although SSA officials describe these as state-of-the-art criteria for assessing children’s functioning, they concede that many of these concepts are not clear-cut. As a result of these regulations, DDSs now perform IFAs to assess the child’s social, communication, cognitive, personal and behavioral, and motor skills, as well as his or her responsiveness to stimuli and ability to concentrate, persist at tasks at hand, and keep pace. Like the DBRA regulations, the IFA process requires DDSs to supplement medical information with information about the child’s behavior and activities from the child’s teachers, parents, and others knowledgeable about the child’s day-to-day functioning in order to make these assessments. Generally, if the IFA shows that a child has a moderate limitation in three areas of functioning or a marked limitation in one area and a moderate limitation in another, benefits are awarded. In contrast, the more restrictive functional criteria under SSA’s mental listings require two marked limitations. In addition to measuring functioning as part of the IFA process, the Zebley regulations added the concept of functional equivalence to SSA’s medical listings. Before Zebley, a child qualified for benefits only if his or her impairment met or was medically equivalent to the severity criteria in the listings.
After Zebley, a child could qualify if his or her impairment was functionally equivalent to an impairment in the medical listings, as long as there was a direct, medically determinable cause of the functional limitations. The regulations provide 15 examples of conditions—such as the need for a major organ transplant—presumed to be functionally equivalent to the listed impairments. Of the 646,000 children added to the SSI rolls from February 1991 through September 1994, about 219,000 (one-third) were awarded benefits based on the less restrictive IFA process. If all 219,000 children received the maximum benefit, their SSI benefits would cost about $1 billion a year. About 84 percent of these children had a mental impairment as their primary limitation, and about 16 percent had physical impairments. (Fig. 2 shows a breakdown of the impairments.) Figure 3 shows the substantial increase in the number of awards. Much of this increase was due to the implementation of new medical listings for mental impairments. The IFA process also added to the growth in the rolls and accounted for a substantial portion of new awards. Figure 3 also shows that the average monthly number of applications jumped dramatically after Zebley and has continued to grow. Many observers attribute this increase in applications to the publicity surrounding Zebley, as well as to increased outreach by SSA, some of which was congressionally mandated. Also, some of the increase in awards may have been attributable to the close scrutiny of the IFA process by courts and disabled child advocates, which some believe may have pressured some DDSs to increase their award rates during the 1991-1992 period. (App. II provides a chronology of these court and advocacy actions.) Before the IFA process was introduced in 1991, the national award rate for all types of childhood cases was 38 percent, but the award rate jumped to 56 percent in the first 2 years after the IFA and DBRA regulations were issued. More recently, during 1993 and 1994, the award rate dropped dramatically. The national award rate for 1994 was 32 percent—lower than it was in the 2 years before Zebley. Our review indicates that the IFA process has been difficult to implement consistently and reliably, particularly for children with mental impairments, because the process requires adjudicators to make a series of judgment calls in a complex matrix of assessments about age-appropriateness of behavior. SSA and IG studies of children with mental impairments have borne out these difficulties. Although SSA has tried to add rigor to the IFA process through guidance and training, we believe that problems will likely continue because of the difficulties inherent in using age-appropriate behavior as an analog for the adult vocational assessment of residual functional capacity. Determining disability for children with impairments that are not severe enough to match a listed impairment can be a highly subjective process. SSA designed the IFA process to provide DDS adjudicators with a structure to help them make uniform and rational disability determinations for children with less severe impairments. Even so, the necessity to assess a child’s ability to function age-appropriately requires DDS adjudicators to make a series of judgments, which we believe raises questions about the consistency and reliability of DDS decisions. SSA and IG studies and our analysis document problems throughout the IFA process, especially for mental impairments. (See app.
III for a more detailed discussion of the problems that SSA and the IG identified.) Extensive evidence needed: To make disability determinations, DDSs use information from both medical and nonmedical sources, including teachers, day care providers, parents, and others knowledgeable about the child’s day-to-day behavior and activities. For the functional assessment, observations are needed about the child’s behavior over a long period of time, so evidence-gathering can be a considerable task. SSA found in its 1994 study that the lack of sufficient supporting documentation was the most common problem in its sample of childhood disability decisions. School officials in particular are an important source of nonmedical data on children’s behavior over time. Each DDS develops its own questionnaires for eliciting the data, and inquiries are made on virtually every applicant because this information is also used to assess functioning under the medical listings. We estimate that the process now results in about 500,000 inquiries to schools each year, a substantial reporting burden. Some parties believe that the open-ended questionnaire design in many states and the burden on school officials faced with many inquiries may be contributing to poor quality data from this key source. Difficulty classifying limitations: If an IFA is needed, a disability adjudicator must classify the child’s limitations in the appropriate areas of functioning, as shown in figure 4. This is a complex judgment because some areas are closely interrelated and impairments may or may not affect functioning in more than one area. If, for example, evidence indicates that a child gets in fights at school, the adjudicator must determine whether the specific behavior is evidence of a limitation in social skills, personal and behavioral skills, or some combination of these. SSA found that in cases of incorrect awards a common mistake that adjudicators made was to count the effect of an impairment in two areas when only one was appropriate. This resulted in the impairment seeming more severe than it actually was. Problems defining degrees of limitation: Once the areas have been identified, the adjudicator must judge the degree of limitation. Because only certain conditions—such as low intelligence quotient (IQ)—can be objectively tested and determined, SSA has defined the severity of limitations by comparison with expected behavior for the child’s chronological age. Figure 4 shows the degrees of limitation adjudicators use to assess children 3 through 15 years old. SSA’s guidance defines a limitation in the moderate category as more than a mild or minimal limitation but less than a marked limitation. The terms “mild” and “minimal” are not defined, but SSA guidance describes an impairment in the marked category as one that “seriously” interferes with a child’s ability to function age-appropriately, while a moderate limitation creates “considerable” interference. Within each category, adjudicators are expected to be able to differentiate the degree of limitation. For example, a moderate rating can range from a “weak moderate” (just above a less-than-moderate) up to a “strong moderate” (just below a marked limitation). Limited guidance for summing the result: Because the IFA process is inherently subjective, SSA cannot provide an objective procedure for summarizing the IFA results. Therefore, SSA instructs adjudicators to step back and assess whether the child meets the overall definition of disability. 
As an example to guide adjudicators, SSA has said that an award may generally be granted if a child has a moderate limitation in three areas. However, SSA officials stress that this statement assumes “three good, solid moderates,” and they characterize it as a general guideline, not a firm rule. Also, they stress that other possible combinations of ratings, such as two strong moderates, could justify finding a child disabled, depending on the individual child’s circumstances. In the end, officials stress that adjudicators are expected to award or deny benefits based on an overall judgment, not on any specific sum of severity ratings. SSA’s 1994 study of 325 childhood awards highlighted the difficulties in using the IFA process to reliably identify disabled children, particularly children with behavioral and learning disorders. In the study, SSA’s Office of Disability selected cases of 325 children with behavioral and learning disorders who had been found eligible. The majority were found eligible based on IFAs. These cases had been decided by DDS adjudicators, based on their understanding of existing guidance from SSA. Then, SSA’s regional quality assurance staff had reviewed the decisions and found them accurate. The study involved a third group of experts in the Office of Disability who reviewed the same cases and found inaccuracies in the decisions. Based on their findings, we concluded that about 13 percent of the awards reviewed by SSA had been made to children who were not impaired enough to qualify. Also, another 23 percent of the awards had been made without sufficient supporting documentation. A January 1995 IG report focused on IFA-based awards to children with mental impairments. IG staff, with assistance from the Office of Disability, reviewed 129 IFA-based awards for mental retardation, attention deficit hyperactivity disorder, and other behavioral or learning disorders. The IG found that 17 (13 percent) of the awards should have been denials and another 38 (29 percent) had been based on insufficient evidence. The IG attributed this to DDS adjudicators’ difficulty interpreting and complying with SSA’s IFA guidelines for assessing the severity of children’s mental impairments. Many adjudicators reported that they found the SSA guidelines unclear and not sufficiently objective. The IG stated that this group of children had less severe impairments than those children determined disabled based on the medical listings, making the assessment of their impairments’ effect on their ability to function age-appropriately more difficult. We observed firsthand the difficulty that adjudicators face in making the judgments required by the IFA process for children who have behavioral and learning disorders. In June 1994, we attended 1-day training sessions for DDS adjudicators and SSA’s regional quality assurance staff from across the nation. The Office of Disability presented the findings from its 1994 study and discussed the policies and procedures that DDS and quality assurance staff had misapplied. In this training, Office of Disability staff presented case studies of children included in the 1994 study. After those in attendance reviewed the evidence for each child’s case, they were asked to assess the degree to which the child’s impairment limited his or her functioning. The attendees’ opinions were tallied and in all cases they were split. 
During discussions of each case, attendees often voiced differing views on why they believed, for example, that the child’s limitation was less than moderate or moderate, or whether a moderate limitation was a good, solid moderate or a weak moderate. In some cases, the opinion of the majority of attendees turned out to be different from the conclusion of the Office of Disability. In addition to the national training in June 1994, SSA took other steps to correct implementation problems, including (1) issuing numerous instructional clarifications and reminders, (2) requiring DDSs to specially code certain types of mental impairments and all decisions based on three moderate limitations (to facilitate selecting samples of cases for further study), and (3) establishing more rigorous requirements for documenting awards that are based on three moderate limitations. The Office of Disability plans to do a follow-up study to assess the effectiveness of its remedial efforts. Some experts believe that further steps could be taken to improve the IFA process. For example, experts we contacted commented on the need for more complete longitudinal evaluations by professionals. They pointed out that more complete examinations—sometimes including multiple visits and observations of both parents and children—would help to address concerns about the adequacy of information from schools and medical sources and provide higher assurance of good decisions. They stated that because professionals are trained to identify malingering in mental examinations, the expanded examinations might also help relieve concerns about coaching. They agreed that such examinations would raise the program’s administrative costs considerably, but because a child can receive almost $5,500 a year in benefits (which can continue for life), they believed that the costs would be justified. SSA’s efforts and experts’ suggestions are geared toward improving the process rather than addressing the underlying conceptual problems with the IFA. The difficulties so far in implementing the IFA bring into question whether these types of incremental actions can ensure consistently accurate decisions for children with mental impairments, especially behavioral and learning disorders. The rapid growth in awards to children with mental impairments—particularly behavioral and learning disorders—has contributed to the public perception that the SSI program for children is vulnerable to fraud and abuse. The media have reported allegations that parents coach their children to fake mental impairments by misbehaving or performing poorly in school so that they can qualify for SSI benefits. Critics believe that cash payments and Medicaid act as incentives for some parents to coach and, therefore, they are concerned about the extent to which parents can manipulate the disability determination process. However, we believe that measuring the extent to which coaching may actually occur is extremely difficult. Unless parents admit to it, coaching is almost impossible to substantiate. The nature of the parent-child relationship makes investigating coaching allegations difficult. Many communications between parent and child take place at home, out of the view of outside observers. In addition, the variability of children’s behavior makes knowing whether a child’s behavior is the result of coaching difficult.
Behavior can vary naturally among children of the same age—or in the same child over time—as they go through stages in development or respond to changes in their home or school environment. If a child started misbehaving in school, investigators would need baseline evidence to establish that the child had not misbehaved extensively in the past. Finally, even if investigators could identify a sudden change in behavior, they would have to rule out other reasons for the change, such as changes in the child’s household or neighborhood environment. In short, knowing whether the child is performing poorly or misbehaving because of coaching or for other reasons is difficult. Because coaching is difficult to detect, the extent of coaching cannot be measured with much confidence. In recent studies, SSA and the HHS IG reviewed case files and identified scant evidence of coaching or malingering. In the rare instances where they found evidence of possible coaching or malingering, most of the claimants had been denied benefits anyway. (App. III summarizes the results of the SSA and IG studies, including their scopes and methodologies.) To protect program integrity, SSA has taken several steps to help provide assurance that the process can detect coaching or malingering and then make the appropriate eligibility determination. In June 1994, SSA began requiring DDSs to report to SSA’s regional quality assurance units any case with an allegation or suspicion of coaching. Such cases include those in which teachers, physicians, or psychologists indicate that (1) the child’s behavior was atypical of the child’s customary school behavior, (2) the child was uncooperative during testing, or (3) the child’s behavior deteriorated without explanation during the 6-month period preceding the application. According to SSA, its regional quality assurance units review all alleged cases of coaching. As of mid-January 1995, DDSs nationwide had reported alleged coaching in 674 childhood cases—or less than one-half of 1 percent of all childhood applications filed during the period—and fewer than 50 of these children had been awarded benefits. Along with this new requirement, in August 1994, SSA required DDSs to send applicants’ schools a set of questions specifically designed to elicit the teacher’s views on whether the child had been coached. Additionally, each SSA regional office has established toll-free telephone numbers for the exclusive use of teachers and school officials to notify the regional quality assurance unit of coaching allegations. In mid-November 1994, SSA instructed DDSs to begin distributing these toll-free numbers to schools. Also, SSA has instructed its field offices and telephone service centers to report to the regional quality assurance units any allegations of coaching received from the general public. As of mid-January 1995, from all of these sources, SSA had received a total of 42 telephone calls with allegations of coaching involving 54 individuals. According to SSA, each allegation from teachers, school officials, or the general public is reviewed if the child was awarded benefits. Childhood disability decisions based on the IFA process are among the toughest that DDSs must make. Particularly in assessing behavioral and learning disabilities, the level of judgment required makes the IFA process difficult to administer consistently. 
Moreover, the high level of subjectivity leaves the process susceptible to manipulation and the consequent appearance that children can fake mental impairments to qualify for benefits. Indeed, the rise in allegations of coaching may reflect public suspicion of a process that has allowed many children with less severe impairments to qualify for benefits. Although scant evidence exists to substantiate that coaching is a problem, coaching cannot be ruled out and its extent is virtually unmeasurable. We believe that a more fundamental problem than coaching is determining which children are eligible for benefits using the new IFA process. Our analysis documents the many subjective judgments built into each step of the IFA process to assess where a child’s behavior falls along the continuum of age-appropriate functioning. Moreover, studies by SSA and the IG of children awarded benefits for behavioral and learning disorders illustrate the difficulties that SSA has experienced over the last 4 years in making definitive and consistent eligibility decisions for children with these disorders. SSA’s efforts have been aimed at process improvements rather than reexamining the conceptual basis for the IFA. Despite its efforts, too much adjudicator judgment remains. Although better evidence and more use of objective tests where possible would improve the process, the likelihood of significantly reducing judgment involved in deciding whether a child qualifies for benefits under the IFA is remote. We believe that more consistent decisions could be made if adjudicators based functional assessments of children on the functional criteria in SSA’s medical listings. This change would reduce the growth in awards and target disability benefits toward children with more severe impairments. Given widespread concern about growth in the SSI program for children and in light of our findings about the subjective nature of the IFA process, the Congress could take action to improve eligibility determinations for children with disabilities. One option the Congress could consider is to eliminate the IFA, which would require amending the statute. The Congress could then direct SSA to revise its medical listings, including the functional criteria, so that all children receive functional assessments based on these revised criteria. We did not request official agency comments from SSA on a draft of this report. However, we discussed the draft with SSA program officials, who generally agreed that we had accurately characterized the IFA process and the results of studies. SSA officials had some technical comments, which we have incorporated where appropriate. Please contact me on (202) 512-7215 if you have any questions about this report. Other major contributors are Cynthia Bascetta, Ira Spears, Ken Daniell, David Fiske, and Ellen Habenicht. To develop the information in this report, we (1) reviewed SSA’s childhood disability program policies, procedures, and records, and discussed the IFA process with SSA program officials on the national, regional, and local level; (2) interviewed officials in state DDSs; (3) reviewed SSA’s report on its 1994 study of children with behavioral and learning disorders; and (4) attended a June 1994 SSA training course that was based on findings from its study. We also discussed eligibility issues with officials of HHS’ IG, which recently issued two reports on the SSI childhood disability program. 
To develop SSI childhood program award rate data, we obtained SSA's computerized records on the results of initial and reconsideration disability decisions made by DDSs for children under 18 years old from 1988 through September 1994. These records exclude the results of disability decisions made by administrative law judges. From these records, we determined (1) the overall award rate for children, (2) the percentage of IFA awards that were based on mental impairments versus physical impairments, (3) the average monthly number of childhood applications, and (4) the average monthly number of awards that were based on IFAs versus medical listings. These data, as applicable, were determined for the following periods: (1) 2 years before the Supreme Court's Sullivan v. Zebley decision (Jan. 1, 1988, through Feb. 20, 1990); (2) 2 years after the IFA process was implemented (Feb. 11, 1991, through Dec. 31, 1992); (3) January-December 1993; and (4) January-September 1994. Because no IFA process existed before the Zebley decision, no pre-Zebley awards were decided based on IFAs. We excluded children who had applied during 1988 through February 10, 1991, from the universe of children on whom decisions were made from February 11, 1991, through September 30, 1994. We did this to minimize the extent to which data in these comparison periods reflect the result of cases readjudicated as part of the settlement in the Zebley class action lawsuit. We were not able to identify or exclude Zebley class members for whom benefits had been denied or terminated from 1980 through 1987 from any of the comparison periods. According to SSA, Zebley class members are more likely to have physical impairments than the general population of new SSI child applicants. We performed our work from May 1994 through February 1995 in accordance with generally accepted government auditing standards. One month before SSA issued regulations implementing the new IFA process, the Zebley plaintiff's counsel submitted interrogatories to SSA asking, among other things, why nine DDSs with the lowest award rates for children had such low award rates. SSA regional officials were tasked with answering some of the counsel's interrogatories and, in some instances, the officials informed the states that they were the subject of the counsel's inquiry. Also, from time to time thereafter, SSA officials shared state-by-state award rate data with state DDSs. Some SSA regional officials stated that they believed some DDSs could have felt pressured to increase their award rates. In the month that SSA issued regulations implementing the new IFA process, a federal district court ordered SSA to perform special quality assurance reviews of disability applications denied under the new regulations. The court order required SSA to do quality assurance reviews of denials made by 10 state DDSs that, according to SSA, Zebley plaintiff's counsel had identified as denial prone due to their low award rates. Based on its own studies, SSA had argued before the court that low award rates were not reliable indicators of whether special corrective action was needed to avoid incorrect denials, but the court required SSA to implement the special quality assurance reviews for these 10 states. Under the court order, during the first month after the new regulations were in effect, SSA had to review the lesser of 100 or all denials for each denial-prone state. SSA reviewed only 25 denials for other states.
A subsequent March 1991 court order required SSA, after the first month, to review at least 1,000 denials per month nationwide. SSA’s sample of 1,000 denials included 15 percent of the denials from each of the 10 denial-prone states. By memorandum in February 1991, SSA informed all DDSs of the special quality assurance requirements and identified the 10 states that had been classified as denial prone. The court order required that SSA send the results of the quality assurance reviews monthly to the Zebley plaintiff’s counsel. The Zebley plaintiff’s counsel wrote to the SSA Commissioner citing a “disturbing pattern” of low allowance rates in eight states and asked the Commissioner to take remedial steps. In a newsletter to legal aid societies, the Zebley counsel listed 13 DDSs whose cumulative allowance rates were at 50 percent or below. The counsel encouraged legal aid society representatives in those states to contact the DDS directors and “confront them with their sub-par performance.” SSA considers behavioral and learning disorders to be the most susceptible to coaching and malingering. In 1994, SSA’s Office of Disability in Baltimore reviewed a national sample of 617 school-age children who had applied due to behavioral and learning disorders. Because the sample was small, the findings of the study cannot be projected to the universe of childhood disability claims or to the subset of specific impairments studied. The 617 children were selected from those who had applied due to such impairments as attention deficit disorder, attention deficit hyperactivity disorder, personality disorder, conduct disorder, learning disorder, oppositional defiant disorder, anxiety disorder, developmental delay, behavior disorder, speech and language disorders, borderline intellectual functioning, and adjustment disorder. According to SSA, these types of disorders constitute about 20 percent of all childhood disability applications. SSA excluded cases involving extremely severe mental disorders, such as psychotic disorders and mental retardation. SSA selected the 617 cases from final DDS decisions that SSA’s regional quality assurance staff had already reviewed for accuracy. The 617 cases in the sample consisted of 325 awards and 292 denials that DDSs adjudicated during October 1992 through July 1993. SSA reviewed case file documentation for the 617 cases. In its review of case file documentation, SSA considered coaching to be involved in any claim in which the child reported or an information source suspected that the parent or other caregiver had told the child to act or respond in a manner that would make the child appear more functionally limited than he or she actually was. In addition, SSA looked for evidence indicating that the child had malingered; that is, deliberately provided wrong information or did not put forth his or her best effort during testing. SSA found only 13 cases that showed any evidence of possible coaching or malingering, and only 3 of these cases were awards. In all cases, the evidence indicating possible coaching was provided by medical professionals or psychologists who performed consultative examinations for SSA. None of the evidence indicating possible coaching or malingering was provided by schools. The three questioned awards involved children who may have malingered during IQ testing. In these cases, however, the awards were based on factors other than the results of the testing. 
For example, one child with an oppositional defiant disorder appeared to malinger during IQ testing administered by a consultative examiner, but the award was based on other problems stemming from the disorder, not the results of the testing. Of the 325 awards reviewed, SSA found that 8.6 percent (28) should have been denials and another 27.7 percent (90) should not have been made without obtaining more supporting documentation. We asked SSA to estimate, based on its quality assurance experience, how many of the 90 cases with insufficient documentation would have been denials if all documentation had been obtained, and SSA estimated that 13 (or 4 percent of the 325 awards) would have been denials. Thus, we concluded that a total of 41 awards (12.6 percent of the 325 awards) should have been denials. By contrast, of 292 denials reviewed in the study, SSA found that only 1.4 percent (4) should have been awards, and another 1.4 percent (4) should not have been made without obtaining more supporting documentation. Combining all decisional and documentational errors for the 617 denials and awards in SSA's study, the overall error rate for this group of cases was 20.4 percent. This is about twice the maximum acceptable error rate of 9.4 percent that SSA allows for decisional and documentational errors combined for all initial disability decisions made by an individual DDS. According to SSA's Office of Disability, a primary reason that DDSs made awards that should have been denials was that DDSs had frequently overrated—but rarely underrated—the severity of children's functional limitations. Such overrating occurred primarily because DDSs had (1) compared the child with the perfect child rather than the average child, (2) based the limitation on a single incident rather than behavior over time, (3) not considered the child's ability to function while on an effective medication regimen, and (4) based the limitation on the child's life circumstances rather than the effects of a medically determinable impairment. DDSs had also mechanically applied SSA's guidelines on how to make awards using the results of the IFA process. SSA's guidelines instruct DDSs that they generally should award benefits to children who have moderate limitations in any three of the areas of ability assessed in the IFA process. SSA found, however, that DDSs had used this instruction as a rule rather than a guideline. DDSs had automatically made awards to any child with three moderate limitations, regardless of how strong or weak the moderate limitations were. SSA stated that its guideline assumed "three good, solid moderates." SSA found that, when DDSs had identified two moderate limitations, they sometimes made special attempts to find a third moderate limitation even though the evidence did not support it. DDSs had also "double-weighed" the effects of impairments in more than one of the areas of ability assessed in the IFA process, making the impairment seem more severe and pervasive than it actually was. For example, in some cases children displayed a lack of self-control by exhibiting more than one inappropriate behavior, such as fighting, aggressive behavior, disrespectful behavior, lying, oppositional behavior, and stealing. Although all these behaviors should have been rated only in the personal/behavioral area, DDSs had rated some behaviors in the personal/behavioral area and others in the social abilities area, giving the child moderate limitations in two areas rather than only one.
This meant that the child needed only one more moderate limitation to have the three moderate limitations needed for approval. SSA also found that DDSs had sometimes based decisions on old evidence when current evidence indicated children had improved and that DDSs had sometimes assessed limitations that could not be attributed to medical impairments. As the IG reported in January 1995, IG staff reviewed the case files for a sample of 553 children whose applications were adjudicated by DDSs in 1992. Of the 553 children, 298 had been awarded benefits by 10 DDSs—Connecticut, Illinois, Kentucky, New York, North Carolina, North Dakota, Pennsylvania, South Dakota, Vermont, and Wisconsin. The remainder of the 553 cases consisted of a nationwide sample of 255 denials. Of the 298 awards, 129 (43 percent) had been decided based on an IFA, and 195 of the 255 denials (76 percent) had been decided based on an IFA. The IG targeted its study at cases involving mental retardation, attention deficit hyperactivity disorder, and other learning and behavioral disorders. Based on its review of these cases, IG officials told us that they had found no evidence of coaching. As the IG reported, when the IG staff had questions about the accuracy of a DDS disability determination or about the sufficiency of the evidence supporting a determination, the IG provided the case file to SSA's Office of Disability in Baltimore—the same staff responsible for conducting SSA's study of 617 childhood disability claims. The Office of Disability reviewed the accuracy of each of the questioned cases. The IG staff also visited the 10 DDSs to obtain their opinions on the adequacy of the SSA guidelines used to make disability determinations. Of the 129 awards reviewed that were based on IFAs, the IG reported that 17 (13 percent) should have been denials and another 38 (29 percent) were based on insufficient evidence. The IG attributed this problem to DDSs having difficulty in interpreting and complying with SSA guidelines for obtaining and evaluating evidence concerning the severity of the mental impairments of children on whom IFAs are conducted. The IG stated that these children have less severe impairments than those children determined to be disabled based on the impairment listing, making the assessment of the effects of their impairments on their ability to function age-appropriately more difficult. In discussions with employees of the 10 DDSs, the IG reported that many expressed concern that the SSA guidelines for determining disability for children with mental impairments were not sufficiently clear or objective.
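The percentages in the SSA and IG studies follow directly from the reported counts. As an illustration, the short Python check below reproduces each figure; every count is taken from the studies as reported above, and only the arithmetic is new.

```python
# Arithmetic check of the error rates reported in the SSA and IG studies.
# Every count below comes from the report; only the percentages are computed.

def pct(numerator, denominator, digits=1):
    """Percentage of numerator in denominator, rounded as the report rounds."""
    return round(100 * numerator / denominator, digits)

# SSA Office of Disability study: 617 cases (325 awards, 292 denials).
awards, denials = 325, 292
wrong_awards = 28             # awards that should have been denials
underdocumented_awards = 90   # awards made without sufficient documentation
estimated_extra_denials = 13  # SSA's estimate of the 90 that would have failed

assert pct(wrong_awards, awards) == 8.6
assert pct(underdocumented_awards, awards) == 27.7
assert pct(estimated_extra_denials, awards) == 4.0
assert pct(wrong_awards + estimated_extra_denials, awards) == 12.6  # 41 of 325

wrong_denials = 4
underdocumented_denials = 4
all_errors = (wrong_awards + underdocumented_awards
              + wrong_denials + underdocumented_denials)
assert pct(all_errors, awards + denials) == 20.4  # about twice the 9.4% ceiling

# HHS IG study: 553 cases (298 awards, 255 denials); rounded to whole percents.
assert pct(129, 298, 0) == 43  # awards decided on an IFA
assert pct(195, 255, 0) == 76  # denials decided on an IFA
assert pct(17, 129, 0) == 13   # IFA awards that should have been denials
assert pct(38, 129, 0) == 29   # IFA awards based on insufficient evidence
print("All reported rates reproduce.")
```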
Pursuant to a congressional request, GAO reviewed the effects of the judicially mandated individualized functional assessment (IFA) process on Supplemental Security Income (SSI) benefits, focusing on: (1) allegations that parents may be coaching their children to fake mental impairments to qualify under the lower eligibility standards created by IFA; and (2) how IFA affects the children's eligibility for benefits. GAO found that: (1) the judicial decision that required changes in IFA essentially made the process for determining disability in children analogous to the adult process; (2) the new process assesses how children's impairments limit their ability to act and behave like unimpaired children of similar age; (3) it has become important to obtain evidence of disability from nonmedical sources as part of the children's assessment; (4) although the court required a new type of assessment for disabled children, it did not define the degree of limitation necessary to qualify for SSI benefits; (5) before the IFA process was introduced in 1991, the national award rate for all types of childhood cases was 38 percent, but the award rate jumped to 56 percent in the first 2 years after IFA regulations were issued; (6) the non-medical aspects of the IFA evaluation rely heavily on adjudicator judgment; (7) while the Social Security Administration (SSA) has attempted to improve the process, and thereby reduce fraud and improve accuracy in awards, IFA has an underlying conceptual problem; (8) although the IFA process attempts to improve accuracy, the presence of coaching by parents is almost impossible to detect; and (9) more consistent eligibility decisions could be made if adjudicators based functional assessments of children on the functional criteria in SSA medical listings.
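To make the award-rate methodology concrete, here is a minimal sketch of the comparison-period computation. The record layout (the applied, decided, and awarded fields) is a hypothetical stand-in for SSA's computerized records; the period boundaries and the exclusion of applicants from 1988 through February 10, 1991, come from the methodology described above.

```python
from datetime import date

# Comparison periods as defined in the methodology (inclusive bounds).
PERIODS = {
    "pre-Zebley":   (date(1988, 1, 1),  date(1990, 2, 20)),
    "post-IFA":     (date(1991, 2, 11), date(1992, 12, 31)),
    "1993":         (date(1993, 1, 1),  date(1993, 12, 31)),
    "Jan-Sep 1994": (date(1994, 1, 1),  date(1994, 9, 30)),
}
IFA_START = date(1991, 2, 11)

def award_rates(records):
    """records: dicts with 'applied' and 'decided' dates and an 'awarded' flag
    (field names are hypothetical). Returns the award rate for each period."""
    rates = {}
    for name, (start, end) in PERIODS.items():
        decided = [r for r in records if start <= r["decided"] <= end]
        if start >= IFA_START:
            # Exclude children who applied during 1988 through Feb. 10, 1991,
            # to limit the effect of readjudicated Zebley class cases.
            decided = [r for r in decided
                       if not (date(1988, 1, 1) <= r["applied"] < IFA_START)]
        total = len(decided)
        rates[name] = sum(r["awarded"] for r in decided) / total if total else None
    return rates

# Tiny demonstration with two made-up records.
demo = [
    {"applied": date(1992, 5, 1), "decided": date(1992, 9, 1), "awarded": True},
    {"applied": date(1989, 3, 1), "decided": date(1992, 6, 1), "awarded": False},
]
print(award_rates(demo))  # the 1989 applicant is excluded from "post-IFA"
```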
JPDO has made progress in planning NextGen by facilitating collaboration among its partner agencies, working to finalize key planning documents, and improving its collaboration and coordination with FAA. Among the challenges JPDO faces are institutionalizing collaboration among the partner agencies, and identifying and exploring questions related to which entity will fund and conduct the research and development needed to meet NextGen requirements. JPDO has made progress in many areas in planning NextGen, as we reported in November 2006. I will highlight just a few of those areas in this testimony. First, JPDO has taken several actions that are consistent with practices that facilitate interagency collaboration—an important point given how critical such collaboration is to the success of JPDO's mission. For example, the JPDO partner agencies worked together to develop a high-level plan for NextGen along with eight strategies that broadly address the goals and objectives for NextGen. JPDO has since issued two annual updates to this plan, as required by Congress. Also, JPDO's organizational structure involves federal and nonfederal stakeholders throughout. This structure includes a federal interagency senior policy committee, an institute for nonfederal stakeholders, and eight integrated product teams that bring together federal and nonfederal experts to plan for and coordinate the development of technologies that will address JPDO's eight broad strategies. JPDO has also begun leveraging the resources of its partner agencies in part by reviewing their research and development programs, identifying work to support NextGen, and working to minimize duplication of research programs across the agencies. For example, one opportunity for coordination involves aligning aviation weather research across FAA, NASA, and the Departments of Commerce and Defense, developing a common weather capability, and integrating weather information into NextGen. In addition to developing and updating its high-level integrated plan, first published in December 2004, JPDO has been working to develop several critical documents that form the foundation of NextGen planning, including a draft concept of operations and an enterprise architecture. The concept of operations describes how the transformational elements of NextGen will operate in 2025. It is intended to establish general stakeholder buy-in to the NextGen end state, a transition path, and a business case. The enterprise architecture follows from the concept of operations and will describe the system in more detail (using the federal enterprise architecture framework). It will be used to integrate NextGen efforts of the partner agencies. The draft concept of operations has been posted to JPDO's Web site for stakeholder review and comment. According to JPDO, an expanded version of the enterprise architecture is expected in mid-2007. Progress has also been made in improving the collaboration and coordination between JPDO and FAA—the agency largely responsible for the implementation of NextGen systems and capabilities. FAA has expanded and revamped its Operational Evolution Plan (OEP)—renamed the Operational Evolution Partnership—to become FAA's implementation plan for NextGen. The OEP is being expanded to apply to all of FAA and is intended to become a comprehensive description of how the agency will implement NextGen, including the required technologies, procedures, and resources.
An ATO official told us that the new OEP is to be consistent with JPDO's key planning documents and partner agency budget guidance. According to FAA, the new OEP will allow it to demonstrate appropriate budget control and linkage to NextGen plans and will force FAA's research and development to be relevant to NextGen's requirements. According to FAA documents, the agency plans to publish the new OEP in June 2007. In an effort to further align FAA's efforts with JPDO's plans for NextGen, FAA has created a NextGen Review Board to oversee the OEP. This Review Board will be co-chaired by JPDO's Director and ATO's Vice President of Operations Planning. Initiatives proposed for inclusion in the OEP, such as concept demonstrations or research, will now need to go through the Review Board for approval. Initiatives are to be assessed for their relation to NextGen requirements, concept maturity, and risk. An ATO official told us that the new OEP process should also help identify some smaller programs that might be inconsistent with NextGen and that could be discontinued. Additionally, as a further step toward integrating ATO and JPDO, the administration's reauthorization proposal calls for the JPDO Director to be a voting member of FAA's Joint Resources Council and ATO's Executive Council. Although JPDO has established a framework for collaboration, it has faced a challenge in institutionalizing this framework. As JPDO is a coordinating body, it has no authority over its partner agencies' key human and technological resources needed to continue developing plans and system requirements for NextGen. For example, since at least August 2005, JPDO has been working to establish a memorandum of understanding (MOU) with its partner agencies to more clearly define their roles and responsibilities. As of March 16, 2007, however, the MOU remained unsigned. Another key activity for strengthening the collaborative effort will be synchronizing the NextGen enterprise architecture with the partner agencies' enterprise architectures. These types of efforts, which would better institutionalize JPDO's collaborative framework throughout the partner agencies, will be critical to JPDO's ability to leverage the necessary funding for developing NextGen. Institutionalization would help ensure that, as administrations and staffing within JPDO change over the years, those coming into JPDO will have a clear understanding of their roles and responsibilities and of the time and resource commitments entailed. JPDO faces a challenge in developing a comprehensive cost estimate for the NextGen effort. In its recent 2006 Progress Report, JPDO reported some cost estimates related to FAA's NextGen investment portfolio, which I will discuss in more detail later in this statement. However, JPDO is still working to develop an understanding of the future requirements of its other partner agencies and the users of the system. JPDO stated that it sees its work in estimating costs as an ongoing process. The office notes that it will gain additional insight into the business, management, and technical issues and alternatives that will go into the long-term process of implementing NextGen as it continues to work with industry, and that it expects its cost estimates to continue to evolve. Another challenge facing JPDO is exploring potential gaps in the research and development necessary to achieve some key NextGen capabilities and to keep the development of new systems on schedule.
In the past, a significant portion of aeronautics research and development, including intermediate technology development, has been performed by NASA. However, our analysis of NASA's aeronautics research budget and proposed funding shows a 30 percent decline, in constant 2005 dollars, from fiscal year 2005 to fiscal year 2011. To its credit, NASA plans to focus its research on the needs of NextGen. However, NASA is also moving toward a focus on fundamental research and away from developmental work and demonstration projects. FAA is currently assessing its capacity to address these issues. Currently, it is unknown how all of the significant research and development activities inherent in the transition to NextGen will be conducted or funded. Still another challenge facing JPDO is ensuring that all relevant stakeholders are involved in the effort. Some stakeholders, such as current air traffic controllers and technicians, will play critical roles in NextGen, and their involvement in planning for and deploying the new technology will be important to the success of NextGen. In November 2006, we reported that air traffic controllers were not involved in the NextGen planning effort. Controllers are beginning to become involved as the controllers' union is now represented on a key planning body. However, technicians are currently not participating in NextGen efforts. Input from current air traffic controllers who have recent experience controlling aircraft and current technicians who will maintain the new equipment is important in considering human factors and safety issues. Our work on past air traffic control modernization projects has shown that a lack of stakeholder or expert involvement early and throughout a project can lead to cost increases and delays. Addressing human factors issues is another key challenge for JPDO. For example, the NextGen concept of operations envisions that pilots will take on a greater share of the responsibility for maintaining safe separation and other tasks currently performed by controllers—raising human factors questions about whether pilots can safely perform these additional duties. According to JPDO, the change in the roles of controllers and pilots is the most important human factors issue involved in creating NextGen but will be difficult to research because data on pilot behavior are not readily available for use in creating models. Finally, we reported in November 2006 that establishing credibility was viewed by the majority of the expert panelists we consulted as a challenge facing JPDO. This view partially stems from past experiences in which the government has stopped some modernization efforts after industry invested in supporting technologies. Stakeholders' belief that the government is fully committed to NextGen will be important as efforts to implement NextGen technologies move forward. Another credibility challenge for JPDO is convincing stakeholders that the collaborative effort is making progress toward facilitating implementation. To address this challenge, the new Director of JPDO is planning to implement some structural and procedural changes to the office. For example, the Director has proposed changing JPDO's integrated product teams into "working groups" that would task small teams with exploring specific issues and delivering discrete work products. These changes have not yet been implemented at JPDO, and it will take some time before their effectiveness can be evaluated.
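One quantitative note on the research and development discussion above: the 30 percent decline in NASA's aeronautics funding is measured in constant 2005 dollars. The sketch below shows the deflation arithmetic such a comparison involves; the dollar amounts and the deflator are invented for illustration and are not NASA's actual figures.

```python
# Constant-dollar (real) comparison of funding levels across fiscal years.
# Inputs are illustrative assumptions; only the method mirrors the analysis.

def to_base_year_dollars(nominal, deflator, base_deflator=1.0):
    """Deflate a nominal amount into base-year (here, FY2005) dollars."""
    return nominal * base_deflator / deflator

fy2005_real = to_base_year_dollars(nominal=900.0, deflator=1.00)  # $ millions
fy2011_real = to_base_year_dollars(nominal=730.0, deflator=1.16)  # assumed

decline = 1 - fy2011_real / fy2005_real
print(f"Real decline, FY2005 to FY2011: {decline:.0%}")  # ~30% with these inputs
```

The point of deflating each year to a common base is simply that nominal-dollar comparisons understate a decline during inflationary periods; a constant-dollar comparison removes that effect.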
FAA is a principal player in JPDO's efforts and will be the chief implementer of NextGen. Successful implementation will depend, in part, on how well FAA addresses its challenges of institutionalizing its recent improvements in managing air traffic control modernization efforts, addressing the cost challenges of implementing NextGen while safely maintaining the current air traffic control system, and obtaining the expertise needed to implement a system as complex as NextGen. I turn now to these challenges. A successful transition to NextGen will depend, to a great extent, on FAA's ability to manage the acquisition and integration of multiple NextGen systems. Since 1995, we have designated FAA's air traffic control modernization program as high risk because of systemic management and acquisition problems. In recent years, FAA has taken a number of actions to improve its management of acquisitions. Realization of NextGen goals could be severely compromised if FAA's improved processes are not institutionalized and carried over into the implementation of NextGen, which is an even more complex and ambitious undertaking than past modernization efforts. To its credit, FAA has taken a number of actions to improve its acquisition management. By creating the Air Traffic Organization (ATO) in 2003, and appointing a Chief Operating Officer (COO) to head ATO, FAA established a new management structure and adopted more leading practices of private sector businesses to address the cost, schedule, and performance shortfalls that have plagued air traffic control acquisitions. ATO has worked to create a flatter organization, with fewer management layers, and has reported reducing executive staffing by 20 percent and total management by 16 percent. In addition, FAA uses a performance management system to hold managers responsible for the success of ATO. More specifically, to better manage its acquisitions and address problems we have identified, FAA has established strategic goals to improve its acquisition workforce culture and build toward a results-oriented, high-performing organization; developed and applied a process improvement model to assess the maturity of its software and systems acquisitions capabilities, resulting in, among other things, enhanced productivity and greater ability to predict schedules and resources; and reported that it has established a policy and guidance on using Earned Value Management (EVM) in its acquisition management system and that 19 of its major programs are currently using EVM. Institutionalizing these improvements throughout the agency (i.e., providing for their duration beyond the current leadership by ensuring that reforms are fully integrated into the agency's structure and processes and have become part of its organizational culture) will continue to be a challenge for FAA. For example, the agency has yet to implement its cost estimating methodology, although, according to the agency, it has provided training on the methodology to employees. Furthermore, FAA has not established a policy to require use of its process improvement model on all major acquisitions for the national airspace system. Until the agency fully addresses these legacy issues, it will continue to risk program management problems affecting cost, schedule, and performance. With a multibillion-dollar acquisition budget at stake, addressing these issues is as important as ever. While FAA has implemented many positive changes to its management processes, it currently faces the loss of key leaders.
We have reported that the experiences of successful transformations and change management initiatives in large public and private organizations suggest that it can take 5 to 7 years or more until such initiatives are fully implemented and cultures are transformed in a sustainable manner. Such changes require focused, full-time attention from senior leadership and a dedicated team. FAA's management improvements are relatively recent developments, and the agency will have lost two of its significant agents for change—the Administrator and the COO—by the end of September. The Administrator's term ends in September 2007; the COO left in February 2007, after serving 3 years. This situation is exacerbated by the fact that the current Director of JPDO is also new, having assumed that position in August 2006. For the management and acquisition improvements to further permeate the agency, and thus provide a firm foundation upon which to implement NextGen, FAA's new leaders will need to demonstrate the same commitment to improvement as the outgoing leaders. This continued commitment to change is critical over the next few years, as foundational NextGen systems begin to be implemented. Expeditiously moving to find a new COO will help sustain this momentum. JPDO recently reported some estimated costs for NextGen, including specifics on some early NextGen programs. JPDO believes the total federal cost for NextGen infrastructure through 2025 will range between $15 billion and $22 billion. JPDO also reported that a preliminary estimate of the corresponding cost to system users, who will have to equip with the advanced avionics that are necessary to realize the full benefits of some NextGen technologies, ranges between $14 billion and $20 billion. JPDO noted that this range for avionics costs reflects uncertainty about equipage costs for individual aircraft, the number of very light jets that will operate in high-performance airspace, and the amount of out-of-service time required for installation. In its Capital Investment Plan for fiscal years 2008-2012, FAA includes estimated expenditures for 11 line items that are considered NextGen capital programs. The total 5-year estimated expenditures for these programs are $4.3 billion. In fiscal year 2008, only six of the line items are funded for a total of roughly $174 million; funding for the remaining five programs would begin with the fiscal year 2009 budget. According to FAA, in addition to capital spending for NextGen, the agency will also spend an estimated $300 million on NextGen-related research and development from fiscal years 2008 through 2012. Also, the administration's budget for fiscal year 2008 for FAA includes $17.8 million to support the activities of JPDO. It is important to note that while FAA must manage the costs associated with the NextGen transformation, it must simultaneously continue to fund and operate the current national airspace system. In fact, the Department of Transportation's Inspector General has reported that the majority of FAA's capital funds go toward the sustainment of current air traffic systems and that, over the last several years, increasing operating costs have crowded out funds for the capital account. Efforts to sustain the current system are particularly important given the safety concerns that could be involved with system outages—the number of which has increased steadily over the last few years as the system continues to age.
For example, questions about the adequacy of FAA's maintenance of existing systems were raised following a power outage and equipment failures in Southern California that caused hundreds of flight delays during the summer of 2006. Investigations by the DOT Inspector General into these incidents identified a number of underlying issues, including the age and condition of equipment. Nationwide, the number of scheduled and unscheduled outages of air traffic control equipment and ancillary support systems has been increasing (see fig. 1). According to FAA, increases in the number of unscheduled outages indicate that systems are failing more frequently. FAA also notes that the duration of unscheduled equipment outages has been increasing in recent years, from an average of about 21 hours in 2001 to about 40 hours in 2006, which may indicate, in part, that maintenance and troubleshooting activities are requiring more effort and longer periods of time. However, the agency considers user impact and resource efficiency when planning and responding to equipment outages, according to an FAA official. As a result, although some outages will have longer restoration times, FAA believes that they do not adversely affect air traffic control operations. It will be important for FAA to monitor and address equipment outages to ensure the safety and efficiency of the legacy systems and a smooth transition to NextGen. As part of managing the costs of system sustainment and system modernization, FAA is seeking ways to reduce costs by introducing infrastructure and operational efficiencies. For example, FAA plans to produce cost savings through outsourcing and facility consolidations. FAA is outsourcing flight service stations and estimates a $2.2 billion savings over 12 years. Similarly, FAA is seeking savings through outsourcing its planned nationwide deployment of Automatic Dependent Surveillance-Broadcast (ADS-B), a critical surveillance technology for NextGen. FAA is planning to implement ADS-B through a performance-based contract in which FAA will pay "subscription" charges for the ADS-B services and the vendor will be responsible for building and maintaining the infrastructure. (FAA also reports that the ADS-B rollout will allow the agency to remove 50 percent of its current secondary radars, saving money in the ADS-B program's baseline.) As for consolidating facilities, FAA is currently restructuring its administrative service areas from nine offices to three offices, which FAA estimates will save up to $460 million over 10 years. We have previously reported that FAA should pursue further cost control options, such as exploring additional opportunities for contracting out services and consolidating facilities. However, we recognize that FAA faces challenges with consolidating facilities, an action that can be politically sensitive. In recognition of this sensitivity, the administration's reauthorization proposal for FAA would authorize the Secretary of Transportation to establish an independent, five-member Commission, known as the Realignment and Consolidation of Aviation Facilities and Services Commission, to independently analyze FAA's recommendations to realign facilities or services. The Commission would then send its own recommendations to the President and to Congress.
In the past, we have noted the importance of potential cost savings through facility consolidations; however, any such consolidations must be handled through a process that solicits and considers stakeholder input throughout and fully considers the safety implications of any proposed facility closures or consolidations. In the past, a lack of expertise contributed to weaknesses in FAA's management of air traffic control modernization efforts, and industry experts with whom we spoke questioned whether FAA will have the technical expertise needed to implement NextGen. In addition to technical expertise, FAA will need contract management expertise to oversee the systems acquisitions and integration involved in NextGen. In November 2006, we recommended that FAA examine its strengths and weaknesses with regard to the technical expertise and contract management expertise that will be required to define, implement, and integrate the numerous complex programs inherent in the transition to NextGen. In response to our recommendation, FAA is considering convening a blue ribbon panel to study the issue and make recommendations to the agency about how best to proceed with its management and oversight of the implementation of NextGen. We believe that such a panel could help FAA begin to address this challenge. To conclude, transforming the national airspace system to accommodate much greater demand for air transportation services in the years ahead will be an enormously complex undertaking. JPDO has made strides in meeting its planning and coordination role as set forth by Congress, and FAA has taken several steps in recent years that better position it to successfully implement NextGen. If JPDO and FAA can build on their recent achievements and overcome the many challenges they face, the transition to NextGen stands a much better chance for success. Mr. Chairman, this concludes my statement. I am pleased to answer any questions you or members of the Subcommittee might have. For further information on this testimony, please contact Susan Fleming at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Gerald Dillingham, Matthew Cook, Anne Dilger, Sharon Dyer, Colin Fallon, Heather Krause, Edmond Menoche, Faye Morrison, and Carrie Wilks. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The skies over America are becoming more crowded every day. The consensus is that the current aviation system cannot be expanded to meet the projected growth in demand for air travel. Recognizing the need for system transformation, in 2003 Congress authorized the Joint Planning and Development Office (JPDO) and required the office to operate in conjunction with multiple federal agencies, including the Departments of Transportation, Commerce, Defense, and Homeland Security; the Federal Aviation Administration (FAA); the National Aeronautics and Space Administration (NASA); and the White House Office of Science and Technology Policy. JPDO is responsible for coordinating the related efforts of these partner agencies to plan the transformation to the Next Generation Air Transportation System (NextGen): a fundamental redesign of the national airspace system. FAA will be largely responsible for implementing the policies and systems necessary for NextGen, while safely operating the current air traffic control system. GAO's testimony focuses on (1) the progress that JPDO has made in planning NextGen and some challenges it continues to face and (2) the challenges that FAA faces transitioning to NextGen. GAO's statement is based on its recent reports as well as ongoing work, all of which has been conducted in accordance with generally accepted government auditing standards. JPDO has made substantial progress in planning NextGen, but continues to face several challenges. JPDO has established a framework to facilitate federal interagency collaboration and is involving nonfederal stakeholders in its planning efforts. JPDO has begun leveraging the resources of its partner agencies and is finalizing key planning documents such as the concept of operations and the enterprise architecture. The draft concept of operations has been posted to JPDO's Web site for public comment and the enterprise architecture is expected to be completed in the next few months. JPDO and FAA have improved their collaboration and coordination by expanding and revamping FAA's Operational Evolution Plan--renamed the Operational Evolution Partnership--which is intended to provide an implementation plan for FAA for NextGen. Among the challenges JPDO faces are institutionalizing the interagency collaboration that is so central to its mission, developing a comprehensive cost estimate, and addressing potential gaps in research and development for NextGen. In transitioning to NextGen, FAA faces several challenges. Although FAA has taken several actions to improve its management of current air traffic control modernization efforts, institutionalizing these improvements will require continued strong leadership, particularly since the agency will have lost two of its key agents for change by September 2007. Costs are another challenge facing FAA as it addresses the resource demands that NextGen will likely pose, while continuing to maintain the current air traffic control system. Finally, determining whether it has the technical and contract management expertise necessary to implement NextGen is a challenge for FAA.
Generally, the Congress authorizes the Corps' water resources projects every 2 years through a Water Resources Development Act. After project authorization, the Corps may request construction appropriations to initiate a project; the Congress might not appropriate construction funding for all authorized projects. The Corps uses its construction funds to support both mitigation and construction activities. According to staff in the Corps' Civil Works Program, between the 1986 act and September 30, 2001, 217 water resources projects were authorized; 150 of these received construction appropriations; and, of those 150 projects, 103 did not require a fish and wildlife mitigation plan and 47 did. Under the Federal-aid Highway Program, the Federal Highway Administration must ensure compliance with federal, state, and local environmental laws and regulations. The administration apportions funds to state transportation departments for planning and constructing the national highway infrastructure. State governments determine the priorities and distribute the funds. Of the 47 Civil Works projects authorized since the 1986 act that required a fish and wildlife mitigation plan and that received construction appropriations, 28 projects completed less than 50 percent of the mitigation before project construction began, according to the Corps. Of the remaining 19 projects, 7 completed at least 50 percent of mitigation before initiating construction; 2 had not started construction but had done some mitigation; and 10 had not started construction or mitigation. As of September 30, 2001, 16 of the 34 projects where construction had begun had completed 100 percent of the mitigation. The 1986 act requires the Corps to initiate mitigation before or concurrent with construction, but it does not specify the amount of mitigation required—nor does any subsequent Water Resources Development Act. According to the Corps, it may not complete 50 percent of the mitigation prior to initiating project construction for the following reasons:
- The proposed mitigation will occur in the construction area, or material excavated during construction is used to create the mitigation. For example, the Corps creates wetlands with material dredged from a navigation channel.
- Mitigation activities may be scheduled concurrently with construction as a logical construction sequence.
- The Corps considers "construction" to begin when it receives construction appropriations, not when it actually starts construction, even though months or years may pass between the two dates. Therefore, since construction appropriations fund both mitigation and construction activities, mitigation cannot technically begin before construction begins.
The panel of scientific experts rated as similar the overall quality of the national fish and wildlife mitigation guidance for the Corps of Engineers' Civil Works and Regulatory Programs as well as the Federal Highway Administration's Federal-aid Highway Program. Most panelists rated the overall quality as "moderate" or "good." The Highway Program, however, received more "good" ratings than the Corps' two programs. In commenting on possible improvements to the guidance, the panelists suggested a unified body of guidance for the three programs, more information on the monitoring and evaluation stages, or more discussion of uplands species and habitat.
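The project counts reported above partition cleanly, which the small consistency check below confirms; all figures come from the Corps' data as reported, and only the sums are computed here.

```python
# Corps Civil Works projects authorized between the 1986 act and Sept. 30, 2001.
authorized = 217
with_construction_funds = 150
no_mitigation_plan_required = 103
mitigation_plan_required = 47
assert no_mitigation_plan_required + mitigation_plan_required == with_construction_funds

# Status of the 47 projects that required a fish and wildlife mitigation plan.
under_50_pct_before_construction = 28
at_least_50_pct_before_construction = 7
mitigation_started_construction_not = 2
neither_started = 10
assert (under_50_pct_before_construction + at_least_50_pct_before_construction
        + mitigation_started_construction_not + neither_started) == mitigation_plan_required
print("Counts are internally consistent.")
```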
Based on the guidance alone, panelists expressed concerns about their ability to reliably estimate the percentage of success mitigation projects would have in restoring the natural hydrology and native vegetation and in supporting native fish and wildlife species. Panelists said factors other than guidance, such as major storms that are difficult to control or manage or invasive weeds or wildlife species that dominate the site unexpectedly, affect the success of mitigation projects. When asked to rate the overall quality of the collective guidance for each of the three programs, panelists generally rated it "moderate" or "good," as shown in table 1. The distribution of moderate and good ratings varied slightly across programs. When assessing the quality of the three programs' guidance collectively, some panelists indicated that the guidance was strong because of its clarity or currency, or the inclusion of ample technical guidance. Some panelists, however, were critical of the three programs' guidance overall, noting that the guidance emphasizes the early determination and design stages to the detriment of the monitoring and evaluation stages, emphasizes wetlands to the detriment of uplands or adjacent lands, or fails to require corrective actions in those instances where projects do not succeed. In commenting on the strengths of the Civil Works guidance, some panelists indicated that the guidance
- emphasizes an ecosystem approach and considers adjacent lands and uplands;
- includes a good integration of other agencies' roles and responsibilities and the various laws and policies; or
- provides good technical guidance for the design, construction, and monitoring stages.
The majority of panelists, however, criticized the Corps' reliance on economic tradeoffs to determine the acceptable mitigation alternatives as presented in the Economic and Environmental Principles and Guidelines for Water and Related Land Resources Implementation Studies. Several panelists indicated that the Corps' reliance on this guidance interferes with the current thinking, which emphasizes selecting the least damaging alternative and considering adjacent lands when determining which alternative to select. In addition, some panelists criticized the Civil Works guidance as possibly being too broad, too detailed, incomplete as it relates to determining how much and what kind of mitigation should be undertaken, lacking examples of mitigation, or not current because it does not consider mitigation activities in a landscape context. In assessing the strengths and weaknesses of the Corps' Regulatory Program's guidance, the panelists primarily commented on the recently issued October 2001 Regulatory Guidance Letter.
The panelists generally viewed the new guidance as an improvement over existing guidance because it
- is clearer, simpler, and more in line with current technical findings;
- strengthens the importance of watershed context and functionality of affected areas;
- enhances the existing guidance in the areas of determination and evaluation and places new emphasis on ecosystems rather than citing a preference for on-site in-kind mitigation;
- suggests the consideration of landscape setting and indicates a continuing evolution to a function-based approach to mitigation;
- is a positive step toward helping assess and quantify the amount of mitigation that is required;
- calls for monitoring to be included as a permit condition; or
- provides more definitive instructions on how to determine mitigation ratios and types of mitigation, and addresses the long-term viability of mitigation through establishing success criteria.
While complimenting the new guidance, panelists also identified weaknesses. Namely, it
- still lacks the details and performance measures to truly advance wetlands protection;
- continues to need to strengthen monitoring and evaluation activities;
- still lacks sufficient specifics on how much and what type of mitigation is needed and what functions should be replaced;
- does not provide specifics on how landscape settings should be considered;
- allows credit for efforts undertaken in uplands, which means that wetlands functions and values will less likely be replaced in those situations; or
- continues to lack guidance on the minimum requirements of a conceptual mitigation plan.
In assessing the Highway Program's guidance, panelists were generally more complimentary of its content and presentation than of the Corps' guidance. Several panelists found the guidance to
- be clearer and more focused;
- be more effective in communicating current thinking;
- be more user-friendly, with a step-by-step format;
- provide the right amount of background information and technical detail;
- include design options and examples;
- be stronger than the Corps' guidance in ensuring the long-run viability of the project because it calls for a compensation ratio greater than 1 to 1; or
- be more professionally presented because it allows the exercise of professional judgment.
Panelists cited few weaknesses with the Highway Program's guidance, and they did not point out the same weaknesses. For example, one panelist noted that monitoring activities involved monitoring compliance with the mitigation design rather than measuring the functions and values to determine replacement success. This same panelist reported that more monitoring of construction is needed because mitigation will fail because of construction flaws and not because of design problems. Another panelist found that the guidance overemphasizes the use of mitigation banks, which may not always be appropriate. One panelist appeared to sum up the panelists' comments, stating ". . . [t]he three programs reviewed are within reach of mitigating many, but not all types of wetland habitats for fish and wildlife. . . . With modest improvements in guidance, the combined efforts of the three programs could reach a higher level of successful wetlands mitigation." Panelists offered several suggestions for improving both the format and the content of the three programs' fish and wildlife mitigation guidance.
In terms of format, almost all the panelists suggested the need for a single, unified body of guidance that would include both the regulatory and technical details necessary to effect successful mitigation. Doing so, according to some panelists, would improve the usability and readability of the guidance and better achieve consistency in operations and results. Among the suggested improvements, panelists recommended that the unified guidance include
- user-friendly, step-by-step instructions that tell applicants when they have to mitigate and that provide a general idea of how much mitigation will be required;
- current guidance regularly updated on a website;
- annotated outlines, more illustrations, case studies, and examples of lessons learned from past failures or successes and the reasons for them;
- a requirement for an operations, maintenance, and rehabilitation funding plan to provide greater assurance that all project services will be provided over a broad range of contingencies;
- a technically appropriate and consistent set of sampling measures applied throughout all stages of mitigation; or
- opportunities for flexibility and the exercise of professional judgment.
Regarding the content of the guidance, some panelists strongly urged that more guidance be included on the monitoring and evaluation aspects of mitigation projects. Two panelists recommended ongoing "life cycle" monitoring to evaluate the effectiveness of mitigation in light of explicit performance criteria and to provide a rationale for corrective action where appropriate. Some panelists suggested that the expanded monitoring and evaluation requirements should include systems to provide for feedback of evaluation results or that a separate budget be designated for monitoring and evaluation to ensure that adequate data be collected to determine project success. Panelists also suggested that the content of the current guidance be expanded to more fully include discussions on uplands species and habitat other than vegetated wetlands, such as open waters, streams, or stream banks. In addition, most of the panelists suggested that the current guidance more fully discuss the functions and values and how to determine the best way to replace them. According to one panelist, once the key functions are determined, general guidance should exist on how to translate the replacement of these functions into compensation ratios and combinations of in-kind and out-of-kind mitigation to ensure that the key functions are replaced. Another panelist suggested that permits should be denied if the functions and values will not be compensated and indicated that this requirement would decrease the likelihood of environmental degradation and increase the likelihood of successfully replacing lost functions. We asked the panelists to estimate the percentage of success that mitigation projects could be expected to achieve in restoring the natural hydrologic conditions and native vegetation, and otherwise supporting native fish and wildlife species, under two circumstances: (1) if the present mitigation guidance were followed and (2) if the guidance were followed after being improved in the ways panelists proposed.
Some panelists expressed concern about providing estimates because of (1) a lack of an empirical basis for any estimate, (2) insufficient first-hand knowledge about how closely the guidance is being followed, (3) insufficient basis for connecting success or failure with the degree to which the guidance was followed, or (4) a lack of knowledge about the competencies of the persons implementing the guidance. The panelists emphasized that any numbers provided would not be reliable, and GAO agrees. Panelists did, however, provide insights into the primary factors, other than the guidance, that could prevent a project from restoring hydrologic conditions, restoring native vegetation, and otherwise supporting native fish and wildlife species. The panel explained that project success could be affected by
- lack of experience or competence of those doing the work, or lack of proper project management;
- cost constraints or inadequate funding;
- poor site selection, poor construction, or improper implementation of the design;
- lack of control and/or lack of attention to surrounding landscape conditions, or external influences from adjacent areas such as urban development, heavy infestations of exotic species, and human and animal impacts;
- unexpected conditions, such as major storms, that are difficult to control or manage or invasive weeds or wildlife species that dominate the site unexpectedly;
- inadequate monitoring for fish and wildlife values and more focus on the easier measurement of hydrology and vegetation success;
- monitoring to determine compliance with the design plan rather than monitoring functions and values, thus failing to account for poor designs;
- lack of available biological materials, such as no seed bank;
- problems in creating some types of wetlands because they are inherently difficult to replicate (peat bogs being the extreme example);
- wetlands that cover extremely small areas, or appropriate land that is not available;
- not ensuring that corrective measures will be taken for failures in the restoration project after construction; or
- not fully restoring lost hydrology or vegetation if mitigation banks are used to compensate for losses in different watersheds.
One panelist noted that a certain percentage of all restorations will fail in the attempt to restore native vegetation and wildlife. According to the panelist, the failure rate for "created" wetlands and other habitats is much higher than for restored sites, so it is important to distinguish the type of site being discussed. Another panelist stated that in some situations, lost functions and values are impossible to replace because of their location within the watershed, the lack of mitigation sites within the watershed, or the type of wetlands that were damaged. A third panelist noted that, in general, restoring "natural hydrologic conditions" is only possible in "restoration" efforts (rather than "creation" or "enhancement" efforts or both), and this is only a portion of the compensation activities undertaken in these programs. According to the panelist, restoration of natural conditions is most likely to succeed when the impacts of projects occur only on the site under restoration. In all other circumstances, the panelist said, the probability of success diminishes regardless of the technical sophistication of the practitioner. Furthermore, restoring native vegetation is theoretically possible only when appropriate natural hydrologic conditions have been established. Therefore, success in this effort cannot exceed the success in hydrologic engineering.
In addition, restoring native fish and wildlife species is more difficult, generally because the surrounding area has been affected, and thus the landscape setting is uncontrollably altered. In the panelist’s view, restoration of a natural community of species on compensatory mitigation sites is exceptionally difficult. We provided the Departments of Transportation and Defense with copies of the draft report for review and comment. The Department of Transportation reviewed the draft report and chose not to provide comments. The Department of Defense, in its comments, stated its view that GAO’s study has shown that the Corps met the mitigation requirements of section 906 of the Water Resources Development Act of 1986. However, we did not review or evaluate the Corps’ overall compliance with section 906, nor did we reach any conclusion in this regard. Additionally, the department clarified that for three projects identified in appendix II for which mitigation had not begun, mitigation is scheduled for later in the construction sequence because site conditions do not allow mitigation to occur earlier. We have added a footnote to the table in appendix II to reflect the Corps’ explanation. In addition, the department raised concerns about the difficulties in comparing the fish and wildlife mitigation guidance of the three programs. Specifically, the department pointed out that the Corps’ two programs are primarily water resource development-oriented, while the Highway Program is oriented to building highways. Additionally, the department said that both the Highway and Civil Works Programs operate on a much longer timeline than the Corps’ Regulatory Program and that the Regulatory Program’s activities are generally on a much smaller scale, rarely approaching the scope of the Civil Works Program. While we agree that the focus of the three programs selected for comparison is different, we believe that the agencies’ programs include similarities in that they are nationwide in scope and provide for mitigation against environmental impacts to fish and wildlife in the course of their construction activities. Additionally, our panelists did not express concern that the differences among the three programs affected their ability to assess the content and format of the three agencies’ fish and wildlife mitigation guidance. A copy of the Department of Defense’s detailed comments is included as appendix VI. We conducted our work from February 2001 through April 2002 in accordance with generally accepted government auditing standards. Details of our scope and methodology are discussed in appendix I. We are sending copies of this report to the Secretaries of Defense and Transportation; the Principal Deputy Assistant Secretary of the Army (Civil Works); and the Administrator, Federal Highway Administration. We will also provide copies to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VIII. The Water Resources Development Act of 2000 (P.L. 106-541, section 224(b)) required GAO to obtain information on the U.S. Army Corps of Engineers’ efforts to mitigate for adverse impacts on fish and wildlife resources and their habitat in the construction of its water resources projects authorized since the Water Resources Development Act of 1986.
In discussions with the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure, we agreed to (1) determine the number of U.S. Army Corps of Engineers’ Civil Works projects for which less than 50 percent of mitigation was completed before the start of project construction and (2) establish a panel of scientific experts to compare the Corps’ Civil Works Program’s national guidance on fish and wildlife mitigation activities with the mitigation guidance for the Corps’ Regulatory Program and with the guidance for the Federal Highway Administration’s Federal-aid Highway Program. To determine the number of the Corps’ water resources projects subject to the mitigation requirement of the 1986 act and the number of those projects not completing 50 percent of the required fish and wildlife mitigation before initiating construction, we formally requested that the Corps provide us with the following information: (1) the universe of projects authorized since the 1986 act; (2) of these authorized projects, the number for which federal construction funds were appropriated; and (3) of the authorized projects receiving federal construction funds, the number that did and did not require a fish and wildlife mitigation plan in accordance with the 1986 act. For those projects requiring a mitigation plan, we asked for the number of projects that had and had not begun construction, the number of projects that had and had not begun mitigation activities, the percentage of mitigation completed before construction began, and the percentage of mitigation completed as of September 30, 2001. We also requested that the Corps provide project-specific information, including project name, location, and purpose or type of project. The Corps solicited the information from its district offices, defining the key measurement points for the districts as follows:

“Construction is initiated when the first non-mitigation related construction contract is awarded. The compensatory mitigation 50-percent completion point occurs in the fiscal year that the district makes expenditures toward the mitigation plan that cumulatively total at least 50 percent of the estimated cost of these activities. The expenditures could consist of hired labor, contracts, etc., as well as lands, easements, rights-of-way, relocations, and disposal areas required for any compensatory mitigation plan identified in the feasibility report.”

Because the congressional committees asked us not to collect original data, we limited our analysis to clarifying any apparent inconsistencies in the Corps’ data with agency officials. The 2000 act requested that we assess the Corps’ Civil Works Program’s mitigation methods compared to those used in other publicly and privately financed mitigation projects and did not specifically identify the other entities. In discussions with committee staffs, we agreed that the scientific panel should review and compare the fish and wildlife mitigation guidance of these entities rather than assessing the methods. Therefore, we needed to (1) identify and select other entities undertaking mitigation activities, (2) obtain the relevant fish and wildlife mitigation guidance from the entities, and (3) establish a scientific panel of experts.
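To make the Corps’ quoted definition concrete, the short sketch below applies the 50-percent completion test to hypothetical expenditure data. This is purely an illustrative aid: the record layout, function name, and dollar figures are our own assumptions and do not represent any Corps data system.

    # Illustrative sketch only (hypothetical data): the 50-percent completion
    # point occurs in the fiscal year in which cumulative mitigation
    # expenditures (hired labor, contracts, lands, easements, and the like)
    # first total at least 50 percent of the estimated cost of the
    # compensatory mitigation plan.
    def fifty_percent_completion_year(expenditures_by_fy, estimated_cost):
        """Return the fiscal year in which cumulative mitigation expenditures
        first reach 50 percent of the estimated cost, or None if not reached."""
        cumulative = 0.0
        for fiscal_year in sorted(expenditures_by_fy):
            cumulative += expenditures_by_fy[fiscal_year]
            if cumulative >= 0.5 * estimated_cost:
                return fiscal_year
        return None

    # Hypothetical project: mitigation estimated at $4.0 million; construction
    # initiated (first non-mitigation construction contract awarded) in FY 1999.
    spending = {1998: 800_000, 1999: 900_000, 2000: 1_500_000}
    completion_fy = fifty_percent_completion_year(spending, 4_000_000)
    print(completion_fy)  # 2000
    print(completion_fy is not None and completion_fy <= 1999)  # False

In the Corps’ data, the corresponding comparison is between the fiscal year in which a project reaches the 50-percent completion point and the fiscal year in which construction is initiated.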
To identify which publicly and privately financed projects should be compared with the Corps’ Civil Works Program, we spoke with representatives of the Corps, the Environmental Protection Agency, the Fish and Wildlife Service, the National Marine Fisheries Service, the Bureau of Land Management, the Forest Service, the Federal Highway Administration, the Federal Aviation Administration, the Federal Transit Administration, and the National Academy of Sciences to obtain suggestions for relevant entities to select. On the basis of these discussions, we selected the Corps’ Regulatory Program and the Federal Highway Administration’s Federal-aid Highway Program for comparison to the Corps’ Civil Works Program. Both programs are national in scope, and some individual construction projects undertaken could be of the same magnitude as those of the Corps’ Civil Works Program. To obtain the fish and wildlife mitigation guidance, we spoke with representatives of the Corps’ Civil Works and Regulatory Programs and the Federal Highway Administration’s Federal-aid Highway Program, as well as the Corps’ Office of Research and Development, to identify the (1) role of national and local/regional mitigation guidance in implementing the agencies’ projects, (2) types of guidance provided to program participants, and (3) guidance the agencies considered to be the key fish and wildlife mitigation guidance. We requested that the agencies provide us with copies of key national policy, procedural, and scientific/technical guidance (including applicable models) on mitigating adverse impacts on fish and wildlife resources and their habitat. We limited our request to national guidance because both the Corps and the Federal Highway Administration rely on local districts, regions, or states to supplement the national guidance to address local environmental considerations, and the potential existed for obtaining voluminous guidance from 38 Corps districts and the 50 states. Reviewing such a volume of guidance in a short time frame would have been unreasonable for a scientific panel. Initially, the agencies provided about 78 documents—or about 5,400 pages—that they considered to be key national policy, procedural, and scientific/technical guidance. Because of the complexity of the issues involved in assessing this mitigation guidance, we employed a consultant as a technical adviser. The adviser reviewed this guidance and identified documents that potentially could be excluded from the panelists’ review. We met with agency representatives to seek agreement on which documents would be essential to review. From those discussions, we decided to provide the panelists a total of about 2,500 pages of guidance in the categories of (1) policy guidance applicable to all agencies, (2) technical guidance applicable to all agencies, (3) Corps’ Civil Works Program guidance, (4) Corps’ Regulatory Program guidance, and (5) Federal Highway Administration guidance. (See appendix III for the guidance documents the panelists reviewed.) To establish our scientific panel of experts, we needed to identify persons who collectively would possess the necessary knowledge, skills, and experiences related to fish and wildlife mitigation and have a general knowledge of the Corps’ Civil Works and Regulatory Programs and/or the Federal-aid Highway Program. The Environmental Protection Agency, the Fish and Wildlife Service, the National Marine Fisheries Service, the National Academy of Sciences, and some of our staff suggested names of potential panelists.
We contacted several of the identified persons, inquired whether they had an interest in serving on the scientific panel, asked them for the names of additional persons whom we might want to consider having on the panel, and received their biographical data. Our technical adviser suggested factors to consider in developing and assessing a pool of candidates, reviewed the list of potential candidates and suggested additional names, and provided recommendations about the size and makeup of the panel. (See appendix VII for a listing of the panel members.) To better ensure the panel’s consistent assessment of the three programs’ fish and wildlife mitigation guidance, we developed an assessment instrument to rate the guidance and included a series of open-ended questions that each panelist would complete. The assessment instrument asked the panelists to rate each program’s guidance for five stages of a mitigation project—determination, design, construction, monitoring, and evaluation. The rating consisted of a numeric score (0 for no guidance to 5 for excellent guidance) for each of five attributes of the guidance (complete, current, clear, broad, and viable) as well as a rating for the overall quality of the guidance for each mitigation stage (a simplified sketch of this rating structure appears below). For each stage, panelists provided narrative justifications for their ratings. Panelists then rated each program’s collective guidance and provided a narrative summary of the strengths and weaknesses of the guidance and the relative quality of the three programs’ guidance. We also asked the panelists to answer a number of open-ended questions dealing with mitigation. Before sending the assessment instrument to the panelists, we asked two mitigation experts, who were familiar with the three programs and our target population of panelists, to conduct an expert review of our assessment instrument. The experts reviewed the questionnaire for clarity and logic and to ensure that the questions were appropriate for the panelists. On October 31, 2001, after we had sent the original material to the panelists, the Corps’ Regulatory Program issued some new mitigation guidance. We subsequently asked the panelists to respond to questions regarding the new guidance, improvements to the mitigation guidance, and estimates of the success of mitigation projects. The panelists provided their preliminary assessments; we compiled the responses and then distributed the compilation to the panelists so that they had an opportunity to review and revise their numeric and narrative responses. We conducted our work from February 2001 through April 2002 in accordance with generally accepted government auditing standards. According to the Corps of Engineers, 47 of its 217 water resources projects authorized since the Water Resources Development Act of 1986 required a fish and wildlife mitigation plan and received construction appropriations. Of these, 28 did not complete at least 50 percent of mitigation before the start of project construction. Of the remaining 19 projects, 7 completed at least 50 percent of mitigation; 2 projects had not started actual construction but had done some mitigation; and 10 projects had not started construction or mitigation as of September 30, 2001. Almost half (21) of the 47 projects are located in three states—California (10), Florida (6), and West Virginia (5). Of the 34 projects starting construction, 16 completed 100 percent of the mitigation as of September 30, 2001, according to the Corps.
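To picture the structure of the assessment instrument described above, the brief sketch below tabulates panelist scores by program, mitigation stage, and attribute and computes the stage averages we compiled. The panelist names and score values are invented for illustration only and are not the panel’s actual responses.

    from statistics import mean

    # Illustrative sketch only: each panelist scored each program's guidance,
    # for each mitigation stage, on five attributes, using 0 (no guidance)
    # through 5 (excellent guidance). All values below are invented.
    PROGRAMS = ("Civil Works", "Regulatory", "Federal-aid Highway")
    STAGES = ("determination", "design", "construction", "monitoring", "evaluation")
    ATTRIBUTES = ("complete", "current", "clear", "broad", "viable")

    ratings = {
        "Panelist A": {(p, s, a): 3 for p in PROGRAMS for s in STAGES for a in ATTRIBUTES},
        "Panelist B": {(p, s, a): 4 for p in PROGRAMS for s in STAGES for a in ATTRIBUTES},
    }

    def average_rating(program, stage):
        """Average one program's scores for one stage across attributes and panelists."""
        return mean(scores[(program, stage, attribute)]
                    for scores in ratings.values()
                    for attribute in ATTRIBUTES)

    for program in PROGRAMS:
        print(program, [(stage, average_rating(program, stage)) for stage in STAGES])

Averages of this kind underlie the stage-by-stage comparisons of the three programs’ guidance summarized in the sections that follow.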
Nearly half (13) of the 28 projects not completing at least 50 percent of mitigation before the start of construction were flood control projects; 11 were navigation-type projects; 3 were bluff stability-type projects; and the remaining project was an irrigation project. Some of the mitigation activities planned for these 28 projects included acquiring lands and obtaining easements; creating wetlands; planting seedlings, trees, shrubs, and other vegetation; creating artificial reefs for shore protection; and protecting slopes with stone.

[Project-specific table omitted; footnote to the table: Mitigation is scheduled for later in the construction sequence because site conditions do not allow mitigation to occur earlier.]

The panelists reviewed the following guidance documents:

Executive Order 11990, Protection of Wetlands (1977).
CEQ Regulations on the National Environmental Policy Act (1978) [entire document].
FWS mitigation policy (1981).
Section 404(b)(1) Guidelines (1980).
EPA/Corps memorandum of agreement concerning section 404(b)(1) guidelines (1990).
Joint FWS/NMFS/NOAA Regulations on the Endangered Species Act.
Memorandum: Federal Interagency Memorandum of Understanding for Implementation of the Endangered Species Act (1994).
Multi-agency Guidance on Mitigation Banking (1995) [entire document].
NMFS Regulations on Essential Fish Habitat (1997).
Wetlands Engineering Handbook, Corps of Engineers (2000) [paper copy and compact disc].
“EXHGM: Expert Hydrogeomorphic Approach,” Corps of Engineers’ Fact Sheet (2000).
“Habitat-Net: An Interactive Network for Habitat Evaluation Professionals,” Corps of Engineers’ Fact Sheet (2000).
“WIMS: Wildlife Information Management System,” Corps of Engineers’ Fact Sheet (2000).
“Erosion Control for Restoration and Environmental Benefits,” Corps of Engineers’ Fact Sheet (2000).
“Wildlife Habitat Restoration and Management,” Corps of Engineers’ Fact Sheet (2000).
Examples of Performance Standards for Wetland Creation and Restoration in Section 404 Permits and an Approach to Developing Performance Standards, Corps of Engineers (1999).
Case Study: Application of the HGM Western Kentucky Low-Gradient Riverine Guidebook to Monitoring of Wetland Development, Corps of Engineers (1999).
Restoration of Mangrove Habitat, Corps of Engineers (2000) [entire document].
Design and Construction of Docks to Minimize Seagrass Impacts, Corps of Engineers (1999).
Guidelines for Conducting and Reporting Hydrologic Assessments of Potential Wetland Sites, Corps of Engineers (2000).
Installing Monitoring Wells/Piezometers in Wetlands, Corps of Engineers (2000).
Importing Plant Stock for Wetland Restoration and Creation: Maintaining Genetic Diversity and Integrity, Corps of Engineers (2000).
Evaluating Environmental Effects of Dredged Material Management Alternatives—A Technical Framework, EPA and Corps of Engineers (1992).
Digest of Water Resource Policies and Authorities (1999) [chapters 3 and 19].
Planning Guidance Notebook (2000).
Corps of Engineers NEPA Procedures (1988).
Economic and Environmental Principles and Guidelines for Water and Related Land Resources Implementation Studies (1983) [pages iii-ix, 107-137].
Cost Effectiveness Analysis for Environmental Planning: Nine EASY Steps (1994).
FWS/Corps Agreement on funding FWCA activities (1982) [entire document].
Administrative Regulations, 33 C.F.R. Parts 320, 322, 323, 325, and 330.
Regulatory Guidance Letter 93-2, on Flexibility of the 404(b)(1) Guidelines and Mitigation Banking.
Regulatory Guidance Letter 01-1, on Guidance for the Establishment and Maintenance of Compensatory Mitigation Projects Under the Corps Regulatory Program Pursuant to Section 404(a) of the Clean Water Act and Section 10 of the Rivers and Harbors Act of 1899 (2001) [entire document].
Standard Operating Procedures.
Mitigation of Impacts to Wetlands and Natural Habitat, 23 C.F.R. Part 777 (2000).
Memorandum: Participation in Funding for Ecological Mitigation (1995).
Memorandum: Guidelines for the Consideration of Highway Project Impacts on Fish and Wildlife Resources (1989).
Fiscal Year 2001 Performance Plan (2000).
Memorandum: Financial Assurances for Wetland Mitigation Banks (1997).
Memorandum: Eligibility of “Historic Wetlands” for ISTEA Funding (1997).
Memorandum: Use of Private Wetland Mitigation Banks as Compensatory Mitigation for Highway Project Impacts (1995) [entire document].
Memorandum: Funding for Establishment of Wetland Mitigation Banks (1994).
Memorandum: Wetland Delineation and Mitigation (1994) [entire document].
DOT Order 5660.1A, Preservation of the Nation’s Wetlands (1978) [entire document].
NCHRP Report 379: Guidelines for the Development of Wetland Replacement Areas, Transportation Research Board, National Research Council (1996).
Applying the Section 404 Permit Process to Federal-Aid Highway Projects, FHWA, COE, EPA, FWS, NOAA (1988).
Highways and Wetlands: Compensating Wetlands Losses (1986) [entire document].

The panel of scientific experts was tasked with comparing the national fish and wildlife mitigation guidance of the Corps of Engineers’ Civil Works Program, the Corps’ Regulatory Program, and the Federal Highway Administration’s Federal-aid Highway Program. In assessing this guidance, the panel was to focus on various attributes of the guidance and the five stages of a mitigation project. The panel provided numeric ratings for the various mitigation stages, ranging from “0” for no guidance, through “1” for poor, “2” for fair, “3” for moderate, and “4” for good, to “5” for excellent guidance. Summaries of the panelists’ numeric and narrative responses follow. The determination stage is when the agencies decide (a) whether compensatory mitigation is required for project impacts and, if so, (b) the amount of mitigation that will be required. Overall, the ratings for the determination stage were the third highest among the five stages. This stage includes two separate decisions—whether mitigation is required and, if so, how much. The panelists felt that, in general, the guidance did a better job on the first decision than on the second. Most panelists cited the existence of governmentwide guidance and how it contributes to determining whether mitigation is required. They indicated that this determination is aided by a clear, long-standing sequential definition of “mitigation” that requires avoidance first, then minimization of impact, and finally mitigation of unavoidable impacts. According to one panelist, however, while the governmentwide guidance provides definitions and indications of desired outcomes, it stops short of specifying exactly when or how a program should make a case-specific determination that compensatory mitigation is necessary and/or how much should be required.
This panelist indicated that none of the three programs has explicit guidance for determining whether compensatory mitigation is required and that the outcome apparently is more a result of due diligence and quality of staff than quality of regulatory guidance. Various panelists indicated that strengths of the Civil Works Program’s guidance include detailed planning formulation guidance and state-of-the-art planning tools, emphasis on the ecosystem approach and the inclusion of adjacent lands, or the emphasis on resource evaluation to determine the mitigation needed. Other panelists, however, cited weaknesses in the Civil Works’ guidance, including the confusion caused by considering economic tradeoffs in determining which mitigation alternative to select, the lack of currency or consistency in the information included, or the lack of assurance that the resources and functions lost by development will be replaced. Regarding the Regulatory Program’s guidance, several panelists commented favorably on the program’s new October 2001 guidance because it strengthens the importance of watershed context and functionality of impacted areas in decision making, emphasizes an ecosystem approach, integrates financial requirements into the permit, recognizes the need for adaptive management, better explains the criteria for determining compensation ratios, provides a more specific mechanism for determining exactly how much mitigation will be required, or details success criteria. Other panelists cited weaknesses, however, including that the detailed guidance was not adequately summarized and presented for ease of use, that the guidance placed too much discretion at the Corps’ district level for decisions, that it lacked currency or consistency in materials, or that the new guidance conflicts with other Regulatory guidance on the issue of preservation. Panelists identified some strengths and weaknesses of the Highway Program’s guidance. Among the strengths cited, panelists indicated that the guidance appropriately emphasized aquatic resources and other habitats with unique or important values under federal law, incorporated ratio goals for wetlands replacement that were considered noteworthy, was more current than the other two programs’ guidance because of the emphasis on using consolidated mitigation sites in the form of mitigation banks when appropriate, emphasized resource evaluation to determine mitigation need, more effectively conveyed the information necessary to fully understand the process for determining whether compensatory mitigation is required, called for a compensatory mitigation ratio of 1.5 to 1, or included the preference to fund and then monitor mitigation banks as a means of ensuring the long-term success of the mitigation to increase ecosystem viability. One panelist emphasized the strengths of the Highway Program’s guidance by indicating that it, in sharp contrast to the Corps’ guidance, is clear, concise, summarized in an understandable way, and makes it clear who has what responsibilities. Further, the panelist said that the guidance applies to a range of systems, addresses wetlands and other habitats in an appropriate manner, and makes it clear that mitigation is required and will be done, in contrast to the Corps’ guidance, which leaves this open. Two panelists, however, indicated that the Highway Program’s guidance in this stage was weak because it did not really focus on determining the need for or the amount of mitigation required.
The design stage includes all preconstruction activities once the decision on the need and extent of compensatory mitigation has been made. It includes the necessary features and performance characteristics of the mitigation project. Overall, the ratings for the design stage were the second highest among the five stages, but panelists provided relatively few narrative comments about this stage. As one panelist pointed out, the design stage is considered more a technical element of mitigation and not as subject to policy guidance as are other stages, such as the determination and evaluation stages. Panelists’ generalized comments related to the Wetlands Engineering Handbook and included both strengths and weaknesses. Specifically, some panelists believed that the handbook includes detailed background on wetlands and statistical evaluation techniques; is relatively complete, reasonably current, and very clear; or, as one panelist put it, provides very good technical information on the design, construction, and monitoring of wetland ecosystems. According to the panelist, the information is timeless and remains a standard in the field of mitigation. Yet, other panelists cited weaknesses with the same handbook. Namely, one panelist said that it does not provide a comprehensive explanation of how to design a replacement wetland and puts emphasis only on one technique to evaluate functions, even though the technique has been criticized by wetlands professionals. The panelist further stated that the handbook is very complicated to follow, with its overemphasis on statistical techniques over basic design procedures. Another panelist criticized the handbook as addressing the structural aspects of viability but paying little attention to long-term habitat function. Panelists pointed out that the guidance at this stage was weak in the areas of non-wetland fish and wildlife and upland mitigation, such as open waters, streams, stream banks, and uplands; in fact, the guidance was heavily focused on wetland mitigation. Regarding the Civil Works Program’s guidance, two panelists commented on the strength of the guidance. One panelist indicated that the guidance generally includes a relatively complete description of the parameters needed to design a successful and effective wetlands mitigation project and has sufficient criteria on uplands, land use, and other offsite factors. Another panelist said the guidance has enough information related to policy that is pertinent in the selection of the conceptual design and includes case studies that are informative and provide constructive insight into applying the principles and techniques to other types of mitigation work. Three panelists, however, cited weaknesses in the Corps’ guidance for this stage. Among the weaknesses cited were that practitioners will need more specific technical guidance because the guidance addresses various administrative attributes of the process rather than the mitigation aspects of projects; the guidance includes a few examples of projects but should include more, and the examples should be discussed in greater detail; the guidance sets out how the design criteria should be applied but is sometimes confusing and overly complex in its presentation; or the guidance lacks specific technical information other than to address simple hydrology and soil factors. Two panelists commented about the Regulatory Program’s new guidance when reflecting on this stage.
One panelist indicated that the new guidance strengthened existing guidance by emphasizing the need to integrate buffer zone design into the plans. Another panelist indicated that the new approach cited in the guidance potentially provides a better mechanism for designing a successful replacement project; that the new guidance begins to set out the user-friendly, step-by-step set of instructions that previously had been missing; and that the new guidance makes the design stage guidance clearer and broader in at least some respects and clarifies some of the previous vagueness. While basically complimenting the new guidance, however, this panelist indicated that in other respects, the new guidance is not an improvement because it provides more flexibility than it should—for example, awarding credits for preservation—which does not serve to fulfill the goal of replacing lost functions and values. Panelists cited some strengths in the Highway Program’s guidance for this stage. One panelist, for example, indicated that the guidance was the most complete, current, and clear guidance on the design of mitigation projects, although it tended to include too much emphasis on wetland banking. Further, this panelist indicated that the guidance provides excellent detail and good examples of the design stage, offers several alternative methods to assess wetland functions, and provides an excellent tool for learning how to design and construct wetland replacement projects. Another panelist indicated that while the guidance primarily relates to wetlands, it is very user-friendly both in information and format; the overall process is clear, logical, and comprehensive; from a viability perspective, the guidance is more effective because it links the need to replace lost functions at each step in the process; and the discussion on applying a cost analysis is more instructive and easier to apply than the Corps’ guidance. Yet, this panelist also indicated that the guidance suffers from the absence of information gained and lessons learned over the last 5 years and that some of the guidance conflicts with other documents. Finally, a third panelist indicated that the guidance provided good information to help in developing mitigation designs, promoted site analysis, and included sound and consistent logic for investigating site characteristics needed for sustaining wetlands. However, this panelist also indicated that information on evaluating mitigation designs, technical guidance, and standards for measuring success is missing. The construction stage includes land acquisition as well as all activity on the site until the mitigation project is turned over to the nonfederal sponsor. Construction activities include building structures, creating habitat, and introducing animal and plant material. Overall, the ratings for the construction stage were the highest among the five stages. In general, the two key technical guidance documents—the Corps of Engineers’ Wetlands Engineering Handbook and the Transportation Research Board’s Guidelines for the Development of Wetland Replacement Areas—received compliments from the panel. In particular, one panelist indicated that although these guides address primarily wetlands, the two documents together provide a significant body of technical information. Another panelist indicated that the guidance is relatively current and on target with the best professional knowledge in the area of wetland replacement construction.
Further, this panelist indicated that the guidance provides good, specific information on a broad range of features, including construction of water control structures, soils, how to ensure proper hydrology, and the sequence of construction. However, this panelist pointed out that the guidance does not discuss what happens after construction or how to do site acquisition. While panelists commented extensively about the Wetlands Engineering Handbook, they offered very few additional comments about the strengths and weaknesses of the Civil Works Program’s guidance. One panelist did indicate, however, that the overall materials are not always current with appropriate techniques, while another panelist pointed out that the guidance lacks practical information about buildability and construction and that the Planning Guidance Notebook lacks helpful information other than rough guidelines on timing. Panelists generally limited their comments about the Regulatory Program’s guidance. Two panelists commented on the program’s new guidance. Specifically, one panelist indicated that the new guidance added information related to the timing in the construction stage. Another panelist indicated that the new guidance potentially provides an effective mechanism for guiding the design and construction of a mitigation project, with the analysis set forth in an organized set of procedures for guiding construction steps. In addition, this panelist indicated that the new guidance goes a long way toward establishing performance standards. In considering the Regulatory Program’s guidance, this panelist indicated that the amount of information was overwhelming and unnecessary unless it is meant to serve as a general primer to anyone with marginal expertise on how to create or restore a wetland. Further, this panelist indicated that “its use as a resource tool is limited because too much information must be digested in order to get an answer or specific guidance.” Regarding the Highway Program’s guidance, one panelist indicated that the program has superior guidance because of the Guidelines for the Development of Wetland Replacement Areas. According to the panelist, this document provides additional specifications and step-by-step guidance on wetland construction, over and above the relatively comprehensive construction details provided in the Wetlands Engineering Handbook. Another panelist indicated that the guidance is clear and well organized and specifically lists project construction techniques that work and those that have not. The panelist further indicated that the guidance provides an excellent list of plants that have been incorporated in successful compensatory mitigation sites. Finally, a third panelist indicated that the Highway Program’s guidance, as it does in all stages, makes it clear that competent professional decisions by experienced personnel will be used to answer questions and that this is not open to negotiation. Additionally, this panelist indicated that the Highway Program’s guidance is strong in all categories and easily understood. The monitoring stage includes periodic assessments of the mitigation site before, during, and after construction. A monitoring plan establishes the requirements for the periodic assessments, the extent of federal agency responsibility, and the applicability to others involved in the mitigation project. Overall, the ratings for the monitoring stage were the second lowest among the five stages.
This rating reflects the panel’s general opinion that the guidance emphasizes the determination stage and, to a lesser extent, the other earlier stages of mitigation at the expense of the monitoring and evaluation stages. Yet, panelists indicated that, overall, the guidance material addressing monitoring was reasonably well developed and that, since the programs use the same basic reference material, they do a fairly good job of addressing the issue. Three panelists specifically mentioned the Wetlands Engineering Handbook as providing a good reference for setting performance criteria and providing methods for sampling everything from soils, hydrology, and vegetation to birds, fish, and invertebrates. Several panelists, however, pointed out that the coverage of monitoring activities fails to provide sufficient rationale or detail to encourage this critical stage of the mitigation process; that the material does not require upward reporting of the results so that top agency management can monitor both project and program performance with regard to the degree of success of mitigation; or that no real specific guidance on the site-specific design of a monitoring program exists in the guidance. Most panelists did not comment on the strengths of the Civil Works’ guidance, but several panelists noted shortcomings in the guidance. For example, one panelist mentioned that the guidance is technically sound but addresses only wetlands for the most part and is not programmatically helpful because it lacks the details on who should do what with the reports. Two other panelists indicated that the monitoring discussions focused too much on the cost considerations of monitoring, while another panelist indicated that the guidance downplays the need for monitoring and that the guidance tends to be dated. Regarding the Regulatory Program’s guidance, panelists provided limited comments. In commenting on the October 2001 guidance, three panelists indicated that it provides a stronger emphasis on the importance of monitoring, but “permanent” monitoring is not required; that this guidance is more explicit that monitoring should be included as a permit condition; or that it authorizes the extension of the monitoring period where appropriate. Two panelists commented that the Regulatory Program’s guidance on monitoring needed to be strengthened if continued effectiveness of even state-of-the-art mitigation plans were to be ensured or that the guidance has some useful components but fails to provide any type of standardized approach. Panelists indicated that the Highway Program’s technical publications give extensive treatment of monitoring as an essential element of successful mitigation; the materials were considered excellent as they were complete, relatively current, clear, and understandable; some of the information provides a very good overview of monitoring and outlines strategies for defining success; or the information is clearly presented and provides enough technical information to be informative without being overly technical. One panelist, however, indicated that some of the guidance could be adapted to address fish and wildlife and upland habitats but does not do so, while another panelist indicated that the guidance provides an overview of what to monitor but does not provide any additional specifics.
The evaluation stage includes three elements: (1) determining the overall effectiveness and success of the mitigation project; (2) determining what to do if a project is shown by the monitoring program, or otherwise, not to be a complete success; and (3) determining the implications for improving future mitigation projects. Overall, the ratings for the evaluation stage were the lowest among the five stages. Panelists considered three separate aspects of project evaluation—success of the project, capacity to take corrective action on an unsuccessful project, and ability to make changes in future projects. From a positive perspective, various panelists thought the guidance emphasized why performance criteria are needed and who is responsible for the assessment, or they said the guidance contained helpful examples of performance standards. However, several panelists generally felt the guidance was weak among all three agencies. Two panelists thought the Highway Program’s guidance was good but that the guidance for the Corps’ two programs was not useful in any way. Panel members disagreed over whether the Corps’ new Regulatory Program’s guidance made significant improvements in the evaluation stage. Regarding the Civil Works Program’s guidance, various panelists identified the following as strengths of the guidance: it is current and reflects the latest technical knowledge; it talks about how to develop performance standards for a particular site; and it includes very pertinent information regarding wetland monitoring and evaluating success criteria. Various panelists cited weaknesses of the guidance in that evaluation is given short shrift, is not addressed in a useful way, or lacks much discussion of specifics. Among the specific weaknesses cited by the panelists were the following: the guidance provides little or no encouragement or support for continuing evaluation and correction of individual Civil Works project performance deficiencies or for developing additional guidance based on lessons learned from completed projects; the guidance does not encourage Corps’ offices to undertake routine or systematic evaluations of existing project performance with the intent of either identifying on-going performance deficiencies or providing “lessons learned” to assist in the planning of mitigation or other project features; the guidance includes a laundry list of factors to measure but does not require corrective actions, nor does it establish a feedback mechanism; the guidance does not include impacts on natural systems in surrounding land and water areas as part of the evaluation; and the guidance does not contain much discussion on the roles and responsibilities of various parties. Regarding the strengths of the Regulatory Program’s guidance, one panelist said it gives extensive technical guidance for developing evaluation criteria. Two panelists said the recent Regulatory Program’s guidance enhances the evaluation stage guidance. One of those panelists said that the guidance attempts to provide more definition to the components of evaluating the effectiveness and success of mitigation, and the other panelist said that the guidance potentially provides an effective means of tracking project success. Regarding weaknesses in the Regulatory Program’s guidance, various panelists said the guidance basically does not address evaluation in any useful way or continues to need strengthening with regard to the evaluation stage.
According to one of these panelists, while the Corps talks about evaluation, it has “not provided any method to use such evaluation in corrective actions either on the current or future mitigation projects.” Regarding the Highway Program’s guidance, various panelists identified the following strengths: it includes a recommended assessment method; it addresses fixing problems in mitigation efforts; it allows for extending the monitoring period if the project’s goals have not been achieved at the completion of the established period; it provides funding for additional restoration activities if needed; it recommends maintenance for 3 to 5 years or longer to ensure the project’s success; it recommends a liberal budget for expected and unexpected maintenance costs, with 2 to 3 percent of the budget in reserve; or it makes the effort to see that learning is incorporated into future efforts and to “fix” projects that were not successful. Two panelists, however, said that the guidance is silent on project evaluation or that evaluation is not covered to any significant degree. In this assessment, panelists were to consider whether the three programs’ guidance included, for example, designation of tasks and responsibilities; ranges of mitigation alternatives; examples and cross-references; discussions of quality control, feedback, and reporting; or measures of success. For four of the five mitigation stages, panelists’ average ratings for completeness were higher for the Highway Program’s guidance than for either of the Corps’ two programs’ guidance. Several panelists commented that the collective guidance emphasized wetlands replacement too heavily at the expense of considering other habitats that support fish and wildlife. One panelist reflected that while none of the programs did a good job of defining the amount of mitigation required, the Highway Program’s guidance was the most detailed and was written in a user-friendly, step-by-step fashion, while the Civil Works’ guidance provided few technical details and emphasized cost over the evaluation of success. Another panelist indicated that the Highway Program’s guidance was more complete because it set forth the range of circumstances in which fish and wildlife impacts should be mitigated and because its evaluation guidance specifically allows for the extension of the monitoring period if the project’s goals have not been achieved. In contrast, this panelist indicated that the Civil Works’ guidance falls short of identifying who should do the monitoring, who should receive the monitoring reports, and who should bear the cost of additional or off-site monitoring; and, finally, that the guidance does not include impacts on natural systems in surrounding land and water areas. In discussing the Corps’ Regulatory Program’s recently issued guidance, one panelist indicated that the new guidance enhances the existing guidance, particularly in the areas of determination and evaluation, as there is a new emphasis on the ecosystem approach to mitigation. The panelist further stated that the new guidance gives more criteria for determining compensation ratios and details the components of a compensatory mitigation plan and success criteria to evaluate its success. In this assessment, panelists were to consider whether the three programs’ guidance reflected current laws and regulations and up-to-date technical knowledge.
In general, the panelists did not provide many comments related to the currency of the three programs’ guidance, and, with the exception of the construction stage, panelists rated the currency of the guidance similarly among the three programs. One panelist specifically noted this similarity in the currency of the three programs’ guidance, while another panelist did not feel as if any of the programs presented a complete, current picture of the entire process of determining what type of mitigation is needed, designing and constructing the site, and then monitoring and evaluating the project’s success. The panelist further indicated that all three programs rely on a basic set of policy guidance that may not be up-to-date with current thinking about wetland replacement, and the programs rely on technical guidance that is not always in tune with current thinking. Finally, another panelist indicated that none of the programs’ guidance is as up-to-date as it might be on the effectiveness of mitigation efforts and that much of the material is dated and, while still conceptually good, does not address current techniques related to mitigation in many instances. Three panelists commented that the Regulatory Program’s new guidance overall contributes to currency in that it is more in line with current technical findings by, among other things, including ecologically based success criteria. In this assessment, panelists were to consider whether the three programs’ guidance was clear on duties, responsibilities, and the distinction between required and discretionary actions and whether it was logically organized. For all five mitigation stages, panelists provided a lower average rating for the clarity of the Civil Works Program’s guidance than for the Corps’ Regulatory Program’s guidance and the Highway Program’s guidance. One panelist indicated that the Corps’ Civil Works’ guidance was the clearest in its detail for determining the need for mitigation, while the Highway Program’s guidance was the clearest on the design of mitigation projects and was relatively current and understandable with respect to the monitoring stage. Another panelist, however, indicated that the Civil Works’ guidance is compromised by the less-than-clear inclusion of cost considerations, while the Highway Program’s guidance includes explicit guidance on evaluation, including a recommended assessment method. Another panelist criticized the Civil Works’ guidance as providing a general listing of what will be required and the procedures for making the determination but falling short of providing a clear explanation of the process. Also, related to monitoring, this panelist said that the Civil Works’ guidance does not make clear who should do the monitoring, who should receive the reports, or who bears the cost of any additional or modified monitoring. Conversely, this panelist indicated that the Highway Program’s guidance more effectively conveys the information necessary to fully understand the process for determining whether compensatory mitigation is required and, if so, how much. While most of the panelists indicated that the Regulatory Program’s new guidance contributed to the overall clarity, one panelist indicated that the new guidance was clearer than other Regulatory guidance but did not improve the body of material significantly and raised additional confusion.
The panelist indicated that the confusion arose because certain sections of the new guidance were poorly written and difficult to interpret. In this assessment, panelists were to consider whether the guidance for the three programs was broad in its subject matter coverage. Panelists considered the breadth of the three programs’ guidance as it related to the scope of the mitigation impacts and whether hydrology, vegetation, fish and wildlife species, adjacent lands, and wetlands were addressed. For this attribute, the panelists’ average ratings were higher for the Highway Program’s guidance than for the Civil Works’ guidance. One panelist, however, criticized all three programs’ guidance as not being particularly broad because it covers only wetland habitat and not adjacent uplands and because it focuses more on restoring hydrology and vegetation than on direct design elements to deal with the loss of fish and wildlife species. This same panelist indicated that two of the guidance documents provide good specific information on a broad range of features, including construction of water control structures, soils, how to ensure proper hydrology, and the sequencing of construction. One panelist indicated that the Corps’ Regulatory Program’s new guidance better explains when off-site mitigation is appropriate and that it ensures that the compensatory mitigation project will include design elements that deal with the entire ecosystem. In this assessment, panelists were to consider whether the guidance for the three programs presents sufficient information to best ensure the success of the project. Panelists considered whether the guidance addressed the long-term viability of the ecosystem, for example, the survivability of natural and man-made systems into the future. Assessing the viability attribute resulted in the widest variance in the ratings among the panelists. For two stages—determination and evaluation—most of the panelists rated the Civil Works Program’s guidance lower than the Highway Program’s guidance. For two other stages—design and monitoring—most panelists rated the two programs’ guidance the same. For the remaining stage—construction—an equal number of panelists rated the Civil Works Program’s guidance the same as or lower than the Highway Program’s guidance. One panelist indicated that the guidance for the evaluation stage for all three programs does not provide confidence that completed projects will successfully meet their performance objectives. Panelists’ narrative comments generally did not address weaknesses in the Civil Works’ guidance or strengths in the Highway Program’s guidance. One panelist, however, criticized the Civil Works’ guidance because it contains no requirement to reconsider the proposed project if the compensatory mitigation project is not likely to succeed, because the information in the guidance is unlikely to lead to the replacement of habitat losses in at least some instances, and because the guidance does not consider the impact of the mitigation project on adjacent lands. Conversely, this panelist indicated that the Highway Program’s guidance was more effective because it calls for a compensatory mitigation ratio of 1.5 to 1, it clearly states that the no net loss goal applies only to wetlands, and it allows funding for the establishment period to increase the likelihood of project success.
One panelist indicated that the new Regulatory Program’s guidance provided more definitive instructions on how to determine mitigation ratios and types of mitigation and addressed the long-term viability of mitigation through establishing success criteria, while another panelist pointed out that strengthening the financial assurances requirements will also improve a project’s chance for long-term success.

Robert P. Brooks, Ph.D., Director, Penn State Cooperative Wetlands Center and Professor of Wildlife and Wetlands, Penn State University
G. Edward Dickey, Ph.D., Senior Adviser, Dawson and Associates and Cassidy and Associates
Ellen Gilinsky, Ph.D., Manager, Virginia Water Protection Permit Program, Virginia Department of Environmental Quality
Carl Hershner, Ph.D., Director, Center for Coastal Resources Management, and Associate Professor, School of Marine Science, Virginia Institute of Marine Science
Robert G. Hoyt, Esq., Principal and Founding Partner, EcoLogix Group, Inc.
Alan Wentz, Ph.D., Group Manager for Conservation Programs, Ducks Unlimited, Inc.

In addition to the above, Nancy S. Bowser, James M. Fields, H. Brandon Haller, and Rosellen McCarthy made key contributions to this report.
The U.S. Army Corps of Engineers must mitigate potential damage to fish and wildlife caused by dam construction, harbor dredging, and other projects. In the past, the Corps has acquired lands to replace lost habitat, created wetlands, or planted vegetation to stabilize soil and prevent erosion. The Corps’ Civil Works Program deals with commercial navigation and flood damage reduction, while its Regulatory Program oversees privately financed projects that affect water and land resources. According to the Corps of Engineers, 47 of the 217 water resources projects authorized since enactment of the Water Resources Development Act of 1986 required a fish and wildlife mitigation plan and received construction appropriations. Of these 47 projects, 28 completed less than 50 percent of the mitigation before construction began. Of the remaining 19 projects, 7 completed at least half of mitigation before initiating construction; 2 had not started construction but had done some mitigation; and 10 had not begun construction or mitigation. As of September 2001, 16 of the 34 projects where construction had begun had completed all of the mitigation. A panel of scientific experts rated the overall quality of the national fish and wildlife mitigation guidance of the Corps’ two programs and of the Federal Highway Administration’s Federal-aid Highway Program similarly. Although some panelists commended the program guidance for its clarity, currency, and inclusion of ample technical guidance, other panelists were more critical, noting that the guidance emphasized the determination and design stages to the detriment of the monitoring and evaluation stages, emphasized wetlands to the detriment of other lands, or failed to require corrective actions when projects did not succeed. Panelists expressed concerns about estimating, on the basis of the guidance alone, the success of the mitigation projects in restoring the natural hydrology and native vegetation and in supporting native fish and wildlife species. Furthermore, factors other than guidance affect mitigation projects, such as major storms that are difficult to control, or invasive weeds or wildlife species that unexpectedly dominate the site.